From 6212edab030f494e87970cfdb541d5d37281037a Mon Sep 17 00:00:00 2001 From: Richard Theis Date: Fri, 4 Aug 2023 15:08:13 -0500 Subject: [PATCH] Update conformance results for v1.26/ibm-openshift (#2711) Red Hat OpenShift on IBM Cloud conformance results updated for version 4.13.5. Signed-off-by: Richard Theis Signed-off-by: cmondragon --- v1.26/ibm-openshift/PRODUCT.yaml | 2 +- v1.26/ibm-openshift/README.md | 2 +- v1.26/ibm-openshift/e2e.log | 67496 ++++++++++++++--------------- v1.26/ibm-openshift/junit_01.xml | 14148 +++--- 4 files changed, 40525 insertions(+), 41123 deletions(-) diff --git a/v1.26/ibm-openshift/PRODUCT.yaml b/v1.26/ibm-openshift/PRODUCT.yaml index ee267786ca..44e0f0364f 100644 --- a/v1.26/ibm-openshift/PRODUCT.yaml +++ b/v1.26/ibm-openshift/PRODUCT.yaml @@ -1,6 +1,6 @@ vendor: IBM name: Red Hat OpenShift on IBM Cloud -version: 4.13.0 +version: 4.13.5 website_url: https://www.ibm.com/cloud/openshift documentation_url: https://cloud.ibm.com/docs/openshift product_logo_url: https://raw.githubusercontent.com/ibm-cloud-docs/openshift/master/images/logo-red-hat-openshift-on-ibm-cloud-light.svg diff --git a/v1.26/ibm-openshift/README.md b/v1.26/ibm-openshift/README.md index 0af991e187..5dce7a91fd 100644 --- a/v1.26/ibm-openshift/README.md +++ b/v1.26/ibm-openshift/README.md @@ -26,7 +26,7 @@ $ ibmcloud oc cluster create vpc-gen2 --name conformance --version 4.13_openshif Go to [IBM Cloud catalog](https://cloud.ibm.com/catalog?category=containers#services) and select `Red Hat OpenShift on IBM Cloud` to create a cluster. From the -cluster creation UI, select version 4.13.0 and choose either classic or VPC +cluster creation UI, select version 4.13.5 and choose either classic or VPC infrastructure. Then choose an appropriate location and worker pool configuration. Finally, give the cluster a name, such as `conformance`, and select `Create`. diff --git a/v1.26/ibm-openshift/e2e.log b/v1.26/ibm-openshift/e2e.log index 14ffb641ba..fa6fa35b69 100644 --- a/v1.26/ibm-openshift/e2e.log +++ b/v1.26/ibm-openshift/e2e.log @@ -1,8 +1,8 @@ -I0612 20:39:54.446640 23 e2e.go:126] Starting e2e run "252e5f2f-6715-440e-b971-87933460a116" on Ginkgo node 1 -Jun 12 20:39:54.529: INFO: Enabling in-tree volume drivers +I0727 01:27:41.641437 20 e2e.go:126] Starting e2e run "d18faff3-626a-4ea4-87bd-253935adf598" on Ginkgo node 1 +Jul 27 01:27:41.662: INFO: Enabling in-tree volume drivers Running Suite: Kubernetes e2e suite - /usr/local/bin ==================================================== -Random Seed: 1686602393 - will randomize all specs +Random Seed: 1690421261 - will randomize all specs Will run 368 of 7069 specs ------------------------------ @@ -10,18917 +10,15821 @@ Will run 368 of 7069 specs test/e2e/e2e.go:77 [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77 -Jun 12 20:39:55.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -E0612 20:39:55.172336 23 progress.go:80] Failed to post progress update to http://localhost:8099/progress: Post "http://localhost:8099/progress": dial tcp [::1]:8099: connect: connection refused -Jun 12 20:39:55.173: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable -Jun 12 20:39:55.297: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready -Jun 12 20:39:55.358: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) -Jun 12 20:39:55.358: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
-Jun 12 20:39:55.358: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start -Jun 12 20:39:55.373: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'ibm-keepalived-watcher' (0 seconds elapsed) -Jun 12 20:39:55.373: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'ibmcloud-block-storage-driver' (0 seconds elapsed) -Jun 12 20:39:55.373: INFO: e2e test version: v1.26.3 -Jun 12 20:39:55.377: INFO: kube-apiserver version: v1.26.3+b404935 +Jul 27 01:27:41.790: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:27:41.793: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Jul 27 01:27:41.841: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready +Jul 27 01:27:41.897: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Jul 27 01:27:41.897: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. +Jul 27 01:27:41.897: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Jul 27 01:27:41.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'ibm-keepalived-watcher' (0 seconds elapsed) +Jul 27 01:27:41.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'ibm-vpc-block-csi-node' (0 seconds elapsed) +Jul 27 01:27:41.926: INFO: e2e test version: v1.26.6 +Jul 27 01:27:41.930: INFO: kube-apiserver version: v1.26.6+f245ced [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77 -Jun 12 20:39:55.377: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 20:39:55.397: INFO: Cluster IP family: ipv4 +Jul 27 01:27:41.930: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:27:41.945: INFO: Cluster IP family: ipv4 ------------------------------ -[SynchronizedBeforeSuite] PASSED [0.230 seconds] +[SynchronizedBeforeSuite] PASSED [0.161 seconds] [SynchronizedBeforeSuite] test/e2e/e2e.go:77 Begin Captured GinkgoWriter Output >> [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77 - Jun 12 20:39:55.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - E0612 20:39:55.172336 23 progress.go:80] Failed to post progress update to http://localhost:8099/progress: Post "http://localhost:8099/progress": dial tcp [::1]:8099: connect: connection refused - Jun 12 20:39:55.173: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable - Jun 12 20:39:55.297: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready - Jun 12 20:39:55.358: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) - Jun 12 20:39:55.358: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
- Jun 12 20:39:55.358: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start - Jun 12 20:39:55.373: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'ibm-keepalived-watcher' (0 seconds elapsed) - Jun 12 20:39:55.373: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'ibmcloud-block-storage-driver' (0 seconds elapsed) - Jun 12 20:39:55.373: INFO: e2e test version: v1.26.3 - Jun 12 20:39:55.377: INFO: kube-apiserver version: v1.26.3+b404935 + Jul 27 01:27:41.790: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:27:41.793: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable + Jul 27 01:27:41.841: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready + Jul 27 01:27:41.897: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) + Jul 27 01:27:41.897: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. + Jul 27 01:27:41.897: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start + Jul 27 01:27:41.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'ibm-keepalived-watcher' (0 seconds elapsed) + Jul 27 01:27:41.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'ibm-vpc-block-csi-node' (0 seconds elapsed) + Jul 27 01:27:41.926: INFO: e2e test version: v1.26.6 + Jul 27 01:27:41.930: INFO: kube-apiserver version: v1.26.6+f245ced [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77 - Jun 12 20:39:55.377: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 20:39:55.397: INFO: Cluster IP family: ipv4 + Jul 27 01:27:41.930: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:27:41.945: INFO: Cluster IP family: ipv4 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Servers with support for Table transformation - should return a 406 for a backend which does not implement metadata [Conformance] - test/e2e/apimachinery/table_conversion.go:154 -[BeforeEach] [sig-api-machinery] Servers with support for Table transformation +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 +[BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:39:55.486 -Jun 12 20:39:55.486: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename tables 06/12/23 20:39:55.488 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:39:55.545 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:39:55.567 -[BeforeEach] [sig-api-machinery] Servers with support for Table transformation +STEP: Creating a kubernetes client 07/27/23 01:27:41.962 +Jul 27 01:27:41.962: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename deployment 07/27/23 01:27:41.963 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:27:42.022 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:27:42.032 +[BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] Servers with support for Table transformation - test/e2e/apimachinery/table_conversion.go:49 -[It] should return a 406 for a backend which does not implement 
metadata [Conformance] - test/e2e/apimachinery/table_conversion.go:154 -[AfterEach] [sig-api-machinery] Servers with support for Table transformation +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 +STEP: creating a Deployment 07/27/23 01:27:42.069 +STEP: waiting for Deployment to be created 07/27/23 01:27:42.086 +STEP: waiting for all Replicas to be Ready 07/27/23 01:27:42.091 +Jul 27 01:27:42.104: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Jul 27 01:27:42.104: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Jul 27 01:27:42.110: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Jul 27 01:27:42.110: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Jul 27 01:27:42.138: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Jul 27 01:27:42.138: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Jul 27 01:27:42.224: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Jul 27 01:27:42.224: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Jul 27 01:27:53.805: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Jul 27 01:27:53.805: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Jul 27 01:27:58.424: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment 07/27/23 01:27:58.424 +W0727 01:27:58.441511 20 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" +Jul 27 01:27:58.445: INFO: observed event type ADDED +STEP: waiting for Replicas to scale 07/27/23 01:27:58.445 +Jul 27 01:27:58.450: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 +Jul 27 01:27:58.450: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 +Jul 27 01:27:58.450: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 +Jul 27 01:27:58.450: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 +Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 +Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 +Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 +Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 +Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 
01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:27:58.452: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:27:58.452: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:27:58.459: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:27:58.459: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:27:58.490: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:27:58.490: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:27:58.515: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 01:27:58.515: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 01:27:58.532: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 01:27:58.532: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 01:28:07.867: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:28:07.867: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:28:07.915: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +STEP: listing Deployments 07/27/23 01:28:07.915 +Jul 27 01:28:07.932: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment 07/27/23 01:28:07.932 +Jul 27 01:28:07.954: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus 07/27/23 01:28:07.954 +Jul 27 01:28:07.970: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Jul 27 01:28:07.970: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Jul 27 01:28:08.017: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Jul 27 01:28:08.039: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Jul 27 01:28:08.070: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Jul 27 01:28:16.492: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Jul 27 01:28:20.587: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Jul 27 01:28:20.641: INFO: observed Deployment test-deployment in 
namespace deployment-3953 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Jul 27 01:28:20.670: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Jul 27 01:28:32.954: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus 07/27/23 01:28:33.005 +STEP: fetching the DeploymentStatus 07/27/23 01:28:33.023 +Jul 27 01:28:33.036: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 +Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 3 +Jul 27 01:28:33.038: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:28:33.038: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 +Jul 27 01:28:33.038: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 3 +STEP: deleting the Deployment 07/27/23 01:28:33.038 +Jul 27 01:28:33.060: INFO: observed event type MODIFIED +Jul 27 01:28:33.061: INFO: observed event type MODIFIED +Jul 27 01:28:33.061: INFO: observed event type MODIFIED +Jul 27 01:28:33.061: INFO: observed event type MODIFIED +Jul 27 01:28:33.061: INFO: observed event type MODIFIED +Jul 27 01:28:33.062: INFO: observed event type MODIFIED +Jul 27 01:28:33.062: INFO: observed event type MODIFIED +Jul 27 01:28:33.063: INFO: observed event type MODIFIED +Jul 27 01:28:33.063: INFO: observed event type MODIFIED +Jul 27 01:28:33.063: INFO: observed event type MODIFIED +Jul 27 01:28:33.063: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jul 27 01:28:33.070: INFO: Log out all the ReplicaSets if there is no deployment created +Jul 27 01:28:33.079: INFO: ReplicaSet "test-deployment-7b7876f9d6": +&ReplicaSet{ObjectMeta:{test-deployment-7b7876f9d6 deployment-3953 7e3d0337-5631-475f-88ad-84295e8a664f 57042 2 2023-07-27 01:28:07 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 72c0ed72-ae95-4674-a44d-2e98415dc39d 0xc003c07637 0xc003c07638}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:28:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72c0ed72-ae95-4674-a44d-2e98415dc39d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:28:32 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b7876f9d6,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c076c0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Jul 27 01:28:33.088: INFO: pod: "test-deployment-7b7876f9d6-6l7gv": +&Pod{ObjectMeta:{test-deployment-7b7876f9d6-6l7gv test-deployment-7b7876f9d6- deployment-3953 5be0dcda-441f-4f51-a3c1-9db1dc885807 57041 0 2023-07-27 01:28:20 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[cni.projectcalico.org/containerID:f4eb017036e7dac92b68fb6c3a4bfd5678efc124a334e453144e3c54364e0533 cni.projectcalico.org/podIP:172.17.225.63/32 cni.projectcalico.org/podIPs:172.17.225.63/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.63" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 7e3d0337-5631-475f-88ad-84295e8a664f 0xc003c07b97 0xc003c07b98}] [] [{kube-controller-manager Update v1 2023-07-27 01:28:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e3d0337-5631-475f-88ad-84295e8a664f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 01:28:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 01:28:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 01:28:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.63\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4267p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4267p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,
TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c29,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-njzbb,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.63,StartTime:2023-07-27 01:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 01:28:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://8378d22b026bf0d6323107e6c857e18ab8cb4264036a376b172103622dc23fc8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Jul 27 01:28:33.088: INFO: pod: "test-deployment-7b7876f9d6-nsvrs": 
+&Pod{ObjectMeta:{test-deployment-7b7876f9d6-nsvrs test-deployment-7b7876f9d6- deployment-3953 7582089c-bec3-4e20-991a-da2d89af93ec 56940 0 2023-07-27 01:28:07 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[cni.projectcalico.org/containerID:953f4a9a664ac6e82da22ad7e201adb28cfa6617395b10cf7b2d2c0c5ea71e16 cni.projectcalico.org/podIP:172.17.218.41/32 cni.projectcalico.org/podIPs:172.17.218.41/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.41" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 7e3d0337-5631-475f-88ad-84295e8a664f 0xc003c07e37 0xc003c07e38}] [] [{kube-controller-manager Update v1 2023-07-27 01:28:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e3d0337-5631-475f-88ad-84295e8a664f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 01:28:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 01:28:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 01:28:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.218.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nx6z2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nx6z2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c29,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-njzbb,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAlias
es:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:172.17.218.41,StartTime:2023-07-27 01:28:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 01:28:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://dfcc01b6fce917f460c96cf22dd5612b22f110aa450aeadead696d5fc89cba70,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.218.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Jul 27 01:28:33.088: INFO: ReplicaSet "test-deployment-7df74c55ff": +&ReplicaSet{ObjectMeta:{test-deployment-7df74c55ff deployment-3953 e687bc94-747b-4609-b649-86dac912435a 57050 4 2023-07-27 01:27:58 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 72c0ed72-ae95-4674-a44d-2e98415dc39d 0xc003c07727 0xc003c07728}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:28:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72c0ed72-ae95-4674-a44d-2e98415dc39d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:28:32 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7df74c55ff,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.9 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c077b0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Jul 27 01:28:33.098: INFO: pod: "test-deployment-7df74c55ff-j9nbd": +&Pod{ObjectMeta:{test-deployment-7df74c55ff-j9nbd test-deployment-7df74c55ff- deployment-3953 4e9a0b5e-bc3a-4409-8b59-bf1496735322 57046 0 2023-07-27 01:27:58 +0000 UTC 2023-07-27 01:28:33 +0000 UTC 0xc002e733b8 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[cni.projectcalico.org/containerID:6bb387c99dfb5b227fc9cb388714f7ecafa3d0284e7a4a7cf42c8ad0f5658696 cni.projectcalico.org/podIP:172.17.225.62/32 cni.projectcalico.org/podIPs:172.17.225.62/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.62" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-deployment-7df74c55ff e687bc94-747b-4609-b649-86dac912435a 0xc002e73417 0xc002e73418}] [] [{kube-controller-manager Update v1 2023-07-27 01:27:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e687bc94-747b-4609-b649-86dac912435a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 01:27:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 01:27:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 01:28:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.62\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5g6tz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5g6tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c29,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil
,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-njzbb,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:27:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:27:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.62,StartTime:2023-07-27 01:27:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 01:28:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,ContainerID:cri-o://462f74879d9bae10ba463c27377544e6fe6f990684b86a86be155503ca15f143,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Jul 27 01:28:33.098: INFO: ReplicaSet "test-deployment-f4dbc4647": +&ReplicaSet{ObjectMeta:{test-deployment-f4dbc4647 deployment-3953 f90c75ca-2a70-48b6-a30f-8d1f62b9310e 56836 3 2023-07-27 01:27:42 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 72c0ed72-ae95-4674-a44d-2e98415dc39d 0xc003c07817 0xc003c07818}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:28:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72c0ed72-ae95-4674-a44d-2e98415dc39d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:28:07 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: f4dbc4647,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c078a0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +[AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 -Jun 12 20:39:55.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation +Jul 27 01:28:33.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation +[DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation +[DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 -STEP: Destroying namespace "tables-1469" for this suite. 06/12/23 20:39:55.594 +STEP: Destroying namespace "deployment-3953" for this suite. 
07/27/23 01:28:33.121 ------------------------------ -• [0.135 seconds] -[sig-api-machinery] Servers with support for Table transformation -test/e2e/apimachinery/framework.go:23 - should return a 406 for a backend which does not implement metadata [Conformance] - test/e2e/apimachinery/table_conversion.go:154 +• [SLOW TEST] [51.193 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + [BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:39:55.486 - Jun 12 20:39:55.486: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename tables 06/12/23 20:39:55.488 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:39:55.545 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:39:55.567 - [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + STEP: Creating a kubernetes client 07/27/23 01:27:41.962 + Jul 27 01:27:41.962: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename deployment 07/27/23 01:27:41.963 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:27:42.022 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:27:42.032 + [BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] Servers with support for Table transformation - test/e2e/apimachinery/table_conversion.go:49 - [It] should return a 406 for a backend which does not implement metadata [Conformance] - test/e2e/apimachinery/table_conversion.go:154 - [AfterEach] [sig-api-machinery] Servers with support for Table transformation + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 + STEP: creating a Deployment 07/27/23 01:27:42.069 + STEP: waiting for Deployment to be created 07/27/23 01:27:42.086 + STEP: waiting for all Replicas to be Ready 07/27/23 01:27:42.091 + Jul 27 01:27:42.104: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Jul 27 01:27:42.104: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Jul 27 01:27:42.110: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Jul 27 01:27:42.110: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Jul 27 01:27:42.138: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Jul 27 01:27:42.138: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Jul 27 01:27:42.224: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Jul 27 01:27:42.224: INFO: observed Deployment test-deployment in namespace deployment-3953 with 
ReadyReplicas 0 and labels map[test-deployment-static:true] + Jul 27 01:27:53.805: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment-static:true] + Jul 27 01:27:53.805: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment-static:true] + Jul 27 01:27:58.424: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 and labels map[test-deployment-static:true] + STEP: patching the Deployment 07/27/23 01:27:58.424 + W0727 01:27:58.441511 20 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" + Jul 27 01:27:58.445: INFO: observed event type ADDED + STEP: waiting for Replicas to scale 07/27/23 01:27:58.445 + Jul 27 01:27:58.450: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 + Jul 27 01:27:58.450: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 + Jul 27 01:27:58.450: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 + Jul 27 01:27:58.450: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 + Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 + Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 + Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 + Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 0 + Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:27:58.451: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:27:58.452: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:27:58.452: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:27:58.459: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:27:58.459: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:27:58.490: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:27:58.490: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:27:58.515: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:27:58.515: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:27:58.532: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:27:58.532: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:28:07.867: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:28:07.867: INFO: observed Deployment test-deployment in namespace deployment-3953 
with ReadyReplicas 2 + Jul 27 01:28:07.915: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + STEP: listing Deployments 07/27/23 01:28:07.915 + Jul 27 01:28:07.932: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] + STEP: updating the Deployment 07/27/23 01:28:07.932 + Jul 27 01:28:07.954: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + STEP: fetching the DeploymentStatus 07/27/23 01:28:07.954 + Jul 27 01:28:07.970: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Jul 27 01:28:07.970: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Jul 27 01:28:08.017: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Jul 27 01:28:08.039: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Jul 27 01:28:08.070: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Jul 27 01:28:16.492: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] + Jul 27 01:28:20.587: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] + Jul 27 01:28:20.641: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] + Jul 27 01:28:20.670: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] + Jul 27 01:28:32.954: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] + STEP: patching the DeploymentStatus 07/27/23 01:28:33.005 + STEP: fetching the DeploymentStatus 07/27/23 01:28:33.023 + Jul 27 01:28:33.036: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 1 + Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:28:33.037: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 3 + Jul 27 01:28:33.038: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:28:33.038: INFO: observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 2 + Jul 27 01:28:33.038: INFO: 
observed Deployment test-deployment in namespace deployment-3953 with ReadyReplicas 3 + STEP: deleting the Deployment 07/27/23 01:28:33.038 + Jul 27 01:28:33.060: INFO: observed event type MODIFIED + Jul 27 01:28:33.061: INFO: observed event type MODIFIED + Jul 27 01:28:33.061: INFO: observed event type MODIFIED + Jul 27 01:28:33.061: INFO: observed event type MODIFIED + Jul 27 01:28:33.061: INFO: observed event type MODIFIED + Jul 27 01:28:33.062: INFO: observed event type MODIFIED + Jul 27 01:28:33.062: INFO: observed event type MODIFIED + Jul 27 01:28:33.063: INFO: observed event type MODIFIED + Jul 27 01:28:33.063: INFO: observed event type MODIFIED + Jul 27 01:28:33.063: INFO: observed event type MODIFIED + Jul 27 01:28:33.063: INFO: observed event type MODIFIED + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jul 27 01:28:33.070: INFO: Log out all the ReplicaSets if there is no deployment created + Jul 27 01:28:33.079: INFO: ReplicaSet "test-deployment-7b7876f9d6": + &ReplicaSet{ObjectMeta:{test-deployment-7b7876f9d6 deployment-3953 7e3d0337-5631-475f-88ad-84295e8a664f 57042 2 2023-07-27 01:28:07 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 72c0ed72-ae95-4674-a44d-2e98415dc39d 0xc003c07637 0xc003c07638}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:28:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72c0ed72-ae95-4674-a44d-2e98415dc39d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:28:32 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b7876f9d6,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c076c0 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + + Jul 27 01:28:33.088: INFO: pod: "test-deployment-7b7876f9d6-6l7gv": + &Pod{ObjectMeta:{test-deployment-7b7876f9d6-6l7gv test-deployment-7b7876f9d6- deployment-3953 5be0dcda-441f-4f51-a3c1-9db1dc885807 57041 0 2023-07-27 01:28:20 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[cni.projectcalico.org/containerID:f4eb017036e7dac92b68fb6c3a4bfd5678efc124a334e453144e3c54364e0533 cni.projectcalico.org/podIP:172.17.225.63/32 cni.projectcalico.org/podIPs:172.17.225.63/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.63" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 7e3d0337-5631-475f-88ad-84295e8a664f 0xc003c07b97 0xc003c07b98}] [] [{kube-controller-manager Update v1 2023-07-27 01:28:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e3d0337-5631-475f-88ad-84295e8a664f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 01:28:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 01:28:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 01:28:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.63\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4267p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4267p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c29,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-njzbb,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAlias
es:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.63,StartTime:2023-07-27 01:28:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 01:28:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://8378d22b026bf0d6323107e6c857e18ab8cb4264036a376b172103622dc23fc8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Jul 27 01:28:33.088: INFO: pod: "test-deployment-7b7876f9d6-nsvrs": + &Pod{ObjectMeta:{test-deployment-7b7876f9d6-nsvrs test-deployment-7b7876f9d6- deployment-3953 7582089c-bec3-4e20-991a-da2d89af93ec 56940 0 2023-07-27 01:28:07 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[cni.projectcalico.org/containerID:953f4a9a664ac6e82da22ad7e201adb28cfa6617395b10cf7b2d2c0c5ea71e16 cni.projectcalico.org/podIP:172.17.218.41/32 cni.projectcalico.org/podIPs:172.17.218.41/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.41" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 7e3d0337-5631-475f-88ad-84295e8a664f 0xc003c07e37 0xc003c07e38}] [] [{kube-controller-manager Update v1 2023-07-27 01:28:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e3d0337-5631-475f-88ad-84295e8a664f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 01:28:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 01:28:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 01:28:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.218.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nx6z2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nx6z2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{
},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c29,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-njzbb,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:172.17.218.41,StartTime:2023-07-27 01:28:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 01:28:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://dfcc01b6fce917f460c96cf22dd5612b22f110aa450aeadead696d5fc89cba70,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.218.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Jul 27 01:28:33.088: INFO: ReplicaSet "test-deployment-7df74c55ff": + &ReplicaSet{ObjectMeta:{test-deployment-7df74c55ff deployment-3953 e687bc94-747b-4609-b649-86dac912435a 57050 4 2023-07-27 01:27:58 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 72c0ed72-ae95-4674-a44d-2e98415dc39d 0xc003c07727 0xc003c07728}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:28:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72c0ed72-ae95-4674-a44d-2e98415dc39d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:28:32 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7df74c55ff,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.9 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c077b0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + + Jul 27 01:28:33.098: INFO: pod: "test-deployment-7df74c55ff-j9nbd": + &Pod{ObjectMeta:{test-deployment-7df74c55ff-j9nbd test-deployment-7df74c55ff- deployment-3953 4e9a0b5e-bc3a-4409-8b59-bf1496735322 57046 0 2023-07-27 01:27:58 +0000 UTC 2023-07-27 01:28:33 +0000 UTC 0xc002e733b8 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[cni.projectcalico.org/containerID:6bb387c99dfb5b227fc9cb388714f7ecafa3d0284e7a4a7cf42c8ad0f5658696 cni.projectcalico.org/podIP:172.17.225.62/32 cni.projectcalico.org/podIPs:172.17.225.62/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.62" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-deployment-7df74c55ff e687bc94-747b-4609-b649-86dac912435a 0xc002e73417 0xc002e73418}] [] [{kube-controller-manager Update v1 2023-07-27 01:27:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e687bc94-747b-4609-b649-86dac912435a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 01:27:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 01:27:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 01:28:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.62\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5g6tz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5g6tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePat
h:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c29,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-njzbb,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:27:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:28:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:27:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.62,StartTime:2023-07-27 01:27:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 01:28:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,ContainerID:cri-o://462f74879d9bae10ba463c27377544e6fe6f990684b86a86be155503ca15f143,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Jul 27 01:28:33.098: INFO: ReplicaSet "test-deployment-f4dbc4647": + &ReplicaSet{ObjectMeta:{test-deployment-f4dbc4647 deployment-3953 
f90c75ca-2a70-48b6-a30f-8d1f62b9310e 56836 3 2023-07-27 01:27:42 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 72c0ed72-ae95-4674-a44d-2e98415dc39d 0xc003c07817 0xc003c07818}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:28:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72c0ed72-ae95-4674-a44d-2e98415dc39d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:28:07 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: f4dbc4647,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c078a0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + + [AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 - Jun 12 20:39:55.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + Jul 27 01:28:33.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + [DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + [DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 - STEP: Destroying namespace "tables-1469" for this suite. 06/12/23 20:39:55.594 + STEP: Destroying namespace "deployment-3953" for this suite. 
07/27/23 01:28:33.121 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Projected downwardAPI - should update annotations on modification [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:162 -[BeforeEach] [sig-storage] Projected downwardAPI +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:255 +[BeforeEach] [sig-node] InitContainer [NodeConformance] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:39:55.621 -Jun 12 20:39:55.621: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -E0612 20:39:55.621930 23 progress.go:80] Failed to post progress update to http://localhost:8099/progress: Post "http://localhost:8099/progress": dial tcp [::1]:8099: connect: connection refused -STEP: Building a namespace api object, basename projected 06/12/23 20:39:55.622 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:39:55.668 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:39:55.679 -[BeforeEach] [sig-storage] Projected downwardAPI +STEP: Creating a kubernetes client 07/27/23 01:28:33.156 +Jul 27 01:28:33.156: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename init-container 07/27/23 01:28:33.157 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:33.202 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:33.211 +[BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 -[It] should update annotations on modification [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:162 -STEP: Creating the pod 06/12/23 20:39:55.692 -Jun 12 20:39:55.715: INFO: Waiting up to 5m0s for pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851" in namespace "projected-2560" to be "running and ready" -Jun 12 20:39:55.721: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 5.873115ms -Jun 12 20:39:55.721: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:39:57.729: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013760689s -Jun 12 20:39:57.729: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:39:59.735: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019559176s -Jun 12 20:39:59.735: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:40:01.731: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.015792893s -Jun 12 20:40:01.731: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:40:03.730: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01510368s -Jun 12 20:40:03.730: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:40:05.730: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014221699s -Jun 12 20:40:05.730: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:40:07.807: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 12.091553873s -Jun 12 20:40:07.807: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:40:09.729: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Running", Reason="", readiness=true. Elapsed: 14.014030106s -Jun 12 20:40:09.729: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Running (Ready = true) -Jun 12 20:40:09.729: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851" satisfied condition "running and ready" -Jun 12 20:40:10.331: INFO: Successfully updated pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851" -[AfterEach] [sig-storage] Projected downwardAPI +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 +[It] should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:255 +STEP: creating the pod 07/27/23 01:28:33.222 +Jul 27 01:28:33.223: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/node/init/init.go:32 -Jun 12 20:40:12.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +Jul 27 01:28:44.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] tear down framework | framework.go:193 -STEP: Destroying namespace "projected-2560" for this suite. 06/12/23 20:40:12.445 +STEP: Destroying namespace "init-container-697" for this suite. 
07/27/23 01:28:44.604 ------------------------------ -• [SLOW TEST] [16.850 seconds] -[sig-storage] Projected downwardAPI -test/e2e/common/storage/framework.go:23 - should update annotations on modification [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:162 +• [SLOW TEST] [11.493 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:255 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-node] InitContainer [NodeConformance] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:39:55.621 - Jun 12 20:39:55.621: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - E0612 20:39:55.621930 23 progress.go:80] Failed to post progress update to http://localhost:8099/progress: Post "http://localhost:8099/progress": dial tcp [::1]:8099: connect: connection refused - STEP: Building a namespace api object, basename projected 06/12/23 20:39:55.622 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:39:55.668 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:39:55.679 - [BeforeEach] [sig-storage] Projected downwardAPI + STEP: Creating a kubernetes client 07/27/23 01:28:33.156 + Jul 27 01:28:33.156: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename init-container 07/27/23 01:28:33.157 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:33.202 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:33.211 + [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 - [It] should update annotations on modification [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:162 - STEP: Creating the pod 06/12/23 20:39:55.692 - Jun 12 20:39:55.715: INFO: Waiting up to 5m0s for pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851" in namespace "projected-2560" to be "running and ready" - Jun 12 20:39:55.721: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 5.873115ms - Jun 12 20:39:55.721: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:39:57.729: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013760689s - Jun 12 20:39:57.729: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:39:59.735: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019559176s - Jun 12 20:39:59.735: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:40:01.731: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.015792893s - Jun 12 20:40:01.731: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:40:03.730: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01510368s - Jun 12 20:40:03.730: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:40:05.730: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014221699s - Jun 12 20:40:05.730: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:40:07.807: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Pending", Reason="", readiness=false. Elapsed: 12.091553873s - Jun 12 20:40:07.807: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:40:09.729: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851": Phase="Running", Reason="", readiness=true. Elapsed: 14.014030106s - Jun 12 20:40:09.729: INFO: The phase of Pod annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851 is Running (Ready = true) - Jun 12 20:40:09.729: INFO: Pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851" satisfied condition "running and ready" - Jun 12 20:40:10.331: INFO: Successfully updated pod "annotationupdate9702b376-a654-4aef-96b7-3d6f3da11851" - [AfterEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 + [It] should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:255 + STEP: creating the pod 07/27/23 01:28:33.222 + Jul 27 01:28:33.223: INFO: PodSpec: initContainers in spec.initContainers + [AfterEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/node/init/init.go:32 - Jun 12 20:40:12.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + Jul 27 01:28:44.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] tear down framework | framework.go:193 - STEP: Destroying namespace "projected-2560" for this suite. 06/12/23 20:40:12.445 + STEP: Destroying namespace "init-container-697" for this suite. 
07/27/23 01:28:44.604 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSS ------------------------------ -[sig-instrumentation] Events - should delete a collection of events [Conformance] - test/e2e/instrumentation/core_events.go:175 -[BeforeEach] [sig-instrumentation] Events +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 +[BeforeEach] [sig-node] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:40:12.474 -Jun 12 20:40:12.474: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename events 06/12/23 20:40:12.476 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:40:12.547 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:40:12.577 -[BeforeEach] [sig-instrumentation] Events +STEP: Creating a kubernetes client 07/27/23 01:28:44.649 +Jul 27 01:28:44.649: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 01:28:44.65 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:44.696 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:44.705 +[BeforeEach] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:31 -[It] should delete a collection of events [Conformance] - test/e2e/instrumentation/core_events.go:175 -STEP: Create set of events 06/12/23 20:40:12.598 -Jun 12 20:40:12.647: INFO: created test-event-1 -Jun 12 20:40:12.662: INFO: created test-event-2 -Jun 12 20:40:12.676: INFO: created test-event-3 -STEP: get a list of Events with a label in the current namespace 06/12/23 20:40:12.676 -STEP: delete collection of events 06/12/23 20:40:12.69 -Jun 12 20:40:12.690: INFO: requesting DeleteCollection of events -STEP: check that the list of events matches the requested quantity 06/12/23 20:40:12.765 -Jun 12 20:40:12.765: INFO: requesting list of events to confirm quantity -[AfterEach] [sig-instrumentation] Events +[It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 +STEP: creating secret secrets-7754/secret-test-b0926bdf-0fdd-471d-9840-76a9b672117a 07/27/23 01:28:44.718 +STEP: Creating a pod to test consume secrets 07/27/23 01:28:44.734 +Jul 27 01:28:44.762: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75" in namespace "secrets-7754" to be "Succeeded or Failed" +Jul 27 01:28:44.782: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Pending", Reason="", readiness=false. Elapsed: 20.323003ms +Jul 27 01:28:46.790: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028321675s +Jul 27 01:28:48.792: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030074116s +Jul 27 01:28:50.792: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030279759s +Jul 27 01:28:52.792: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.030013852s +Jul 27 01:28:54.792: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.03056174s +STEP: Saw pod success 07/27/23 01:28:54.792 +Jul 27 01:28:54.792: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75" satisfied condition "Succeeded or Failed" +Jul 27 01:28:54.800: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75 container env-test: +STEP: delete the pod 07/27/23 01:28:54.842 +Jul 27 01:28:54.862: INFO: Waiting for pod pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75 to disappear +Jul 27 01:28:54.869: INFO: Pod pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75 no longer exists +[AfterEach] [sig-node] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 20:40:12.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-instrumentation] Events +Jul 27 01:28:54.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-instrumentation] Events +[DeferCleanup (Each)] [sig-node] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-instrumentation] Events +[DeferCleanup (Each)] [sig-node] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "events-7658" for this suite. 06/12/23 20:40:12.788 +STEP: Destroying namespace "secrets-7754" for this suite. 07/27/23 01:28:54.883 ------------------------------ -• [0.340 seconds] -[sig-instrumentation] Events -test/e2e/instrumentation/common/framework.go:23 - should delete a collection of events [Conformance] - test/e2e/instrumentation/core_events.go:175 +• [SLOW TEST] [10.256 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-instrumentation] Events + [BeforeEach] [sig-node] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:40:12.474 - Jun 12 20:40:12.474: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename events 06/12/23 20:40:12.476 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:40:12.547 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:40:12.577 - [BeforeEach] [sig-instrumentation] Events + STEP: Creating a kubernetes client 07/27/23 01:28:44.649 + Jul 27 01:28:44.649: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 01:28:44.65 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:44.696 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:44.705 + [BeforeEach] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:31 - [It] should delete a collection of events [Conformance] - test/e2e/instrumentation/core_events.go:175 - STEP: Create set of events 06/12/23 20:40:12.598 - Jun 12 20:40:12.647: INFO: created test-event-1 - Jun 12 20:40:12.662: INFO: created test-event-2 - Jun 12 20:40:12.676: INFO: created test-event-3 - STEP: get a list of Events with a label in the current namespace 06/12/23 20:40:12.676 - STEP: delete collection of events 06/12/23 20:40:12.69 - Jun 12 20:40:12.690: INFO: requesting DeleteCollection of events - STEP: check that the list of events matches the requested quantity 06/12/23 20:40:12.765 - Jun 12 20:40:12.765: INFO: requesting list of events 
to confirm quantity - [AfterEach] [sig-instrumentation] Events + [It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 + STEP: creating secret secrets-7754/secret-test-b0926bdf-0fdd-471d-9840-76a9b672117a 07/27/23 01:28:44.718 + STEP: Creating a pod to test consume secrets 07/27/23 01:28:44.734 + Jul 27 01:28:44.762: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75" in namespace "secrets-7754" to be "Succeeded or Failed" + Jul 27 01:28:44.782: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Pending", Reason="", readiness=false. Elapsed: 20.323003ms + Jul 27 01:28:46.790: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028321675s + Jul 27 01:28:48.792: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030074116s + Jul 27 01:28:50.792: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030279759s + Jul 27 01:28:52.792: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.030013852s + Jul 27 01:28:54.792: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.03056174s + STEP: Saw pod success 07/27/23 01:28:54.792 + Jul 27 01:28:54.792: INFO: Pod "pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75" satisfied condition "Succeeded or Failed" + Jul 27 01:28:54.800: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75 container env-test: + STEP: delete the pod 07/27/23 01:28:54.842 + Jul 27 01:28:54.862: INFO: Waiting for pod pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75 to disappear + Jul 27 01:28:54.869: INFO: Pod pod-configmaps-dc743279-ccab-45e6-9fb8-81ba36a13a75 no longer exists + [AfterEach] [sig-node] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 20:40:12.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-instrumentation] Events + Jul 27 01:28:54.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-instrumentation] Events + [DeferCleanup (Each)] [sig-node] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-instrumentation] Events + [DeferCleanup (Each)] [sig-node] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "events-7658" for this suite. 06/12/23 20:40:12.788 + STEP: Destroying namespace "secrets-7754" for this suite. 
07/27/23 01:28:54.883 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSS +SSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should honor timeout [Conformance] - test/e2e/apimachinery/webhook.go:381 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 +[BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:40:12.833 -Jun 12 20:40:12.834: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 20:40:12.839 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:40:12.937 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:40:12.958 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:28:54.905 +Jul 27 01:28:54.905: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 01:28:54.906 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:54.945 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:54.954 +[BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 20:40:13.026 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 20:40:15.107 -STEP: Deploying the webhook pod 06/12/23 20:40:15.145 -STEP: Wait for the deployment to be ready 06/12/23 20:40:15.205 -Jun 12 20:40:15.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-865554f4d9\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} -Jun 12 20:40:17.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 20:40:19.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 20:40:21.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 20:40:23.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 20:40:25.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is 
progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 20:40:27.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 20:40:29.29 -STEP: Verifying the service has paired with the endpoint 06/12/23 20:40:29.323 -Jun 12 20:40:30.324: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should honor timeout [Conformance] - test/e2e/apimachinery/webhook.go:381 -STEP: Setting timeout (1s) shorter than webhook latency (5s) 06/12/23 20:40:30.336 -STEP: Registering slow webhook via the AdmissionRegistration API 06/12/23 20:40:30.337 -STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) 06/12/23 20:40:30.418 -STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 06/12/23 20:40:31.445 -STEP: Registering slow webhook via the AdmissionRegistration API 06/12/23 20:40:31.445 -STEP: Having no error when timeout is longer than webhook latency 06/12/23 20:40:32.544 -STEP: Registering slow webhook via the AdmissionRegistration API 06/12/23 20:40:32.545 -STEP: Having no error when timeout is empty (defaulted to 10s in v1) 06/12/23 20:40:37.71 -STEP: Registering slow webhook via the AdmissionRegistration API 06/12/23 20:40:37.71 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 +[AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 20:40:42.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 01:28:55.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-8791" for this suite. 06/12/23 20:40:42.919 -STEP: Destroying namespace "webhook-8791-markers" for this suite. 06/12/23 20:40:42.963 +STEP: Destroying namespace "configmap-7003" for this suite. 
07/27/23 01:28:55.22 ------------------------------ -• [SLOW TEST] [30.173 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - should honor timeout [Conformance] - test/e2e/apimachinery/webhook.go:381 +• [0.354 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:40:12.833 - Jun 12 20:40:12.834: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 20:40:12.839 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:40:12.937 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:40:12.958 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:28:54.905 + Jul 27 01:28:54.905: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 01:28:54.906 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:54.945 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:54.954 + [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 20:40:13.026 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 20:40:15.107 - STEP: Deploying the webhook pod 06/12/23 20:40:15.145 - STEP: Wait for the deployment to be ready 06/12/23 20:40:15.205 - Jun 12 20:40:15.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-865554f4d9\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} - Jun 12 20:40:17.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 20:40:19.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 20:40:21.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 20:40:23.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 20:40:25.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, 
time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 20:40:27.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 40, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 20:40:29.29 - STEP: Verifying the service has paired with the endpoint 06/12/23 20:40:29.323 - Jun 12 20:40:30.324: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should honor timeout [Conformance] - test/e2e/apimachinery/webhook.go:381 - STEP: Setting timeout (1s) shorter than webhook latency (5s) 06/12/23 20:40:30.336 - STEP: Registering slow webhook via the AdmissionRegistration API 06/12/23 20:40:30.337 - STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) 06/12/23 20:40:30.418 - STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 06/12/23 20:40:31.445 - STEP: Registering slow webhook via the AdmissionRegistration API 06/12/23 20:40:31.445 - STEP: Having no error when timeout is longer than webhook latency 06/12/23 20:40:32.544 - STEP: Registering slow webhook via the AdmissionRegistration API 06/12/23 20:40:32.545 - STEP: Having no error when timeout is empty (defaulted to 10s in v1) 06/12/23 20:40:37.71 - STEP: Registering slow webhook via the AdmissionRegistration API 06/12/23 20:40:37.71 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 + [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 20:40:42.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 01:28:55.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-8791" for this suite. 06/12/23 20:40:42.919 - STEP: Destroying namespace "webhook-8791-markers" for this suite. 06/12/23 20:40:42.963 + STEP: Destroying namespace "configmap-7003" for this suite. 
07/27/23 01:28:55.22 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] EndpointSlice - should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] - test/e2e/network/endpointslice.go:205 -[BeforeEach] [sig-network] EndpointSlice +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 +[BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:40:43.01 -Jun 12 20:40:43.010: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename endpointslice 06/12/23 20:40:43.012 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:40:43.088 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:40:43.11 -[BeforeEach] [sig-network] EndpointSlice +STEP: Creating a kubernetes client 07/27/23 01:28:55.26 +Jul 27 01:28:55.261: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 01:28:55.261 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:55.326 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:55.336 +[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] EndpointSlice - test/e2e/network/endpointslice.go:52 -[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] - test/e2e/network/endpointslice.go:205 -STEP: referencing a single matching pod 06/12/23 20:40:58.496 -STEP: referencing matching pods with named port 06/12/23 20:41:03.522 -STEP: creating empty Endpoints and EndpointSlices for no matching Pods 06/12/23 20:41:08.556 -STEP: recreating EndpointSlices after they've been deleted 06/12/23 20:41:13.631 -Jun 12 20:41:13.699: INFO: EndpointSlice for Service endpointslice-8044/example-named-port not found -[AfterEach] [sig-network] EndpointSlice +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 +STEP: Creating a pod to test downward API volume plugin 07/27/23 01:28:55.345 +W0727 01:28:55.377313 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 01:28:55.377: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925" in namespace "downward-api-9643" to be "Succeeded or Failed" +Jul 27 01:28:55.387: INFO: Pod "downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.8419ms +Jul 27 01:28:57.396: INFO: Pod "downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019517403s +Jul 27 01:28:59.397: INFO: Pod "downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019953383s +STEP: Saw pod success 07/27/23 01:28:59.397 +Jul 27 01:28:59.397: INFO: Pod "downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925" satisfied condition "Succeeded or Failed" +Jul 27 01:28:59.405: INFO: Trying to get logs from node 10.245.128.18 pod downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925 container client-container: +STEP: delete the pod 07/27/23 01:28:59.454 +Jul 27 01:28:59.475: INFO: Waiting for pod downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925 to disappear +Jul 27 01:28:59.483: INFO: Pod downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925 no longer exists +[AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 20:41:23.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] EndpointSlice +Jul 27 01:28:59.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] EndpointSlice +[DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] EndpointSlice +[DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "endpointslice-8044" for this suite. 06/12/23 20:41:23.739 +STEP: Destroying namespace "downward-api-9643" for this suite. 07/27/23 01:28:59.496 ------------------------------ -• [SLOW TEST] [40.751 seconds] -[sig-network] EndpointSlice -test/e2e/network/common/framework.go:23 - should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] - test/e2e/network/endpointslice.go:205 +• [4.260 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] EndpointSlice + [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:40:43.01 - Jun 12 20:40:43.010: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename endpointslice 06/12/23 20:40:43.012 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:40:43.088 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:40:43.11 - [BeforeEach] [sig-network] EndpointSlice + STEP: Creating a kubernetes client 07/27/23 01:28:55.26 + Jul 27 01:28:55.261: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 01:28:55.261 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:55.326 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:55.336 + [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] EndpointSlice - test/e2e/network/endpointslice.go:52 - [It] should create Endpoints and EndpointSlices for Pods matching a Service 
[Conformance] - test/e2e/network/endpointslice.go:205 - STEP: referencing a single matching pod 06/12/23 20:40:58.496 - STEP: referencing matching pods with named port 06/12/23 20:41:03.522 - STEP: creating empty Endpoints and EndpointSlices for no matching Pods 06/12/23 20:41:08.556 - STEP: recreating EndpointSlices after they've been deleted 06/12/23 20:41:13.631 - Jun 12 20:41:13.699: INFO: EndpointSlice for Service endpointslice-8044/example-named-port not found - [AfterEach] [sig-network] EndpointSlice + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 + STEP: Creating a pod to test downward API volume plugin 07/27/23 01:28:55.345 + W0727 01:28:55.377313 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 01:28:55.377: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925" in namespace "downward-api-9643" to be "Succeeded or Failed" + Jul 27 01:28:55.387: INFO: Pod "downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925": Phase="Pending", Reason="", readiness=false. Elapsed: 9.8419ms + Jul 27 01:28:57.396: INFO: Pod "downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019517403s + Jul 27 01:28:59.397: INFO: Pod "downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019953383s + STEP: Saw pod success 07/27/23 01:28:59.397 + Jul 27 01:28:59.397: INFO: Pod "downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925" satisfied condition "Succeeded or Failed" + Jul 27 01:28:59.405: INFO: Trying to get logs from node 10.245.128.18 pod downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925 container client-container: + STEP: delete the pod 07/27/23 01:28:59.454 + Jul 27 01:28:59.475: INFO: Waiting for pod downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925 to disappear + Jul 27 01:28:59.483: INFO: Pod downwardapi-volume-03aacce7-7b3b-428d-b7ae-ccbe048c7925 no longer exists + [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 - Jun 12 20:41:23.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] EndpointSlice + Jul 27 01:28:59.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] EndpointSlice + [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] EndpointSlice + [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 - STEP: Destroying namespace "endpointslice-8044" for this suite. 06/12/23 20:41:23.739 + STEP: Destroying namespace "downward-api-9643" for this suite. 
07/27/23 01:28:59.496 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSS ------------------------------ -[sig-storage] Projected secret - optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:215 -[BeforeEach] [sig-storage] Projected secret +[sig-api-machinery] Namespaces [Serial] + should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:41:23.768 -Jun 12 20:41:23.768: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 20:41:23.771 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:41:23.825 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:41:23.84 -[BeforeEach] [sig-storage] Projected secret +STEP: Creating a kubernetes client 07/27/23 01:28:59.521 +Jul 27 01:28:59.521: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename namespaces 07/27/23 01:28:59.522 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:59.56 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:59.568 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:215 -Jun 12 20:41:23.868: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node -STEP: Creating secret with name s-test-opt-del-ced45479-51fc-43fd-8c02-19861668bd85 06/12/23 20:41:23.868 -STEP: Creating secret with name s-test-opt-upd-afe26941-9fe0-4b7b-a57c-730816d6e376 06/12/23 20:41:23.902 -STEP: Creating the pod 06/12/23 20:41:23.924 -Jun 12 20:41:23.953: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602" in namespace "projected-9076" to be "running and ready" -Jun 12 20:41:23.964: INFO: Pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602": Phase="Pending", Reason="", readiness=false. Elapsed: 10.889438ms -Jun 12 20:41:23.964: INFO: The phase of Pod pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:41:25.972: INFO: Pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018999376s -Jun 12 20:41:25.972: INFO: The phase of Pod pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:41:27.972: INFO: Pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01896376s -Jun 12 20:41:27.972: INFO: The phase of Pod pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:41:29.972: INFO: Pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.019656194s -Jun 12 20:41:29.973: INFO: The phase of Pod pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602 is Running (Ready = true) -Jun 12 20:41:29.973: INFO: Pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602" satisfied condition "running and ready" -STEP: Deleting secret s-test-opt-del-ced45479-51fc-43fd-8c02-19861668bd85 06/12/23 20:41:30.104 -STEP: Updating secret s-test-opt-upd-afe26941-9fe0-4b7b-a57c-730816d6e376 06/12/23 20:41:30.121 -STEP: Creating secret with name s-test-opt-create-4ad84648-d541-4198-969d-7508b0668d50 06/12/23 20:41:30.134 -STEP: waiting to observe update in volume 06/12/23 20:41:30.149 -[AfterEach] [sig-storage] Projected secret +[It] should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 +STEP: Updating Namespace "namespaces-2024" 07/27/23 01:28:59.577 +Jul 27 01:28:59.616: INFO: Namespace "namespaces-2024" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"d18faff3-626a-4ea4-87bd-253935adf598", "kubernetes.io/metadata.name":"namespaces-2024", "namespaces-2024":"updated", "pod-security.kubernetes.io/audit":"privileged", "pod-security.kubernetes.io/audit-version":"v1.24", "pod-security.kubernetes.io/enforce":"baseline", "pod-security.kubernetes.io/warn":"privileged", "pod-security.kubernetes.io/warn-version":"v1.24"} +[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 20:41:34.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected secret +Jul 27 01:28:59.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "projected-9076" for this suite. 06/12/23 20:41:34.311 +STEP: Destroying namespace "namespaces-2024" for this suite. 
07/27/23 01:28:59.632 ------------------------------ -• [SLOW TEST] [10.563 seconds] -[sig-storage] Projected secret -test/e2e/common/storage/framework.go:23 - optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:215 +• [0.134 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected secret + [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:41:23.768 - Jun 12 20:41:23.768: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 20:41:23.771 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:41:23.825 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:41:23.84 - [BeforeEach] [sig-storage] Projected secret + STEP: Creating a kubernetes client 07/27/23 01:28:59.521 + Jul 27 01:28:59.521: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename namespaces 07/27/23 01:28:59.522 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:59.56 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:59.568 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:215 - Jun 12 20:41:23.868: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node - STEP: Creating secret with name s-test-opt-del-ced45479-51fc-43fd-8c02-19861668bd85 06/12/23 20:41:23.868 - STEP: Creating secret with name s-test-opt-upd-afe26941-9fe0-4b7b-a57c-730816d6e376 06/12/23 20:41:23.902 - STEP: Creating the pod 06/12/23 20:41:23.924 - Jun 12 20:41:23.953: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602" in namespace "projected-9076" to be "running and ready" - Jun 12 20:41:23.964: INFO: Pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602": Phase="Pending", Reason="", readiness=false. Elapsed: 10.889438ms - Jun 12 20:41:23.964: INFO: The phase of Pod pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:41:25.972: INFO: Pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018999376s - Jun 12 20:41:25.972: INFO: The phase of Pod pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:41:27.972: INFO: Pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01896376s - Jun 12 20:41:27.972: INFO: The phase of Pod pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:41:29.972: INFO: Pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.019656194s - Jun 12 20:41:29.973: INFO: The phase of Pod pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602 is Running (Ready = true) - Jun 12 20:41:29.973: INFO: Pod "pod-projected-secrets-d6ffda43-f68d-45c2-904f-2e0fd4ee4602" satisfied condition "running and ready" - STEP: Deleting secret s-test-opt-del-ced45479-51fc-43fd-8c02-19861668bd85 06/12/23 20:41:30.104 - STEP: Updating secret s-test-opt-upd-afe26941-9fe0-4b7b-a57c-730816d6e376 06/12/23 20:41:30.121 - STEP: Creating secret with name s-test-opt-create-4ad84648-d541-4198-969d-7508b0668d50 06/12/23 20:41:30.134 - STEP: waiting to observe update in volume 06/12/23 20:41:30.149 - [AfterEach] [sig-storage] Projected secret + [It] should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 + STEP: Updating Namespace "namespaces-2024" 07/27/23 01:28:59.577 + Jul 27 01:28:59.616: INFO: Namespace "namespaces-2024" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"d18faff3-626a-4ea4-87bd-253935adf598", "kubernetes.io/metadata.name":"namespaces-2024", "namespaces-2024":"updated", "pod-security.kubernetes.io/audit":"privileged", "pod-security.kubernetes.io/audit-version":"v1.24", "pod-security.kubernetes.io/enforce":"baseline", "pod-security.kubernetes.io/warn":"privileged", "pod-security.kubernetes.io/warn-version":"v1.24"} + [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 20:41:34.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected secret + Jul 27 01:28:59.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "projected-9076" for this suite. 06/12/23 20:41:34.311 + STEP: Destroying namespace "namespaces-2024" for this suite. 
07/27/23 01:28:59.632 << End Captured GinkgoWriter Output ------------------------------ -SSS +SSSSSSSSSSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should mutate configmap [Conformance] - test/e2e/apimachinery/webhook.go:252 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 +[BeforeEach] [sig-node] Container Lifecycle Hook set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:41:34.333 -Jun 12 20:41:34.333: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 20:41:34.336 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:41:34.425 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:41:34.436 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:28:59.656 +Jul 27 01:28:59.656: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-lifecycle-hook 07/27/23 01:28:59.658 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:59.697 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:59.706 +[BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 20:41:34.49 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 20:41:36.434 -STEP: Deploying the webhook pod 06/12/23 20:41:36.467 -STEP: Wait for the deployment to be ready 06/12/23 20:41:36.493 -Jun 12 20:41:36.508: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created -Jun 12 20:41:38.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 36, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 36, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 36, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 36, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 20:41:40.552 -STEP: Verifying the service has paired with the endpoint 06/12/23 20:41:40.604 -Jun 12 20:41:41.609: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should mutate configmap [Conformance] - test/e2e/apimachinery/webhook.go:252 -STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 06/12/23 20:41:41.623 -STEP: create a configmap that should be updated by the webhook 06/12/23 20:41:41.696 -[AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 07/27/23 01:28:59.729 +Jul 27 01:28:59.758: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-5993" to be "running and ready" +Jul 27 01:28:59.766: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188831ms +Jul 27 01:28:59.766: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:29:01.775: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0169371s +Jul 27 01:29:01.775: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:29:03.777: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019634839s +Jul 27 01:29:03.778: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:29:05.777: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019129846s +Jul 27 01:29:05.777: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:29:07.776: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017771725s +Jul 27 01:29:07.776: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:29:09.776: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018026617s +Jul 27 01:29:09.776: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:29:11.776: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 12.017847673s +Jul 27 01:29:11.776: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Jul 27 01:29:11.776: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 +STEP: create the pod with lifecycle hook 07/27/23 01:29:11.785 +Jul 27 01:29:11.801: INFO: Waiting up to 5m0s for pod "pod-with-poststart-exec-hook" in namespace "container-lifecycle-hook-5993" to be "running and ready" +Jul 27 01:29:11.809: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 8.692658ms +Jul 27 01:29:11.809: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:29:13.819: INFO: Pod "pod-with-poststart-exec-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.018578536s +Jul 27 01:29:13.819: INFO: The phase of Pod pod-with-poststart-exec-hook is Running (Ready = true) +Jul 27 01:29:13.819: INFO: Pod "pod-with-poststart-exec-hook" satisfied condition "running and ready" +STEP: check poststart hook 07/27/23 01:29:13.827 +STEP: delete the pod with lifecycle hook 07/27/23 01:29:13.87 +Jul 27 01:29:13.884: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jul 27 01:29:13.892: INFO: Pod pod-with-poststart-exec-hook still exists +Jul 27 01:29:15.893: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jul 27 01:29:15.903: INFO: Pod pod-with-poststart-exec-hook still exists +Jul 27 01:29:17.893: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jul 27 01:29:17.903: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook test/e2e/framework/node/init/init.go:32 -Jun 12 20:41:41.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 01:29:17.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-9487" for this suite. 06/12/23 20:41:41.873 -STEP: Destroying namespace "webhook-9487-markers" for this suite. 06/12/23 20:41:41.896 +STEP: Destroying namespace "container-lifecycle-hook-5993" for this suite. 
07/27/23 01:29:17.919 ------------------------------ -• [SLOW TEST] [7.592 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - should mutate configmap [Conformance] - test/e2e/apimachinery/webhook.go:252 +• [SLOW TEST] [18.299 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] Container Lifecycle Hook set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:41:34.333 - Jun 12 20:41:34.333: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 20:41:34.336 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:41:34.425 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:41:34.436 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:28:59.656 + Jul 27 01:28:59.656: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-lifecycle-hook 07/27/23 01:28:59.658 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:28:59.697 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:28:59.706 + [BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 20:41:34.49 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 20:41:36.434 - STEP: Deploying the webhook pod 06/12/23 20:41:36.467 - STEP: Wait for the deployment to be ready 06/12/23 20:41:36.493 - Jun 12 20:41:36.508: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created - Jun 12 20:41:38.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 36, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 36, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 36, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 36, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 20:41:40.552 - STEP: Verifying the service has paired with the endpoint 06/12/23 20:41:40.604 - Jun 12 20:41:41.609: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should mutate configmap [Conformance] - test/e2e/apimachinery/webhook.go:252 - STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 
06/12/23 20:41:41.623 - STEP: create a configmap that should be updated by the webhook 06/12/23 20:41:41.696 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 07/27/23 01:28:59.729 + Jul 27 01:28:59.758: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-5993" to be "running and ready" + Jul 27 01:28:59.766: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188831ms + Jul 27 01:28:59.766: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:29:01.775: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0169371s + Jul 27 01:29:01.775: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:29:03.777: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019634839s + Jul 27 01:29:03.778: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:29:05.777: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019129846s + Jul 27 01:29:05.777: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:29:07.776: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017771725s + Jul 27 01:29:07.776: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:29:09.776: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018026617s + Jul 27 01:29:09.776: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:29:11.776: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 12.017847673s + Jul 27 01:29:11.776: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Jul 27 01:29:11.776: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 + STEP: create the pod with lifecycle hook 07/27/23 01:29:11.785 + Jul 27 01:29:11.801: INFO: Waiting up to 5m0s for pod "pod-with-poststart-exec-hook" in namespace "container-lifecycle-hook-5993" to be "running and ready" + Jul 27 01:29:11.809: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 8.692658ms + Jul 27 01:29:11.809: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:29:13.819: INFO: Pod "pod-with-poststart-exec-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.018578536s + Jul 27 01:29:13.819: INFO: The phase of Pod pod-with-poststart-exec-hook is Running (Ready = true) + Jul 27 01:29:13.819: INFO: Pod "pod-with-poststart-exec-hook" satisfied condition "running and ready" + STEP: check poststart hook 07/27/23 01:29:13.827 + STEP: delete the pod with lifecycle hook 07/27/23 01:29:13.87 + Jul 27 01:29:13.884: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear + Jul 27 01:29:13.892: INFO: Pod pod-with-poststart-exec-hook still exists + Jul 27 01:29:15.893: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear + Jul 27 01:29:15.903: INFO: Pod pod-with-poststart-exec-hook still exists + Jul 27 01:29:17.893: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear + Jul 27 01:29:17.903: INFO: Pod pod-with-poststart-exec-hook no longer exists + [AfterEach] [sig-node] Container Lifecycle Hook test/e2e/framework/node/init/init.go:32 - Jun 12 20:41:41.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 01:29:17.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-9487" for this suite. 06/12/23 20:41:41.873 - STEP: Destroying namespace "webhook-9487-markers" for this suite. 06/12/23 20:41:41.896 + STEP: Destroying namespace "container-lifecycle-hook-5993" for this suite. 
07/27/23 01:29:17.919 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should mutate custom resource with different stored version [Conformance] - test/e2e/apimachinery/webhook.go:323 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 +[BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:41:41.937 -Jun 12 20:41:41.937: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 20:41:41.939 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:41:41.993 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:41:42.043 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:29:17.956 +Jul 27 01:29:17.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename gc 07/27/23 01:29:17.957 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:29:17.998 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:29:18.01 +[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 20:41:42.113 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 20:41:44.391 -STEP: Deploying the webhook pod 06/12/23 20:41:44.419 -STEP: Wait for the deployment to be ready 06/12/23 20:41:44.446 -Jun 12 20:41:44.459: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 20:41:46.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 44, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 44, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 20:41:48.488 -STEP: Verifying the service has paired with the endpoint 06/12/23 20:41:48.527 -Jun 12 20:41:49.528: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should mutate custom resource with different stored version [Conformance] - test/e2e/apimachinery/webhook.go:323 -Jun 12 20:41:49.543: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4856-crds.webhook.example.com via the 
AdmissionRegistration API 06/12/23 20:41:50.087 -STEP: Creating a custom resource while v1 is storage version 06/12/23 20:41:50.127 -STEP: Patching Custom Resource Definition to set v2 as storage 06/12/23 20:41:52.316 -STEP: Patching the custom resource while v2 is storage version 06/12/23 20:41:52.328 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[It] should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 +STEP: create the rc 07/27/23 01:29:18.036 +STEP: delete the rc 07/27/23 01:29:23.079 +STEP: wait for the rc to be deleted 07/27/23 01:29:23.112 +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 07/27/23 01:29:28.134 +STEP: Gathering metrics 07/27/23 01:29:58.166 +W0727 01:29:58.185663 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Jul 27 01:29:58.185: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Jul 27 01:29:58.185: INFO: Deleting pod "simpletest.rc-22krd" in namespace "gc-6844" +Jul 27 01:29:58.204: INFO: Deleting pod "simpletest.rc-2562p" in namespace "gc-6844" +Jul 27 01:29:58.228: INFO: Deleting pod "simpletest.rc-25s7x" in namespace "gc-6844" +Jul 27 01:29:58.262: INFO: Deleting pod "simpletest.rc-26qm6" in namespace "gc-6844" +Jul 27 01:29:58.303: INFO: Deleting pod "simpletest.rc-27g22" in namespace "gc-6844" +Jul 27 01:29:58.325: INFO: Deleting pod "simpletest.rc-28cd5" in namespace "gc-6844" +Jul 27 01:29:58.354: INFO: Deleting pod "simpletest.rc-2mb8x" in namespace "gc-6844" +Jul 27 01:29:58.374: INFO: Deleting pod "simpletest.rc-2mlsr" in namespace "gc-6844" +Jul 27 01:29:58.401: INFO: Deleting pod "simpletest.rc-2rg2m" in namespace "gc-6844" +Jul 27 01:29:58.430: INFO: Deleting pod "simpletest.rc-2t88b" in namespace "gc-6844" +Jul 27 01:29:58.463: INFO: Deleting pod "simpletest.rc-42fgq" in namespace "gc-6844" +Jul 27 01:29:58.488: INFO: Deleting pod "simpletest.rc-4jzck" in namespace "gc-6844" +Jul 27 01:29:58.525: INFO: Deleting pod "simpletest.rc-4r2lh" in namespace "gc-6844" +Jul 27 01:29:58.548: INFO: Deleting pod "simpletest.rc-4t82c" in namespace "gc-6844" +Jul 27 01:29:58.567: INFO: Deleting pod "simpletest.rc-57vd2" in namespace "gc-6844" +Jul 27 01:29:58.591: INFO: Deleting pod "simpletest.rc-5hctb" in namespace "gc-6844" +Jul 27 01:29:58.635: INFO: Deleting pod "simpletest.rc-6259h" in namespace "gc-6844" +Jul 27 01:29:58.662: INFO: Deleting pod "simpletest.rc-6htf6" in namespace "gc-6844" +Jul 27 01:29:58.699: INFO: Deleting pod "simpletest.rc-6mklr" in namespace "gc-6844" +Jul 27 
01:29:58.744: INFO: Deleting pod "simpletest.rc-6r9bz" in namespace "gc-6844" +Jul 27 01:29:58.788: INFO: Deleting pod "simpletest.rc-6swbb" in namespace "gc-6844" +Jul 27 01:29:58.825: INFO: Deleting pod "simpletest.rc-75xbf" in namespace "gc-6844" +Jul 27 01:29:58.863: INFO: Deleting pod "simpletest.rc-7899j" in namespace "gc-6844" +Jul 27 01:29:58.905: INFO: Deleting pod "simpletest.rc-7b26t" in namespace "gc-6844" +Jul 27 01:29:58.931: INFO: Deleting pod "simpletest.rc-7qxx8" in namespace "gc-6844" +Jul 27 01:29:58.955: INFO: Deleting pod "simpletest.rc-7r7cq" in namespace "gc-6844" +Jul 27 01:29:58.993: INFO: Deleting pod "simpletest.rc-7sll7" in namespace "gc-6844" +Jul 27 01:29:59.021: INFO: Deleting pod "simpletest.rc-8hjrx" in namespace "gc-6844" +Jul 27 01:29:59.048: INFO: Deleting pod "simpletest.rc-8nfrl" in namespace "gc-6844" +Jul 27 01:29:59.072: INFO: Deleting pod "simpletest.rc-9x969" in namespace "gc-6844" +Jul 27 01:29:59.093: INFO: Deleting pod "simpletest.rc-9zglp" in namespace "gc-6844" +Jul 27 01:29:59.119: INFO: Deleting pod "simpletest.rc-bjnld" in namespace "gc-6844" +Jul 27 01:29:59.143: INFO: Deleting pod "simpletest.rc-bz2sh" in namespace "gc-6844" +Jul 27 01:29:59.184: INFO: Deleting pod "simpletest.rc-cdqr4" in namespace "gc-6844" +Jul 27 01:29:59.215: INFO: Deleting pod "simpletest.rc-cl4gt" in namespace "gc-6844" +Jul 27 01:29:59.246: INFO: Deleting pod "simpletest.rc-cp9d7" in namespace "gc-6844" +Jul 27 01:29:59.278: INFO: Deleting pod "simpletest.rc-cvpmx" in namespace "gc-6844" +Jul 27 01:29:59.312: INFO: Deleting pod "simpletest.rc-czrcv" in namespace "gc-6844" +Jul 27 01:29:59.337: INFO: Deleting pod "simpletest.rc-d2ccm" in namespace "gc-6844" +Jul 27 01:29:59.372: INFO: Deleting pod "simpletest.rc-dbbxl" in namespace "gc-6844" +Jul 27 01:29:59.410: INFO: Deleting pod "simpletest.rc-ddjdk" in namespace "gc-6844" +Jul 27 01:29:59.445: INFO: Deleting pod "simpletest.rc-djhdz" in namespace "gc-6844" +Jul 27 01:29:59.502: INFO: Deleting pod "simpletest.rc-ds8tv" in namespace "gc-6844" +Jul 27 01:29:59.534: INFO: Deleting pod "simpletest.rc-fbn27" in namespace "gc-6844" +Jul 27 01:29:59.564: INFO: Deleting pod "simpletest.rc-fhwtr" in namespace "gc-6844" +Jul 27 01:29:59.617: INFO: Deleting pod "simpletest.rc-fpg4g" in namespace "gc-6844" +Jul 27 01:29:59.721: INFO: Deleting pod "simpletest.rc-g2jnb" in namespace "gc-6844" +Jul 27 01:29:59.799: INFO: Deleting pod "simpletest.rc-g96g9" in namespace "gc-6844" +Jul 27 01:29:59.822: INFO: Deleting pod "simpletest.rc-gbdnt" in namespace "gc-6844" +Jul 27 01:29:59.857: INFO: Deleting pod "simpletest.rc-gqtwl" in namespace "gc-6844" +Jul 27 01:29:59.881: INFO: Deleting pod "simpletest.rc-gwcr2" in namespace "gc-6844" +Jul 27 01:29:59.909: INFO: Deleting pod "simpletest.rc-h48x2" in namespace "gc-6844" +Jul 27 01:29:59.939: INFO: Deleting pod "simpletest.rc-hh6ng" in namespace "gc-6844" +Jul 27 01:29:59.965: INFO: Deleting pod "simpletest.rc-hvc2q" in namespace "gc-6844" +Jul 27 01:29:59.986: INFO: Deleting pod "simpletest.rc-k8b5g" in namespace "gc-6844" +Jul 27 01:30:00.010: INFO: Deleting pod "simpletest.rc-kbblp" in namespace "gc-6844" +Jul 27 01:30:00.029: INFO: Deleting pod "simpletest.rc-kfj5f" in namespace "gc-6844" +Jul 27 01:30:00.057: INFO: Deleting pod "simpletest.rc-kgcqc" in namespace "gc-6844" +Jul 27 01:30:00.094: INFO: Deleting pod "simpletest.rc-kgzxk" in namespace "gc-6844" +Jul 27 01:30:00.130: INFO: Deleting pod "simpletest.rc-ljq5h" in namespace "gc-6844" +Jul 27 01:30:00.203: INFO: Deleting 
pod "simpletest.rc-lmzp5" in namespace "gc-6844" +Jul 27 01:30:00.253: INFO: Deleting pod "simpletest.rc-ls9gw" in namespace "gc-6844" +Jul 27 01:30:00.289: INFO: Deleting pod "simpletest.rc-m8p6w" in namespace "gc-6844" +Jul 27 01:30:00.331: INFO: Deleting pod "simpletest.rc-n86zp" in namespace "gc-6844" +Jul 27 01:30:00.356: INFO: Deleting pod "simpletest.rc-nfjx8" in namespace "gc-6844" +Jul 27 01:30:00.388: INFO: Deleting pod "simpletest.rc-nmmmh" in namespace "gc-6844" +Jul 27 01:30:00.461: INFO: Deleting pod "simpletest.rc-p4sfm" in namespace "gc-6844" +Jul 27 01:30:00.513: INFO: Deleting pod "simpletest.rc-pxjk4" in namespace "gc-6844" +Jul 27 01:30:00.540: INFO: Deleting pod "simpletest.rc-q7h8v" in namespace "gc-6844" +Jul 27 01:30:00.561: INFO: Deleting pod "simpletest.rc-qd4jg" in namespace "gc-6844" +Jul 27 01:30:00.588: INFO: Deleting pod "simpletest.rc-qm6fj" in namespace "gc-6844" +Jul 27 01:30:00.613: INFO: Deleting pod "simpletest.rc-qr464" in namespace "gc-6844" +Jul 27 01:30:00.643: INFO: Deleting pod "simpletest.rc-qvwdf" in namespace "gc-6844" +Jul 27 01:30:00.672: INFO: Deleting pod "simpletest.rc-r4857" in namespace "gc-6844" +Jul 27 01:30:00.711: INFO: Deleting pod "simpletest.rc-r5drr" in namespace "gc-6844" +Jul 27 01:30:00.742: INFO: Deleting pod "simpletest.rc-r6snm" in namespace "gc-6844" +Jul 27 01:30:00.771: INFO: Deleting pod "simpletest.rc-rgbl9" in namespace "gc-6844" +Jul 27 01:30:00.804: INFO: Deleting pod "simpletest.rc-rwmvg" in namespace "gc-6844" +Jul 27 01:30:00.829: INFO: Deleting pod "simpletest.rc-rx7xl" in namespace "gc-6844" +Jul 27 01:30:00.854: INFO: Deleting pod "simpletest.rc-s5vzj" in namespace "gc-6844" +Jul 27 01:30:00.885: INFO: Deleting pod "simpletest.rc-s72mg" in namespace "gc-6844" +Jul 27 01:30:00.914: INFO: Deleting pod "simpletest.rc-stgw4" in namespace "gc-6844" +Jul 27 01:30:00.939: INFO: Deleting pod "simpletest.rc-t6ctw" in namespace "gc-6844" +Jul 27 01:30:00.975: INFO: Deleting pod "simpletest.rc-tpnks" in namespace "gc-6844" +Jul 27 01:30:01.001: INFO: Deleting pod "simpletest.rc-v772s" in namespace "gc-6844" +Jul 27 01:30:01.030: INFO: Deleting pod "simpletest.rc-vfjpp" in namespace "gc-6844" +Jul 27 01:30:01.062: INFO: Deleting pod "simpletest.rc-vlxq9" in namespace "gc-6844" +Jul 27 01:30:01.094: INFO: Deleting pod "simpletest.rc-w8lhl" in namespace "gc-6844" +Jul 27 01:30:01.125: INFO: Deleting pod "simpletest.rc-wjvvw" in namespace "gc-6844" +Jul 27 01:30:01.152: INFO: Deleting pod "simpletest.rc-wsv28" in namespace "gc-6844" +Jul 27 01:30:01.186: INFO: Deleting pod "simpletest.rc-wz2k2" in namespace "gc-6844" +Jul 27 01:30:01.227: INFO: Deleting pod "simpletest.rc-x5pxg" in namespace "gc-6844" +Jul 27 01:30:01.300: INFO: Deleting pod "simpletest.rc-xhlw7" in namespace "gc-6844" +Jul 27 01:30:01.354: INFO: Deleting pod "simpletest.rc-xjdzk" in namespace "gc-6844" +Jul 27 01:30:01.389: INFO: Deleting pod "simpletest.rc-xs675" in namespace "gc-6844" +Jul 27 01:30:01.432: INFO: Deleting pod "simpletest.rc-xsrm4" in namespace "gc-6844" +Jul 27 01:30:01.494: INFO: Deleting pod "simpletest.rc-z45zx" in namespace "gc-6844" +Jul 27 01:30:01.545: INFO: Deleting pod "simpletest.rc-zft2p" in namespace "gc-6844" +Jul 27 01:30:01.578: INFO: Deleting pod "simpletest.rc-zkdnf" in namespace "gc-6844" +Jul 27 01:30:01.605: INFO: Deleting pod "simpletest.rc-ztsl5" in namespace "gc-6844" +[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 -Jun 12 20:41:52.650: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 01:30:01.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-2284" for this suite. 06/12/23 20:41:53.161 -STEP: Destroying namespace "webhook-2284-markers" for this suite. 06/12/23 20:41:53.227 +STEP: Destroying namespace "gc-6844" for this suite. 07/27/23 01:30:01.715 ------------------------------ -• [SLOW TEST] [11.445 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +• [SLOW TEST] [43.809 seconds] +[sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23 - should mutate custom resource with different stored version [Conformance] - test/e2e/apimachinery/webhook.go:323 + should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:41:41.937 - Jun 12 20:41:41.937: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 20:41:41.939 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:41:41.993 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:41:42.043 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:29:17.956 + Jul 27 01:29:17.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename gc 07/27/23 01:29:17.957 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:29:17.998 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:29:18.01 + [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 20:41:42.113 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 20:41:44.391 - STEP: Deploying the webhook pod 06/12/23 20:41:44.419 - STEP: Wait for the deployment to be ready 06/12/23 20:41:44.446 - Jun 12 20:41:44.459: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 20:41:46.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 44, 0, time.Local), LastTransitionTime:time.Date(2023, 
time.June, 12, 20, 41, 44, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 44, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 20:41:48.488 - STEP: Verifying the service has paired with the endpoint 06/12/23 20:41:48.527 - Jun 12 20:41:49.528: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should mutate custom resource with different stored version [Conformance] - test/e2e/apimachinery/webhook.go:323 - Jun 12 20:41:49.543: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4856-crds.webhook.example.com via the AdmissionRegistration API 06/12/23 20:41:50.087 - STEP: Creating a custom resource while v1 is storage version 06/12/23 20:41:50.127 - STEP: Patching Custom Resource Definition to set v2 as storage 06/12/23 20:41:52.316 - STEP: Patching the custom resource while v2 is storage version 06/12/23 20:41:52.328 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/framework/node/init/init.go:32 - Jun 12 20:41:52.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-2284" for this suite. 06/12/23 20:41:53.161 - STEP: Destroying namespace "webhook-2284-markers" for this suite. 
06/12/23 20:41:53.227 - << End Captured GinkgoWriter Output ------------------------------- -SSSSS ------------------------------- -[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - should be able to convert a non homogeneous list of CRs [Conformance] - test/e2e/apimachinery/crd_conversion_webhook.go:184 -[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:41:53.487 -Jun 12 20:41:53.487: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-webhook 06/12/23 20:41:53.49 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:41:53.627 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:41:53.753 -[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + [It] should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 + STEP: create the rc 07/27/23 01:29:18.036 + STEP: delete the rc 07/27/23 01:29:23.079 + STEP: wait for the rc to be deleted 07/27/23 01:29:23.112 + STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 07/27/23 01:29:28.134 + STEP: Gathering metrics 07/27/23 01:29:58.166 + W0727 01:29:58.185663 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. + Jul 27 01:29:58.185: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + Jul 27 01:29:58.185: INFO: Deleting pod "simpletest.rc-22krd" in namespace "gc-6844" + Jul 27 01:29:58.204: INFO: Deleting pod "simpletest.rc-2562p" in namespace "gc-6844" + Jul 27 01:29:58.228: INFO: Deleting pod "simpletest.rc-25s7x" in namespace "gc-6844" + Jul 27 01:29:58.262: INFO: Deleting pod "simpletest.rc-26qm6" in namespace "gc-6844" + Jul 27 01:29:58.303: INFO: Deleting pod "simpletest.rc-27g22" in namespace "gc-6844" + Jul 27 01:29:58.325: INFO: Deleting pod "simpletest.rc-28cd5" in namespace "gc-6844" + Jul 27 01:29:58.354: INFO: Deleting pod "simpletest.rc-2mb8x" in namespace "gc-6844" + Jul 27 01:29:58.374: INFO: Deleting pod "simpletest.rc-2mlsr" in namespace "gc-6844" + Jul 27 01:29:58.401: INFO: Deleting pod "simpletest.rc-2rg2m" in namespace "gc-6844" + Jul 27 01:29:58.430: INFO: Deleting pod "simpletest.rc-2t88b" in namespace "gc-6844" + Jul 27 01:29:58.463: INFO: Deleting pod "simpletest.rc-42fgq" in namespace "gc-6844" + Jul 27 01:29:58.488: INFO: Deleting pod "simpletest.rc-4jzck" in namespace "gc-6844" 
+ Jul 27 01:29:58.525: INFO: Deleting pod "simpletest.rc-4r2lh" in namespace "gc-6844" + Jul 27 01:29:58.548: INFO: Deleting pod "simpletest.rc-4t82c" in namespace "gc-6844" + Jul 27 01:29:58.567: INFO: Deleting pod "simpletest.rc-57vd2" in namespace "gc-6844" + Jul 27 01:29:58.591: INFO: Deleting pod "simpletest.rc-5hctb" in namespace "gc-6844" + Jul 27 01:29:58.635: INFO: Deleting pod "simpletest.rc-6259h" in namespace "gc-6844" + Jul 27 01:29:58.662: INFO: Deleting pod "simpletest.rc-6htf6" in namespace "gc-6844" + Jul 27 01:29:58.699: INFO: Deleting pod "simpletest.rc-6mklr" in namespace "gc-6844" + Jul 27 01:29:58.744: INFO: Deleting pod "simpletest.rc-6r9bz" in namespace "gc-6844" + Jul 27 01:29:58.788: INFO: Deleting pod "simpletest.rc-6swbb" in namespace "gc-6844" + Jul 27 01:29:58.825: INFO: Deleting pod "simpletest.rc-75xbf" in namespace "gc-6844" + Jul 27 01:29:58.863: INFO: Deleting pod "simpletest.rc-7899j" in namespace "gc-6844" + Jul 27 01:29:58.905: INFO: Deleting pod "simpletest.rc-7b26t" in namespace "gc-6844" + Jul 27 01:29:58.931: INFO: Deleting pod "simpletest.rc-7qxx8" in namespace "gc-6844" + Jul 27 01:29:58.955: INFO: Deleting pod "simpletest.rc-7r7cq" in namespace "gc-6844" + Jul 27 01:29:58.993: INFO: Deleting pod "simpletest.rc-7sll7" in namespace "gc-6844" + Jul 27 01:29:59.021: INFO: Deleting pod "simpletest.rc-8hjrx" in namespace "gc-6844" + Jul 27 01:29:59.048: INFO: Deleting pod "simpletest.rc-8nfrl" in namespace "gc-6844" + Jul 27 01:29:59.072: INFO: Deleting pod "simpletest.rc-9x969" in namespace "gc-6844" + Jul 27 01:29:59.093: INFO: Deleting pod "simpletest.rc-9zglp" in namespace "gc-6844" + Jul 27 01:29:59.119: INFO: Deleting pod "simpletest.rc-bjnld" in namespace "gc-6844" + Jul 27 01:29:59.143: INFO: Deleting pod "simpletest.rc-bz2sh" in namespace "gc-6844" + Jul 27 01:29:59.184: INFO: Deleting pod "simpletest.rc-cdqr4" in namespace "gc-6844" + Jul 27 01:29:59.215: INFO: Deleting pod "simpletest.rc-cl4gt" in namespace "gc-6844" + Jul 27 01:29:59.246: INFO: Deleting pod "simpletest.rc-cp9d7" in namespace "gc-6844" + Jul 27 01:29:59.278: INFO: Deleting pod "simpletest.rc-cvpmx" in namespace "gc-6844" + Jul 27 01:29:59.312: INFO: Deleting pod "simpletest.rc-czrcv" in namespace "gc-6844" + Jul 27 01:29:59.337: INFO: Deleting pod "simpletest.rc-d2ccm" in namespace "gc-6844" + Jul 27 01:29:59.372: INFO: Deleting pod "simpletest.rc-dbbxl" in namespace "gc-6844" + Jul 27 01:29:59.410: INFO: Deleting pod "simpletest.rc-ddjdk" in namespace "gc-6844" + Jul 27 01:29:59.445: INFO: Deleting pod "simpletest.rc-djhdz" in namespace "gc-6844" + Jul 27 01:29:59.502: INFO: Deleting pod "simpletest.rc-ds8tv" in namespace "gc-6844" + Jul 27 01:29:59.534: INFO: Deleting pod "simpletest.rc-fbn27" in namespace "gc-6844" + Jul 27 01:29:59.564: INFO: Deleting pod "simpletest.rc-fhwtr" in namespace "gc-6844" + Jul 27 01:29:59.617: INFO: Deleting pod "simpletest.rc-fpg4g" in namespace "gc-6844" + Jul 27 01:29:59.721: INFO: Deleting pod "simpletest.rc-g2jnb" in namespace "gc-6844" + Jul 27 01:29:59.799: INFO: Deleting pod "simpletest.rc-g96g9" in namespace "gc-6844" + Jul 27 01:29:59.822: INFO: Deleting pod "simpletest.rc-gbdnt" in namespace "gc-6844" + Jul 27 01:29:59.857: INFO: Deleting pod "simpletest.rc-gqtwl" in namespace "gc-6844" + Jul 27 01:29:59.881: INFO: Deleting pod "simpletest.rc-gwcr2" in namespace "gc-6844" + Jul 27 01:29:59.909: INFO: Deleting pod "simpletest.rc-h48x2" in namespace "gc-6844" + Jul 27 01:29:59.939: INFO: Deleting pod "simpletest.rc-hh6ng" in 
namespace "gc-6844" + Jul 27 01:29:59.965: INFO: Deleting pod "simpletest.rc-hvc2q" in namespace "gc-6844" + Jul 27 01:29:59.986: INFO: Deleting pod "simpletest.rc-k8b5g" in namespace "gc-6844" + Jul 27 01:30:00.010: INFO: Deleting pod "simpletest.rc-kbblp" in namespace "gc-6844" + Jul 27 01:30:00.029: INFO: Deleting pod "simpletest.rc-kfj5f" in namespace "gc-6844" + Jul 27 01:30:00.057: INFO: Deleting pod "simpletest.rc-kgcqc" in namespace "gc-6844" + Jul 27 01:30:00.094: INFO: Deleting pod "simpletest.rc-kgzxk" in namespace "gc-6844" + Jul 27 01:30:00.130: INFO: Deleting pod "simpletest.rc-ljq5h" in namespace "gc-6844" + Jul 27 01:30:00.203: INFO: Deleting pod "simpletest.rc-lmzp5" in namespace "gc-6844" + Jul 27 01:30:00.253: INFO: Deleting pod "simpletest.rc-ls9gw" in namespace "gc-6844" + Jul 27 01:30:00.289: INFO: Deleting pod "simpletest.rc-m8p6w" in namespace "gc-6844" + Jul 27 01:30:00.331: INFO: Deleting pod "simpletest.rc-n86zp" in namespace "gc-6844" + Jul 27 01:30:00.356: INFO: Deleting pod "simpletest.rc-nfjx8" in namespace "gc-6844" + Jul 27 01:30:00.388: INFO: Deleting pod "simpletest.rc-nmmmh" in namespace "gc-6844" + Jul 27 01:30:00.461: INFO: Deleting pod "simpletest.rc-p4sfm" in namespace "gc-6844" + Jul 27 01:30:00.513: INFO: Deleting pod "simpletest.rc-pxjk4" in namespace "gc-6844" + Jul 27 01:30:00.540: INFO: Deleting pod "simpletest.rc-q7h8v" in namespace "gc-6844" + Jul 27 01:30:00.561: INFO: Deleting pod "simpletest.rc-qd4jg" in namespace "gc-6844" + Jul 27 01:30:00.588: INFO: Deleting pod "simpletest.rc-qm6fj" in namespace "gc-6844" + Jul 27 01:30:00.613: INFO: Deleting pod "simpletest.rc-qr464" in namespace "gc-6844" + Jul 27 01:30:00.643: INFO: Deleting pod "simpletest.rc-qvwdf" in namespace "gc-6844" + Jul 27 01:30:00.672: INFO: Deleting pod "simpletest.rc-r4857" in namespace "gc-6844" + Jul 27 01:30:00.711: INFO: Deleting pod "simpletest.rc-r5drr" in namespace "gc-6844" + Jul 27 01:30:00.742: INFO: Deleting pod "simpletest.rc-r6snm" in namespace "gc-6844" + Jul 27 01:30:00.771: INFO: Deleting pod "simpletest.rc-rgbl9" in namespace "gc-6844" + Jul 27 01:30:00.804: INFO: Deleting pod "simpletest.rc-rwmvg" in namespace "gc-6844" + Jul 27 01:30:00.829: INFO: Deleting pod "simpletest.rc-rx7xl" in namespace "gc-6844" + Jul 27 01:30:00.854: INFO: Deleting pod "simpletest.rc-s5vzj" in namespace "gc-6844" + Jul 27 01:30:00.885: INFO: Deleting pod "simpletest.rc-s72mg" in namespace "gc-6844" + Jul 27 01:30:00.914: INFO: Deleting pod "simpletest.rc-stgw4" in namespace "gc-6844" + Jul 27 01:30:00.939: INFO: Deleting pod "simpletest.rc-t6ctw" in namespace "gc-6844" + Jul 27 01:30:00.975: INFO: Deleting pod "simpletest.rc-tpnks" in namespace "gc-6844" + Jul 27 01:30:01.001: INFO: Deleting pod "simpletest.rc-v772s" in namespace "gc-6844" + Jul 27 01:30:01.030: INFO: Deleting pod "simpletest.rc-vfjpp" in namespace "gc-6844" + Jul 27 01:30:01.062: INFO: Deleting pod "simpletest.rc-vlxq9" in namespace "gc-6844" + Jul 27 01:30:01.094: INFO: Deleting pod "simpletest.rc-w8lhl" in namespace "gc-6844" + Jul 27 01:30:01.125: INFO: Deleting pod "simpletest.rc-wjvvw" in namespace "gc-6844" + Jul 27 01:30:01.152: INFO: Deleting pod "simpletest.rc-wsv28" in namespace "gc-6844" + Jul 27 01:30:01.186: INFO: Deleting pod "simpletest.rc-wz2k2" in namespace "gc-6844" + Jul 27 01:30:01.227: INFO: Deleting pod "simpletest.rc-x5pxg" in namespace "gc-6844" + Jul 27 01:30:01.300: INFO: Deleting pod "simpletest.rc-xhlw7" in namespace "gc-6844" + Jul 27 01:30:01.354: INFO: Deleting pod 
"simpletest.rc-xjdzk" in namespace "gc-6844" + Jul 27 01:30:01.389: INFO: Deleting pod "simpletest.rc-xs675" in namespace "gc-6844" + Jul 27 01:30:01.432: INFO: Deleting pod "simpletest.rc-xsrm4" in namespace "gc-6844" + Jul 27 01:30:01.494: INFO: Deleting pod "simpletest.rc-z45zx" in namespace "gc-6844" + Jul 27 01:30:01.545: INFO: Deleting pod "simpletest.rc-zft2p" in namespace "gc-6844" + Jul 27 01:30:01.578: INFO: Deleting pod "simpletest.rc-zkdnf" in namespace "gc-6844" + Jul 27 01:30:01.605: INFO: Deleting pod "simpletest.rc-ztsl5" in namespace "gc-6844" + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Jul 27 01:30:01.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-6844" for this suite. 07/27/23 01:30:01.715 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 01:30:01.768 +Jul 27 01:30:01.768: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 01:30:01.776 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:30:01.855 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:30:01.867 +[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/crd_conversion_webhook.go:128 -STEP: Setting up server cert 06/12/23 20:41:53.893 -STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 06/12/23 20:41:56.07 -STEP: Deploying the custom resource conversion webhook pod 06/12/23 20:41:56.094 -STEP: Wait for the deployment to be ready 06/12/23 20:41:56.122 -Jun 12 20:41:56.137: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set -Jun 12 20:41:58.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 56, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 56, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 20:42:00.165 -STEP: Verifying 
the service has paired with the endpoint 06/12/23 20:42:00.203 -Jun 12 20:42:01.204: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 -[It] should be able to convert a non homogeneous list of CRs [Conformance] - test/e2e/apimachinery/crd_conversion_webhook.go:184 -Jun 12 20:42:01.216: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Creating a v1 custom resource 06/12/23 20:42:04.039 -STEP: Create a v2 custom resource 06/12/23 20:42:04.076 -STEP: List CRs in v1 06/12/23 20:42:04.149 -STEP: List CRs in v2 06/12/23 20:42:04.164 -[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 +STEP: Creating a pod to test downward API volume plugin 07/27/23 01:30:01.884 +Jul 27 01:30:01.933: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f" in namespace "projected-5160" to be "Succeeded or Failed" +Jul 27 01:30:01.949: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.222532ms +Jul 27 01:30:03.966: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032951841s +Jul 27 01:30:05.976: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043198028s +Jul 27 01:30:07.963: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030306839s +Jul 27 01:30:09.967: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.034211964s +STEP: Saw pod success 07/27/23 01:30:09.967 +Jul 27 01:30:09.967: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f" satisfied condition "Succeeded or Failed" +Jul 27 01:30:09.977: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f container client-container: +STEP: delete the pod 07/27/23 01:30:09.999 +Jul 27 01:30:10.026: INFO: Waiting for pod downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f to disappear +Jul 27 01:30:10.040: INFO: Pod downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f no longer exists +[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 20:42:04.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/crd_conversion_webhook.go:139 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +Jul 27 01:30:10.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "crd-webhook-8678" for this suite. 06/12/23 20:42:04.872 +STEP: Destroying namespace "projected-5160" for this suite. 07/27/23 01:30:10.055 ------------------------------ -• [SLOW TEST] [11.409 seconds] -[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - should be able to convert a non homogeneous list of CRs [Conformance] - test/e2e/apimachinery/crd_conversion_webhook.go:184 +• [SLOW TEST] [8.335 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:41:53.487 - Jun 12 20:41:53.487: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-webhook 06/12/23 20:41:53.49 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:41:53.627 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:41:53.753 - [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:30:01.768 + Jul 27 01:30:01.768: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 01:30:01.776 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:30:01.855 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:30:01.867 + [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] 
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/crd_conversion_webhook.go:128 - STEP: Setting up server cert 06/12/23 20:41:53.893 - STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 06/12/23 20:41:56.07 - STEP: Deploying the custom resource conversion webhook pod 06/12/23 20:41:56.094 - STEP: Wait for the deployment to be ready 06/12/23 20:41:56.122 - Jun 12 20:41:56.137: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set - Jun 12 20:41:58.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 56, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 41, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 41, 56, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 20:42:00.165 - STEP: Verifying the service has paired with the endpoint 06/12/23 20:42:00.203 - Jun 12 20:42:01.204: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 - [It] should be able to convert a non homogeneous list of CRs [Conformance] - test/e2e/apimachinery/crd_conversion_webhook.go:184 - Jun 12 20:42:01.216: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Creating a v1 custom resource 06/12/23 20:42:04.039 - STEP: Create a v2 custom resource 06/12/23 20:42:04.076 - STEP: List CRs in v1 06/12/23 20:42:04.149 - STEP: List CRs in v2 06/12/23 20:42:04.164 - [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 + STEP: Creating a pod to test downward API volume plugin 07/27/23 01:30:01.884 + Jul 27 01:30:01.933: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f" in namespace "projected-5160" to be "Succeeded or Failed" + Jul 27 01:30:01.949: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.222532ms + Jul 27 01:30:03.966: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032951841s + Jul 27 01:30:05.976: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043198028s + Jul 27 01:30:07.963: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030306839s + Jul 27 01:30:09.967: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.034211964s + STEP: Saw pod success 07/27/23 01:30:09.967 + Jul 27 01:30:09.967: INFO: Pod "downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f" satisfied condition "Succeeded or Failed" + Jul 27 01:30:09.977: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f container client-container: + STEP: delete the pod 07/27/23 01:30:09.999 + Jul 27 01:30:10.026: INFO: Waiting for pod downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f to disappear + Jul 27 01:30:10.040: INFO: Pod downwardapi-volume-5fd19170-ea84-4884-a754-1d170829a34f no longer exists + [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 20:42:04.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/crd_conversion_webhook.go:139 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + Jul 27 01:30:10.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "crd-webhook-8678" for this suite. 06/12/23 20:42:04.872 + STEP: Destroying namespace "projected-5160" for this suite. 07/27/23 01:30:10.055 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Pods - should run through the lifecycle of Pods and PodStatus [Conformance] - test/e2e/common/node/pods.go:896 -[BeforeEach] [sig-node] Pods +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:72 +[BeforeEach] [sig-node] Probing container set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:42:04.901 -Jun 12 20:42:04.901: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 20:42:04.918 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:42:04.98 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:42:04.987 -[BeforeEach] [sig-node] Pods +STEP: Creating a kubernetes client 07/27/23 01:30:10.116 +Jul 27 01:30:10.116: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-probe 07/27/23 01:30:10.117 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:30:10.172 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:30:10.182 +[BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 -[It] should run through the lifecycle of Pods and PodStatus [Conformance] - test/e2e/common/node/pods.go:896 -STEP: creating a Pod with a static label 06/12/23 20:42:05.028 -STEP: watching for Pod to be ready 06/12/23 
20:42:05.067 -Jun 12 20:42:05.078: INFO: observed Pod pod-test in namespace pods-887 in phase Pending with labels: map[test-pod-static:true] & conditions [] -Jun 12 20:42:05.078: INFO: observed Pod pod-test in namespace pods-887 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC }] -Jun 12 20:42:05.117: INFO: observed Pod pod-test in namespace pods-887 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC }] -Jun 12 20:42:06.433: INFO: observed Pod pod-test in namespace pods-887 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC }] -Jun 12 20:42:06.555: INFO: observed Pod pod-test in namespace pods-887 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC }] -Jun 12 20:42:08.473: INFO: Found Pod pod-test in namespace pods-887 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC }] -STEP: patching the Pod with a new Label and updated data 06/12/23 20:42:08.485 -STEP: getting the Pod and ensuring that it's patched 06/12/23 20:42:08.525 -STEP: replacing the Pod's status Ready condition to False 06/12/23 20:42:08.533 -STEP: check the Pod again to ensure its Ready conditions are False 06/12/23 20:42:08.559 -STEP: deleting the Pod via a Collection with a LabelSelector 06/12/23 20:42:08.559 -STEP: watching for the Pod to be deleted 06/12/23 20:42:08.576 -Jun 12 20:42:08.582: INFO: observed event type MODIFIED -Jun 12 20:42:09.590: INFO: observed event type MODIFIED -Jun 12 20:42:11.034: INFO: observed event type MODIFIED -Jun 12 20:42:12.592: INFO: observed event type MODIFIED -Jun 12 20:42:12.622: INFO: observed event type MODIFIED -[AfterEach] [sig-node] Pods +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] with readiness probe should not be ready before initial delay and never 
restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:72 +Jul 27 01:30:10.221: INFO: Waiting up to 5m0s for pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311" in namespace "container-probe-3770" to be "running and ready" +Jul 27 01:30:10.229: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024546ms +Jul 27 01:30:10.229: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:30:12.242: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020745898s +Jul 27 01:30:12.242: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:30:14.247: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 4.025513391s +Jul 27 01:30:14.247: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) +Jul 27 01:30:16.242: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 6.021315375s +Jul 27 01:30:16.243: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) +Jul 27 01:30:18.240: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 8.019227663s +Jul 27 01:30:18.240: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) +Jul 27 01:30:20.239: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 10.017431189s +Jul 27 01:30:20.239: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) +Jul 27 01:30:22.252: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 12.030361179s +Jul 27 01:30:22.252: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) +Jul 27 01:30:24.239: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 14.017531786s +Jul 27 01:30:24.239: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) +Jul 27 01:30:26.239: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 16.017733563s +Jul 27 01:30:26.239: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) +Jul 27 01:30:28.240: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 18.018745705s +Jul 27 01:30:28.240: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) +Jul 27 01:30:30.243: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 20.021961772s +Jul 27 01:30:30.243: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) +Jul 27 01:30:32.238: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.016378871s +Jul 27 01:30:32.238: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = true) +Jul 27 01:30:32.238: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311" satisfied condition "running and ready" +Jul 27 01:30:32.246: INFO: Container started at 2023-07-27 01:30:11 +0000 UTC, pod became ready at 2023-07-27 01:30:30 +0000 UTC +[AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 -Jun 12 20:42:12.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Pods +Jul 27 01:30:32.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Pods +[DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Pods +[DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 -STEP: Destroying namespace "pods-887" for this suite. 06/12/23 20:42:12.644 +STEP: Destroying namespace "container-probe-3770" for this suite. 07/27/23 01:30:32.26 ------------------------------ -• [SLOW TEST] [7.768 seconds] -[sig-node] Pods +• [SLOW TEST] [22.170 seconds] +[sig-node] Probing container test/e2e/common/node/framework.go:23 - should run through the lifecycle of Pods and PodStatus [Conformance] - test/e2e/common/node/pods.go:896 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:72 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Pods + [BeforeEach] [sig-node] Probing container set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:42:04.901 - Jun 12 20:42:04.901: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 20:42:04.918 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:42:04.98 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:42:04.987 - [BeforeEach] [sig-node] Pods + STEP: Creating a kubernetes client 07/27/23 01:30:10.116 + Jul 27 01:30:10.116: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-probe 07/27/23 01:30:10.117 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:30:10.172 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:30:10.182 + [BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 - [It] should run through the lifecycle of Pods and PodStatus [Conformance] - test/e2e/common/node/pods.go:896 - STEP: creating a Pod with a static label 06/12/23 20:42:05.028 - STEP: watching for Pod to be ready 06/12/23 20:42:05.067 - Jun 12 20:42:05.078: INFO: observed Pod pod-test in namespace pods-887 in phase Pending with labels: map[test-pod-static:true] & conditions [] - Jun 12 20:42:05.078: INFO: observed Pod pod-test in namespace pods-887 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC }] - Jun 12 20:42:05.117: INFO: observed Pod pod-test in namespace pods-887 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC }] - Jun 12 20:42:06.433: INFO: observed Pod pod-test in namespace pods-887 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC }] - Jun 12 20:42:06.555: INFO: observed Pod pod-test in namespace pods-887 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC }] - Jun 12 20:42:08.473: INFO: Found Pod pod-test in namespace pods-887 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 20:42:05 +0000 UTC }] - STEP: patching the Pod with a new Label and updated data 06/12/23 20:42:08.485 - STEP: getting the Pod and ensuring that it's patched 06/12/23 20:42:08.525 - STEP: replacing the Pod's status Ready condition to False 06/12/23 20:42:08.533 - STEP: check the Pod again to ensure its Ready conditions are False 06/12/23 20:42:08.559 - STEP: deleting the Pod via a Collection with a LabelSelector 06/12/23 20:42:08.559 - STEP: watching for the Pod to be deleted 06/12/23 20:42:08.576 - Jun 12 20:42:08.582: INFO: observed event type MODIFIED - Jun 12 20:42:09.590: INFO: observed event type MODIFIED - Jun 12 20:42:11.034: INFO: observed event type MODIFIED - Jun 12 20:42:12.592: INFO: observed event type MODIFIED - Jun 12 20:42:12.622: INFO: observed event type MODIFIED - [AfterEach] [sig-node] Pods + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:72 + Jul 27 01:30:10.221: INFO: Waiting up to 5m0s for pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311" in namespace "container-probe-3770" to be "running and ready" + Jul 27 01:30:10.229: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.024546ms + Jul 27 01:30:10.229: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:30:12.242: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020745898s + Jul 27 01:30:12.242: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:30:14.247: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 4.025513391s + Jul 27 01:30:14.247: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) + Jul 27 01:30:16.242: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 6.021315375s + Jul 27 01:30:16.243: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) + Jul 27 01:30:18.240: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 8.019227663s + Jul 27 01:30:18.240: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) + Jul 27 01:30:20.239: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 10.017431189s + Jul 27 01:30:20.239: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) + Jul 27 01:30:22.252: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 12.030361179s + Jul 27 01:30:22.252: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) + Jul 27 01:30:24.239: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 14.017531786s + Jul 27 01:30:24.239: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) + Jul 27 01:30:26.239: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 16.017733563s + Jul 27 01:30:26.239: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) + Jul 27 01:30:28.240: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 18.018745705s + Jul 27 01:30:28.240: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) + Jul 27 01:30:30.243: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=false. Elapsed: 20.021961772s + Jul 27 01:30:30.243: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = false) + Jul 27 01:30:32.238: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.016378871s + Jul 27 01:30:32.238: INFO: The phase of Pod test-webserver-3223279e-0786-49be-b249-66c5bf685311 is Running (Ready = true) + Jul 27 01:30:32.238: INFO: Pod "test-webserver-3223279e-0786-49be-b249-66c5bf685311" satisfied condition "running and ready" + Jul 27 01:30:32.246: INFO: Container started at 2023-07-27 01:30:11 +0000 UTC, pod became ready at 2023-07-27 01:30:30 +0000 UTC + [AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 - Jun 12 20:42:12.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Pods + Jul 27 01:30:32.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 - STEP: Destroying namespace "pods-887" for this suite. 06/12/23 20:42:12.644 + STEP: Destroying namespace "container-probe-3770" for this suite. 07/27/23 01:30:32.26 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] InitContainer [NodeConformance] - should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] - test/e2e/common/node/init_container.go:458 + should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 [BeforeEach] [sig-node] InitContainer [NodeConformance] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:42:12.672 -Jun 12 20:42:12.672: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename init-container 06/12/23 20:42:12.676 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:42:12.728 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:42:12.738 +STEP: Creating a kubernetes client 07/27/23 01:30:32.286 +Jul 27 01:30:32.286: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename init-container 07/27/23 01:30:32.287 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:30:32.328 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:30:32.337 [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/common/node/init_container.go:165 -[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] - test/e2e/common/node/init_container.go:458 -STEP: creating the pod 06/12/23 20:42:12.748 -Jun 12 20:42:12.749: INFO: PodSpec: initContainers in spec.initContainers +[It] should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 +STEP: creating the pod 07/27/23 01:30:32.349 +Jul 27 01:30:32.349: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/node/init/init.go:32 -Jun 12 20:42:26.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 01:30:38.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] 
[sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] tear down framework | framework.go:193 -STEP: Destroying namespace "init-container-1431" for this suite. 06/12/23 20:42:27.08 +STEP: Destroying namespace "init-container-8011" for this suite. 07/27/23 01:30:38.416 ------------------------------ -• [SLOW TEST] [14.431 seconds] +• [SLOW TEST] [6.153 seconds] [sig-node] InitContainer [NodeConformance] test/e2e/common/node/framework.go:23 - should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] - test/e2e/common/node/init_container.go:458 + should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-node] InitContainer [NodeConformance] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:42:12.672 - Jun 12 20:42:12.672: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename init-container 06/12/23 20:42:12.676 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:42:12.728 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:42:12.738 + STEP: Creating a kubernetes client 07/27/23 01:30:32.286 + Jul 27 01:30:32.286: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename init-container 07/27/23 01:30:32.287 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:30:32.328 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:30:32.337 [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/common/node/init_container.go:165 - [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] - test/e2e/common/node/init_container.go:458 - STEP: creating the pod 06/12/23 20:42:12.748 - Jun 12 20:42:12.749: INFO: PodSpec: initContainers in spec.initContainers + [It] should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 + STEP: creating the pod 07/27/23 01:30:32.349 + Jul 27 01:30:32.349: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/node/init/init.go:32 - Jun 12 20:42:26.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 01:30:38.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] tear down framework | framework.go:193 - STEP: Destroying namespace "init-container-1431" for this suite. 06/12/23 20:42:27.08 + STEP: Destroying namespace "init-container-8011" for this suite. 
07/27/23 01:30:38.416 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +S ------------------------------ -[sig-apps] Job - should adopt matching orphans and release non-matching pods [Conformance] - test/e2e/apps/job.go:507 -[BeforeEach] [sig-apps] Job +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 +[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:42:27.108 -Jun 12 20:42:27.108: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename job 06/12/23 20:42:27.112 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:42:27.178 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:42:27.221 -[BeforeEach] [sig-apps] Job +STEP: Creating a kubernetes client 07/27/23 01:30:38.441 +Jul 27 01:30:38.441: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename daemonsets 07/27/23 01:30:38.442 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:30:38.511 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:30:38.524 +[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] should adopt matching orphans and release non-matching pods [Conformance] - test/e2e/apps/job.go:507 -STEP: Creating a job 06/12/23 20:42:27.253 -STEP: Ensuring active pods == parallelism 06/12/23 20:42:27.29 -STEP: Orphaning one of the Job's Pods 06/12/23 20:42:39.301 -Jun 12 20:42:39.831: INFO: Successfully updated pod "adopt-release-7bz2l" -STEP: Checking that the Job readopts the Pod 06/12/23 20:42:39.831 -Jun 12 20:42:39.832: INFO: Waiting up to 15m0s for pod "adopt-release-7bz2l" in namespace "job-7357" to be "adopted" -Jun 12 20:42:39.838: INFO: Pod "adopt-release-7bz2l": Phase="Running", Reason="", readiness=true. Elapsed: 6.027263ms -Jun 12 20:42:41.846: INFO: Pod "adopt-release-7bz2l": Phase="Running", Reason="", readiness=true. Elapsed: 2.014137979s -Jun 12 20:42:41.846: INFO: Pod "adopt-release-7bz2l" satisfied condition "adopted" -STEP: Removing the labels from the Job's Pod 06/12/23 20:42:41.847 -Jun 12 20:42:42.405: INFO: Successfully updated pod "adopt-release-7bz2l" -STEP: Checking that the Job releases the Pod 06/12/23 20:42:42.424 -Jun 12 20:42:42.425: INFO: Waiting up to 15m0s for pod "adopt-release-7bz2l" in namespace "job-7357" to be "released" -Jun 12 20:42:42.433: INFO: Pod "adopt-release-7bz2l": Phase="Running", Reason="", readiness=true. Elapsed: 8.315817ms -Jun 12 20:42:44.440: INFO: Pod "adopt-release-7bz2l": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.015235443s -Jun 12 20:42:44.440: INFO: Pod "adopt-release-7bz2l" satisfied condition "released" -[AfterEach] [sig-apps] Job +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 +Jul 27 01:30:38.739: INFO: Create a RollingUpdate DaemonSet +Jul 27 01:30:38.754: INFO: Check that daemon pods launch on every node of the cluster +Jul 27 01:30:38.805: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:30:38.805: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 01:30:39.829: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:30:39.829: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 01:30:40.872: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:30:40.872: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:30:41.834: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:30:41.834: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:30:42.830: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:30:42.831: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:30:43.836: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:30:43.836: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:30:44.921: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:30:44.921: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:30:45.841: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:30:45.841: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:30:46.841: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:30:46.841: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:30:47.835: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:30:47.835: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:30:48.837: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:30:48.837: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:30:49.831: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:30:49.831: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:30:50.832: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jul 27 01:30:50.832: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +Jul 27 01:30:50.832: INFO: Update the DaemonSet to trigger a rollout +Jul 27 01:30:50.909: INFO: Updating DaemonSet daemon-set +Jul 27 01:30:53.963: INFO: Roll back the DaemonSet before rollout is complete +Jul 27 01:30:54.020: INFO: Updating DaemonSet daemon-set +Jul 27 01:30:54.020: INFO: Make sure DaemonSet rollback is complete +Jul 27 01:30:54.068: INFO: Wrong image for pod: daemon-set-z4qnz. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-4, got: foo:non-existent. 
+Jul 27 01:30:54.068: INFO: Pod daemon-set-z4qnz is not available +Jul 27 01:31:00.095: INFO: Pod daemon-set-zrssl is not available +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 07/27/23 01:31:00.171 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6357, will wait for the garbage collector to delete the pods 07/27/23 01:31:00.171 +Jul 27 01:31:00.257: INFO: Deleting DaemonSet.extensions daemon-set took: 26.738562ms +Jul 27 01:31:00.359: INFO: Terminating DaemonSet.extensions daemon-set pods took: 102.076386ms +Jul 27 01:31:03.071: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:31:03.071: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jul 27 01:31:03.089: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"61951"},"items":null} + +Jul 27 01:31:03.098: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"61951"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 20:42:44.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Job +Jul 27 01:31:03.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Job +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Job +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "job-7357" for this suite. 06/12/23 20:42:44.452 +STEP: Destroying namespace "daemonsets-6357" for this suite. 
07/27/23 01:31:03.173 ------------------------------ -• [SLOW TEST] [17.365 seconds] -[sig-apps] Job +• [SLOW TEST] [24.756 seconds] +[sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 - should adopt matching orphans and release non-matching pods [Conformance] - test/e2e/apps/job.go:507 + should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Job + [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:42:27.108 - Jun 12 20:42:27.108: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename job 06/12/23 20:42:27.112 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:42:27.178 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:42:27.221 - [BeforeEach] [sig-apps] Job + STEP: Creating a kubernetes client 07/27/23 01:30:38.441 + Jul 27 01:30:38.441: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename daemonsets 07/27/23 01:30:38.442 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:30:38.511 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:30:38.524 + [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] should adopt matching orphans and release non-matching pods [Conformance] - test/e2e/apps/job.go:507 - STEP: Creating a job 06/12/23 20:42:27.253 - STEP: Ensuring active pods == parallelism 06/12/23 20:42:27.29 - STEP: Orphaning one of the Job's Pods 06/12/23 20:42:39.301 - Jun 12 20:42:39.831: INFO: Successfully updated pod "adopt-release-7bz2l" - STEP: Checking that the Job readopts the Pod 06/12/23 20:42:39.831 - Jun 12 20:42:39.832: INFO: Waiting up to 15m0s for pod "adopt-release-7bz2l" in namespace "job-7357" to be "adopted" - Jun 12 20:42:39.838: INFO: Pod "adopt-release-7bz2l": Phase="Running", Reason="", readiness=true. Elapsed: 6.027263ms - Jun 12 20:42:41.846: INFO: Pod "adopt-release-7bz2l": Phase="Running", Reason="", readiness=true. Elapsed: 2.014137979s - Jun 12 20:42:41.846: INFO: Pod "adopt-release-7bz2l" satisfied condition "adopted" - STEP: Removing the labels from the Job's Pod 06/12/23 20:42:41.847 - Jun 12 20:42:42.405: INFO: Successfully updated pod "adopt-release-7bz2l" - STEP: Checking that the Job releases the Pod 06/12/23 20:42:42.424 - Jun 12 20:42:42.425: INFO: Waiting up to 15m0s for pod "adopt-release-7bz2l" in namespace "job-7357" to be "released" - Jun 12 20:42:42.433: INFO: Pod "adopt-release-7bz2l": Phase="Running", Reason="", readiness=true. Elapsed: 8.315817ms - Jun 12 20:42:44.440: INFO: Pod "adopt-release-7bz2l": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.015235443s - Jun 12 20:42:44.440: INFO: Pod "adopt-release-7bz2l" satisfied condition "released" - [AfterEach] [sig-apps] Job + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 + Jul 27 01:30:38.739: INFO: Create a RollingUpdate DaemonSet + Jul 27 01:30:38.754: INFO: Check that daemon pods launch on every node of the cluster + Jul 27 01:30:38.805: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:30:38.805: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 01:30:39.829: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:30:39.829: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 01:30:40.872: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:30:40.872: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:30:41.834: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:30:41.834: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:30:42.830: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:30:42.831: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:30:43.836: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:30:43.836: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:30:44.921: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:30:44.921: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:30:45.841: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:30:45.841: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:30:46.841: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:30:46.841: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:30:47.835: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:30:47.835: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:30:48.837: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:30:48.837: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:30:49.831: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:30:49.831: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:30:50.832: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jul 27 01:30:50.832: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + Jul 27 01:30:50.832: INFO: Update the DaemonSet to trigger a rollout + Jul 27 01:30:50.909: INFO: Updating DaemonSet daemon-set + Jul 27 01:30:53.963: INFO: Roll back the DaemonSet before rollout is complete + Jul 27 01:30:54.020: INFO: Updating DaemonSet daemon-set + Jul 27 01:30:54.020: INFO: Make sure DaemonSet rollback is complete + Jul 27 01:30:54.068: INFO: Wrong image for pod: daemon-set-z4qnz. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-4, got: foo:non-existent. 
+ Jul 27 01:30:54.068: INFO: Pod daemon-set-z4qnz is not available + Jul 27 01:31:00.095: INFO: Pod daemon-set-zrssl is not available + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 07/27/23 01:31:00.171 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6357, will wait for the garbage collector to delete the pods 07/27/23 01:31:00.171 + Jul 27 01:31:00.257: INFO: Deleting DaemonSet.extensions daemon-set took: 26.738562ms + Jul 27 01:31:00.359: INFO: Terminating DaemonSet.extensions daemon-set pods took: 102.076386ms + Jul 27 01:31:03.071: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:31:03.071: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jul 27 01:31:03.089: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"61951"},"items":null} + + Jul 27 01:31:03.098: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"61951"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 20:42:44.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Job + Jul 27 01:31:03.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Job + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Job + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "job-7357" for this suite. 06/12/23 20:42:44.452 + STEP: Destroying namespace "daemonsets-6357" for this suite. 
07/27/23 01:31:03.173 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSS +SS ------------------------------ -[sig-apps] Deployment - deployment should support proportional scaling [Conformance] - test/e2e/apps/deployment.go:160 -[BeforeEach] [sig-apps] Deployment +[sig-node] Secrets + should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 +[BeforeEach] [sig-node] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:42:44.475 -Jun 12 20:42:44.475: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename deployment 06/12/23 20:42:44.478 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:42:44.53 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:42:44.541 -[BeforeEach] [sig-apps] Deployment +STEP: Creating a kubernetes client 07/27/23 01:31:03.197 +Jul 27 01:31:03.197: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 01:31:03.197 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:31:03.244 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:31:03.274 +[BeforeEach] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 -[It] deployment should support proportional scaling [Conformance] - test/e2e/apps/deployment.go:160 -Jun 12 20:42:44.569: INFO: Creating deployment "webserver-deployment" -Jun 12 20:42:44.591: INFO: Waiting for observed generation 1 -Jun 12 20:42:46.614: INFO: Waiting for all required pods to come up -Jun 12 20:42:46.651: INFO: Pod name httpd: Found 10 pods out of 10 -STEP: ensuring each pod is running 06/12/23 20:42:46.651 -Jun 12 20:42:46.651: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-v6lnd" in namespace "deployment-5608" to be "running" -Jun 12 20:42:46.652: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-qrpxz" in namespace "deployment-5608" to be "running" -Jun 12 20:42:46.652: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-4gl75" in namespace "deployment-5608" to be "running" -Jun 12 20:42:46.653: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-b8btd" in namespace "deployment-5608" to be "running" -Jun 12 20:42:46.653: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-gkvfv" in namespace "deployment-5608" to be "running" -Jun 12 20:42:46.653: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-m958r" in namespace "deployment-5608" to be "running" -Jun 12 20:42:46.653: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-rnqqp" in namespace "deployment-5608" to be "running" -Jun 12 20:42:46.653: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-rn8z9" in namespace "deployment-5608" to be "running" -Jun 12 20:42:46.654: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-sgc6x" in namespace "deployment-5608" to be "running" -Jun 12 20:42:46.654: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-t4r66" in namespace "deployment-5608" to be "running" -Jun 12 20:42:46.660: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.720997ms -Jun 12 20:42:46.662: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.901581ms -Jun 12 20:42:46.667: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.305706ms -Jun 12 20:42:46.669: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.028474ms -Jun 12 20:42:46.669: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 15.41586ms -Jun 12 20:42:46.669: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 15.723251ms -Jun 12 20:42:46.670: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 17.103541ms -Jun 12 20:42:46.670: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.579035ms -Jun 12 20:42:46.670: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.920775ms -Jun 12 20:42:46.670: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.782423ms -Jun 12 20:42:48.673: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021877912s -Jun 12 20:42:48.678: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025499686s -Jun 12 20:42:48.681: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028077457s -Jun 12 20:42:48.681: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028602826s -Jun 12 20:42:48.682: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029645159s -Jun 12 20:42:48.683: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028819991s -Jun 12 20:42:48.683: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030126458s -Jun 12 20:42:48.683: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029963553s -Jun 12 20:42:48.683: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029839124s -Jun 12 20:42:48.683: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029625098s -Jun 12 20:42:50.669: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017702164s -Jun 12 20:42:50.669: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016697219s -Jun 12 20:42:50.677: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024254844s -Jun 12 20:42:50.680: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026215198s -Jun 12 20:42:50.681: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028140371s -Jun 12 20:42:50.681: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028553922s -Jun 12 20:42:50.682: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.027983883s -Jun 12 20:42:50.683: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029804023s -Jun 12 20:42:50.683: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029884779s -Jun 12 20:42:50.684: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03038794s -Jun 12 20:42:52.668: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016752655s -Jun 12 20:42:52.670: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01711502s -Jun 12 20:42:52.673: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02047502s -Jun 12 20:42:52.676: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022770414s -Jun 12 20:42:52.677: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023583034s -Jun 12 20:42:52.678: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025395955s -Jun 12 20:42:52.678: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024647957s -Jun 12 20:42:52.679: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024753502s -Jun 12 20:42:52.682: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028970628s -Jun 12 20:42:52.682: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028718302s -Jun 12 20:42:54.669: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017577835s -Jun 12 20:42:54.670: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017745751s -Jun 12 20:42:54.675: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022287886s -Jun 12 20:42:54.677: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023109056s -Jun 12 20:42:54.677: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024088417s -Jun 12 20:42:54.679: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026389116s -Jun 12 20:42:54.679: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025777135s -Jun 12 20:42:54.680: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026799221s -Jun 12 20:42:54.680: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026648042s -Jun 12 20:42:54.681: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.028632497s -Jun 12 20:42:56.674: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022317208s -Jun 12 20:42:56.674: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.021271913s -Jun 12 20:42:56.676: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02303019s -Jun 12 20:42:56.678: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024324779s -Jun 12 20:42:56.679: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025923484s -Jun 12 20:42:56.679: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026385936s -Jun 12 20:42:56.679: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025569444s -Jun 12 20:42:56.679: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025466075s -Jun 12 20:42:56.683: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029445281s -Jun 12 20:42:56.683: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.030384921s -Jun 12 20:42:58.682: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029806515s -Jun 12 20:42:58.682: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.031196392s -Jun 12 20:42:58.713: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.059878499s -Jun 12 20:42:58.713: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.06016037s -Jun 12 20:42:58.714: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 12.061002741s -Jun 12 20:42:58.714: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.061084391s -Jun 12 20:42:58.714: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.060410849s -Jun 12 20:42:58.714: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.060928079s -Jun 12 20:42:58.715: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.061527551s -Jun 12 20:42:58.715: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 12.0611723s -Jun 12 20:43:00.738: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.085663626s -Jun 12 20:43:00.738: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.087146935s -Jun 12 20:43:00.824: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.171202686s -Jun 12 20:43:00.825: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.17179641s -Jun 12 20:43:00.825: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.172240471s -Jun 12 20:43:00.825: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 14.172778721s -Jun 12 20:43:00.826: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.171925192s -Jun 12 20:43:00.826: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.17231613s -Jun 12 20:43:00.826: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.173062228s -Jun 12 20:43:00.827: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 14.172997828s -Jun 12 20:43:02.718: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.065507479s -Jun 12 20:43:02.719: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067430059s -Jun 12 20:43:02.750: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.097390504s -Jun 12 20:43:02.755: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 16.101493603s -Jun 12 20:43:02.755: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 16.101526073s -Jun 12 20:43:02.755: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 16.102484508s -Jun 12 20:43:02.759: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.105809244s -Jun 12 20:43:02.761: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.107909965s -Jun 12 20:43:02.761: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.107571134s -Jun 12 20:43:02.761: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 16.108489008s -Jun 12 20:43:04.675: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 18.022550063s -Jun 12 20:43:04.675: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.024049036s -Jun 12 20:43:04.679: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 18.025656448s -Jun 12 20:43:04.680: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 18.027516986s -Jun 12 20:43:04.681: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 18.026978616s -Jun 12 20:43:04.681: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.027669645s -Jun 12 20:43:04.680: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 18.027861544s -Jun 12 20:43:04.682: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.029027533s -Jun 12 20:43:04.682: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.028506186s -Jun 12 20:43:04.682: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 18.0281038s -Jun 12 20:43:06.670: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 20.018041188s -Jun 12 20:43:06.671: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.01953672s -Jun 12 20:43:06.679: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 20.02579169s -Jun 12 20:43:06.679: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 20.025988656s -Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Running", Reason="", readiness=true. Elapsed: 20.025994972s -Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x" satisfied condition "running" -Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Running", Reason="", readiness=true. Elapsed: 20.027093807s -Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd" satisfied condition "running" -Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Running", Reason="", readiness=true. Elapsed: 20.026766702s -Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp" satisfied condition "running" -Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 20.027381481s -Jun 12 20:43:06.681: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 20.026848288s -Jun 12 20:43:06.681: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Running", Reason="", readiness=true. Elapsed: 20.027458501s -Jun 12 20:43:06.681: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9" satisfied condition "running" -Jun 12 20:43:08.676: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Running", Reason="", readiness=true. Elapsed: 22.024526935s -Jun 12 20:43:08.676: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd" satisfied condition "running" -Jun 12 20:43:08.679: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 22.026555212s -Jun 12 20:43:08.685: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 22.032044219s -Jun 12 20:43:08.685: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 22.032943735s -Jun 12 20:43:08.686: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 22.032795698s -Jun 12 20:43:08.686: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 22.03221075s -Jun 12 20:43:10.671: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Running", Reason="", readiness=true. Elapsed: 24.018755084s -Jun 12 20:43:10.671: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz" satisfied condition "running" -Jun 12 20:43:10.678: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Running", Reason="", readiness=true. Elapsed: 24.025127195s -Jun 12 20:43:10.678: INFO: Pod "webserver-deployment-7f5969cbc7-m958r" satisfied condition "running" -Jun 12 20:43:10.678: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 24.025411639s -Jun 12 20:43:10.681: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 24.02799976s -Jun 12 20:43:10.681: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.027263693s -Jun 12 20:43:10.681: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66" satisfied condition "running" -Jun 12 20:43:12.677: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Running", Reason="", readiness=true. Elapsed: 26.02417607s -Jun 12 20:43:12.677: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv" satisfied condition "running" -Jun 12 20:43:12.678: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Running", Reason="", readiness=true. Elapsed: 26.02588095s -Jun 12 20:43:12.678: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75" satisfied condition "running" -Jun 12 20:43:12.679: INFO: Waiting for deployment "webserver-deployment" to complete -Jun 12 20:43:12.692: INFO: Updating deployment "webserver-deployment" with a non-existent image -Jun 12 20:43:12.757: INFO: Updating deployment webserver-deployment -Jun 12 20:43:12.757: INFO: Waiting for observed generation 2 -Jun 12 20:43:14.774: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 -Jun 12 20:43:14.799: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 -Jun 12 20:43:14.843: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas -Jun 12 20:43:14.867: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 -Jun 12 20:43:14.867: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 -Jun 12 20:43:14.878: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas -Jun 12 20:43:14.890: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas -Jun 12 20:43:14.891: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 -Jun 12 20:43:14.909: INFO: Updating deployment webserver-deployment -Jun 12 20:43:14.909: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas -Jun 12 20:43:14.924: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 -Jun 12 20:43:14.930: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 -[AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 -Jun 12 20:43:14.950: INFO: Deployment "webserver-deployment": -&Deployment{ObjectMeta:{webserver-deployment deployment-5608 5a623793-48d9-4404-bddb-d5aa3f0d9bb1 74706 3 2023-06-12 20:42:44 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{kube-controller-manager Update apps/v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00491d168 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-06-12 20:43:10 +0000 UTC,LastTransitionTime:2023-06-12 20:43:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-d9f79cb5" is progressing.,LastUpdateTime:2023-06-12 20:43:12 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} - -Jun 12 20:43:14.960: INFO: New ReplicaSet "webserver-deployment-d9f79cb5" of Deployment "webserver-deployment": -&ReplicaSet{ObjectMeta:{webserver-deployment-d9f79cb5 deployment-5608 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 74710 3 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5a623793-48d9-4404-bddb-d5aa3f0d9bb1 0xc0049e49e7 0xc0049e49e8}] [] [{kube-controller-manager Update apps/v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a623793-48d9-4404-bddb-d5aa3f0d9bb1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: d9f79cb5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0049e4a88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} -Jun 12 20:43:14.962: INFO: All old ReplicaSets of Deployment "webserver-deployment": -Jun 12 20:43:14.963: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-7f5969cbc7 deployment-5608 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 74708 3 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5a623793-48d9-4404-bddb-d5aa3f0d9bb1 0xc0049e48f7 0xc0049e48f8}] [] [{kube-controller-manager Update apps/v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a623793-48d9-4404-bddb-d5aa3f0d9bb1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} 
}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0049e4988 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} -Jun 12 20:43:14.986: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75" is available: -&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-4gl75 webserver-deployment-7f5969cbc7- deployment-5608 ebde2cfd-36ae-4615-a6cb-7364b082091d 74604 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:ade6777ec53946a7c282a696f682c425a7441cff2c6676038e114a543f520b4e cni.projectcalico.org/podIP:172.30.185.112/32 cni.projectcalico.org/podIPs:172.30.185.112/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.185.112" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc00491d597 0xc00491d598}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.185.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gxqmd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gxqmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.116,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,
ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.116,PodIP:172.30.185.112,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://3328c2930c702b7c4dcd02053e5657bc86c92eb57aa707657760cdf37f8732bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.185.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:14.986: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd" is available: -&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-b8btd webserver-deployment-7f5969cbc7- deployment-5608 4076a899-0a06-477d-8d7c-1e1728108cc6 74550 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:e096ebebbfa56d45d6f5b9d111f1833810399140a3ccb11bfb5b1b6dc6b9bd4c cni.projectcalico.org/podIP:172.30.161.100/32 cni.projectcalico.org/podIPs:172.30.161.100/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.161.100" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc00491d807 0xc00491d808}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:46 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.161.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-466t2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-466t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User
:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:172.30.161.100,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://7093c3a9405763d80f39cd892056bdeea10e145eb1a3e69f4d9b5a6c47f1f53b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.161.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:14.988: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv" is available: -&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-gkvfv webserver-deployment-7f5969cbc7- deployment-5608 3996afef-017a-41a6-a3df-4da9e5ca1a4d 74607 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:2aca79e7fa3c6aea021f084ad408a3623eac367217b6d5519b09b0a571b3c4c7 cni.projectcalico.org/podIP:172.30.185.116/32 cni.projectcalico.org/podIPs:172.30.185.116/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.185.116" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc00491da77 0xc00491da78}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.185.116\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vjfs4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vjfs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminati
on-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.116,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.116,PodIP:172.30.185.116,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://d3ba74e9ebf72aa888b67033d7106e2a13375150097c70d60902c71d0d826b1e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.185.116,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:14.988: INFO: Pod "webserver-deployment-7f5969cbc7-jl4vl" is not available: -&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-jl4vl webserver-deployment-7f5969cbc7- 
deployment-5608 4de8a9f5-e87e-4fa4-b4a3-63ae29a3d4c8 74714 0 2023-06-12 20:43:14 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc00491dce7 0xc00491dce8}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7rwsx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rwsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string
]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:14.989: INFO: Pod "webserver-deployment-7f5969cbc7-m958r" is available: -&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-m958r webserver-deployment-7f5969cbc7- deployment-5608 4bf30a79-4f40-4231-9779-08710c703f4a 74583 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:3c0c9702b770ec053638672d22ad3706297566630aab1f904feef98f49958297 cni.projectcalico.org/podIP:172.30.224.26/32 cni.projectcalico.org/podIPs:172.30.224.26/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.26" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc00491de87 0xc00491de88}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lbdht,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbdht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChange
Policy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.26,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://36023b7b6a8a16210b17a43f921b557194c572ab00e87870e59e73fdb5676726,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:14.991: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9" is available: -&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-rn8z9 webserver-deployment-7f5969cbc7- deployment-5608 f6729123-1ed2-4c74-ae1a-c20c326f54c6 74534 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:f25f76c5454b30a53183793c4b99164bcdf536db651f45042129b29ea047002d cni.projectcalico.org/podIP:172.30.224.57/32 cni.projectcalico.org/podIPs:172.30.224.57/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.57" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc004b8c117 0xc004b8c118}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6zn4n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6zn4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminatio
n-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.57,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://1c9d48143cc5f76a6aeb9e44b89786b237e3cfae4db4aa95622c55b3d4b6e889,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:14.991: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp" is available: 
-&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-rnqqp webserver-deployment-7f5969cbc7- deployment-5608 b8e399b8-f1f1-45d9-9383-8fdea13161da 74548 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:943528098c9b13c811eb5b38630f9fcaaaabc5038f134205070279611bcbb042 cni.projectcalico.org/podIP:172.30.161.98/32 cni.projectcalico.org/podIPs:172.30.161.98/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.161.98" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc004b8c3a7 0xc004b8c3a8}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.161.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zprgl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zprgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostA
lias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:172.30.161.98,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://83a39afa9af47edb7dc01f1d702c526f4f747635e6d2dad08f8b38f2ebba6e82,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.161.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:14.993: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x" is available: -&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-sgc6x webserver-deployment-7f5969cbc7- deployment-5608 4d68e6fa-9a3d-42c6-8182-61643394a26e 74536 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:fa5a4c60e9867531577e018b4a949678838b1430b0c3519f7c8845266e755ce8 cni.projectcalico.org/podIP:172.30.224.25/32 cni.projectcalico.org/podIPs:172.30.224.25/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.25" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc004b8c637 0xc004b8c638}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status} {multus Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rj7rk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rj7rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurit
yContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.25,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://53e1d0f71bea84ec5b2048aeb94b4e400e8f0177b20e7b300119283f2d4425ef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:14.993: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd" is available: -&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-v6lnd webserver-deployment-7f5969cbc7- deployment-5608 0ba8d4af-b9b5-44b9-b27f-827b645a0a91 74561 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:1499a10a9203c13362e632ac32f31d6ccb766012045920213223ab4b8e0f87c4 cni.projectcalico.org/podIP:172.30.161.101/32 cni.projectcalico.org/podIPs:172.30.161.101/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.161.101" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc004b8c8a7 0xc004b8c8a8}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.161.101\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6zkg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6zkg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminati
on-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:172.30.161.101,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://e8252cf63ad7ea4a5cfc52f9401eb71fbd9ed4c27172c96122f3ec2b8b2d88a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.161.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:14.994: INFO: Pod "webserver-deployment-d9f79cb5-cr8g5" is not available: -&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-cr8g5 webserver-deployment-d9f79cb5- 
deployment-5608 21024aa8-e9b2-4fa9-b823-318fa748a521 74652 0 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8cb17 0xc004b8cb18}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kwz9p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwz9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext
{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:,StartTime:2023-06-12 20:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:14.995: INFO: Pod "webserver-deployment-d9f79cb5-dxmvc" is not available: -&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-dxmvc webserver-deployment-d9f79cb5- deployment-5608 799af80c-2cd1-4156-9e7f-823f7b30e71f 74689 0 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] 
map[cni.projectcalico.org/containerID:d16f6af7e7230b830d1a4169c8f8faf1c02efbde9eb36ce3dcdf8f10b18822b0 cni.projectcalico.org/podIP:172.30.224.33/32 cni.projectcalico.org/podIPs:172.30.224.33/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.33" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8cd67 0xc004b8cd68}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-06-12 20:43:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4ls29,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4ls29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority
:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:,StartTime:2023-06-12 20:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:15.035: INFO: Pod "webserver-deployment-d9f79cb5-m6226" is not available: -&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-m6226 webserver-deployment-d9f79cb5- deployment-5608 8ac8cea1-e760-43f7-a3ef-2efed18d7af9 74704 0 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:1ed3306cba41b7a2276d18c1b117645719fcff520db921534171d9f81743aef0 cni.projectcalico.org/podIP:172.30.161.103/32 cni.projectcalico.org/podIPs:172.30.161.103/32 openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8cfd7 0xc004b8cfd8}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hg9bw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hg9bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s
0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:,StartTime:2023-06-12 20:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:15.060: INFO: Pod "webserver-deployment-d9f79cb5-rq5pr" is not available: -&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-rq5pr webserver-deployment-d9f79cb5- deployment-5608 e1a44f10-99b4-4052-aaa2-5a224d326a99 74715 0 2023-06-12 20:43:14 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8d227 0xc004b8d228}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hm4pq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hm4pq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,
ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:15.061: INFO: Pod "webserver-deployment-d9f79cb5-zjdmj" is not available: -&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-zjdmj webserver-deployment-d9f79cb5- deployment-5608 0c418837-7f90-4ea0-b322-ac72e713b270 74698 0 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:2027c67e8d622c48538dc921f76d7728962966fd7f14a53d44cff5be3d054532 cni.projectcalico.org/podIP:172.30.185.117/32 cni.projectcalico.org/podIPs:172.30.185.117/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.185.117" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8d3b7 0xc004b8d3b8}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pskg4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pskg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.116,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priorit
y:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.116,PodIP:,StartTime:2023-06-12 20:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 20:43:15.062: INFO: Pod "webserver-deployment-d9f79cb5-zmh8g" is not available: -&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-zmh8g webserver-deployment-d9f79cb5- deployment-5608 e56b0760-bb45-4f40-ab6d-8e2e732b4f69 74679 0 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8d627 0xc004b8d628}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 20:43:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xw9js,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xw9js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.116,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priorit
y:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.116,PodIP:,StartTime:2023-06-12 20:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -[AfterEach] [sig-apps] Deployment +[It] should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 +STEP: creating a secret 07/27/23 01:31:03.295 +STEP: listing secrets in all namespaces to ensure that there are more than zero 07/27/23 01:31:03.313 +STEP: patching the secret 07/27/23 01:31:03.54 +STEP: deleting the secret using a LabelSelector 07/27/23 01:31:03.574 +STEP: listing secrets in all namespaces, searching for label name and value in patch 07/27/23 01:31:03.596 +[AfterEach] [sig-node] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 20:43:15.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Deployment +Jul 27 01:31:03.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-node] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-node] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "deployment-5608" for this suite. 06/12/23 20:43:15.101 +STEP: Destroying namespace "secrets-8142" for this suite. 
07/27/23 01:31:03.815 ------------------------------ -• [SLOW TEST] [30.671 seconds] -[sig-apps] Deployment -test/e2e/apps/framework.go:23 - deployment should support proportional scaling [Conformance] - test/e2e/apps/deployment.go:160 +• [0.652 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Deployment + [BeforeEach] [sig-node] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:42:44.475 - Jun 12 20:42:44.475: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename deployment 06/12/23 20:42:44.478 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:42:44.53 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:42:44.541 - [BeforeEach] [sig-apps] Deployment + STEP: Creating a kubernetes client 07/27/23 01:31:03.197 + Jul 27 01:31:03.197: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 01:31:03.197 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:31:03.244 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:31:03.274 + [BeforeEach] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 - [It] deployment should support proportional scaling [Conformance] - test/e2e/apps/deployment.go:160 - Jun 12 20:42:44.569: INFO: Creating deployment "webserver-deployment" - Jun 12 20:42:44.591: INFO: Waiting for observed generation 1 - Jun 12 20:42:46.614: INFO: Waiting for all required pods to come up - Jun 12 20:42:46.651: INFO: Pod name httpd: Found 10 pods out of 10 - STEP: ensuring each pod is running 06/12/23 20:42:46.651 - Jun 12 20:42:46.651: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-v6lnd" in namespace "deployment-5608" to be "running" - Jun 12 20:42:46.652: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-qrpxz" in namespace "deployment-5608" to be "running" - Jun 12 20:42:46.652: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-4gl75" in namespace "deployment-5608" to be "running" - Jun 12 20:42:46.653: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-b8btd" in namespace "deployment-5608" to be "running" - Jun 12 20:42:46.653: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-gkvfv" in namespace "deployment-5608" to be "running" - Jun 12 20:42:46.653: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-m958r" in namespace "deployment-5608" to be "running" - Jun 12 20:42:46.653: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-rnqqp" in namespace "deployment-5608" to be "running" - Jun 12 20:42:46.653: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-rn8z9" in namespace "deployment-5608" to be "running" - Jun 12 20:42:46.654: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-sgc6x" in namespace "deployment-5608" to be "running" - Jun 12 20:42:46.654: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-t4r66" in namespace "deployment-5608" to be "running" - Jun 12 20:42:46.660: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.720997ms - Jun 12 20:42:46.662: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.901581ms - Jun 12 20:42:46.667: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.305706ms - Jun 12 20:42:46.669: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.028474ms - Jun 12 20:42:46.669: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 15.41586ms - Jun 12 20:42:46.669: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 15.723251ms - Jun 12 20:42:46.670: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 17.103541ms - Jun 12 20:42:46.670: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.579035ms - Jun 12 20:42:46.670: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.920775ms - Jun 12 20:42:46.670: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.782423ms - Jun 12 20:42:48.673: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021877912s - Jun 12 20:42:48.678: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025499686s - Jun 12 20:42:48.681: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028077457s - Jun 12 20:42:48.681: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028602826s - Jun 12 20:42:48.682: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029645159s - Jun 12 20:42:48.683: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028819991s - Jun 12 20:42:48.683: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030126458s - Jun 12 20:42:48.683: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029963553s - Jun 12 20:42:48.683: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029839124s - Jun 12 20:42:48.683: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029625098s - Jun 12 20:42:50.669: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017702164s - Jun 12 20:42:50.669: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016697219s - Jun 12 20:42:50.677: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024254844s - Jun 12 20:42:50.680: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026215198s - Jun 12 20:42:50.681: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028140371s - Jun 12 20:42:50.681: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.028553922s - Jun 12 20:42:50.682: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027983883s - Jun 12 20:42:50.683: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029804023s - Jun 12 20:42:50.683: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029884779s - Jun 12 20:42:50.684: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03038794s - Jun 12 20:42:52.668: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016752655s - Jun 12 20:42:52.670: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01711502s - Jun 12 20:42:52.673: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02047502s - Jun 12 20:42:52.676: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022770414s - Jun 12 20:42:52.677: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023583034s - Jun 12 20:42:52.678: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025395955s - Jun 12 20:42:52.678: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024647957s - Jun 12 20:42:52.679: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024753502s - Jun 12 20:42:52.682: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028970628s - Jun 12 20:42:52.682: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028718302s - Jun 12 20:42:54.669: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017577835s - Jun 12 20:42:54.670: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017745751s - Jun 12 20:42:54.675: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022287886s - Jun 12 20:42:54.677: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023109056s - Jun 12 20:42:54.677: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024088417s - Jun 12 20:42:54.679: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026389116s - Jun 12 20:42:54.679: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025777135s - Jun 12 20:42:54.680: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026799221s - Jun 12 20:42:54.680: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026648042s - Jun 12 20:42:54.681: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.028632497s - Jun 12 20:42:56.674: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.022317208s - Jun 12 20:42:56.674: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021271913s - Jun 12 20:42:56.676: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02303019s - Jun 12 20:42:56.678: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024324779s - Jun 12 20:42:56.679: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025923484s - Jun 12 20:42:56.679: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026385936s - Jun 12 20:42:56.679: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025569444s - Jun 12 20:42:56.679: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025466075s - Jun 12 20:42:56.683: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029445281s - Jun 12 20:42:56.683: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.030384921s - Jun 12 20:42:58.682: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029806515s - Jun 12 20:42:58.682: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.031196392s - Jun 12 20:42:58.713: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.059878499s - Jun 12 20:42:58.713: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.06016037s - Jun 12 20:42:58.714: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 12.061002741s - Jun 12 20:42:58.714: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.061084391s - Jun 12 20:42:58.714: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.060410849s - Jun 12 20:42:58.714: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.060928079s - Jun 12 20:42:58.715: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.061527551s - Jun 12 20:42:58.715: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 12.0611723s - Jun 12 20:43:00.738: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.085663626s - Jun 12 20:43:00.738: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.087146935s - Jun 12 20:43:00.824: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.171202686s - Jun 12 20:43:00.825: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.17179641s - Jun 12 20:43:00.825: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.172240471s - Jun 12 20:43:00.825: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.172778721s - Jun 12 20:43:00.826: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 14.171925192s - Jun 12 20:43:00.826: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.17231613s - Jun 12 20:43:00.826: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.173062228s - Jun 12 20:43:00.827: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 14.172997828s - Jun 12 20:43:02.718: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.065507479s - Jun 12 20:43:02.719: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067430059s - Jun 12 20:43:02.750: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.097390504s - Jun 12 20:43:02.755: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 16.101493603s - Jun 12 20:43:02.755: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 16.101526073s - Jun 12 20:43:02.755: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 16.102484508s - Jun 12 20:43:02.759: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.105809244s - Jun 12 20:43:02.761: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.107909965s - Jun 12 20:43:02.761: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.107571134s - Jun 12 20:43:02.761: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 16.108489008s - Jun 12 20:43:04.675: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 18.022550063s - Jun 12 20:43:04.675: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.024049036s - Jun 12 20:43:04.679: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 18.025656448s - Jun 12 20:43:04.680: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 18.027516986s - Jun 12 20:43:04.681: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Pending", Reason="", readiness=false. Elapsed: 18.026978616s - Jun 12 20:43:04.681: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.027669645s - Jun 12 20:43:04.680: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 18.027861544s - Jun 12 20:43:04.682: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.029027533s - Jun 12 20:43:04.682: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.028506186s - Jun 12 20:43:04.682: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 18.0281038s - Jun 12 20:43:06.670: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.018041188s - Jun 12 20:43:06.671: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.01953672s - Jun 12 20:43:06.679: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 20.02579169s - Jun 12 20:43:06.679: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 20.025988656s - Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x": Phase="Running", Reason="", readiness=true. Elapsed: 20.025994972s - Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x" satisfied condition "running" - Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd": Phase="Running", Reason="", readiness=true. Elapsed: 20.027093807s - Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd" satisfied condition "running" - Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp": Phase="Running", Reason="", readiness=true. Elapsed: 20.026766702s - Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp" satisfied condition "running" - Jun 12 20:43:06.680: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 20.027381481s - Jun 12 20:43:06.681: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 20.026848288s - Jun 12 20:43:06.681: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9": Phase="Running", Reason="", readiness=true. Elapsed: 20.027458501s - Jun 12 20:43:06.681: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9" satisfied condition "running" - Jun 12 20:43:08.676: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd": Phase="Running", Reason="", readiness=true. Elapsed: 22.024526935s - Jun 12 20:43:08.676: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd" satisfied condition "running" - Jun 12 20:43:08.679: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Pending", Reason="", readiness=false. Elapsed: 22.026555212s - Jun 12 20:43:08.685: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Pending", Reason="", readiness=false. Elapsed: 22.032044219s - Jun 12 20:43:08.685: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 22.032943735s - Jun 12 20:43:08.686: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 22.032795698s - Jun 12 20:43:08.686: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Pending", Reason="", readiness=false. Elapsed: 22.03221075s - Jun 12 20:43:10.671: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz": Phase="Running", Reason="", readiness=true. Elapsed: 24.018755084s - Jun 12 20:43:10.671: INFO: Pod "webserver-deployment-7f5969cbc7-qrpxz" satisfied condition "running" - Jun 12 20:43:10.678: INFO: Pod "webserver-deployment-7f5969cbc7-m958r": Phase="Running", Reason="", readiness=true. Elapsed: 24.025127195s - Jun 12 20:43:10.678: INFO: Pod "webserver-deployment-7f5969cbc7-m958r" satisfied condition "running" - Jun 12 20:43:10.678: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Pending", Reason="", readiness=false. Elapsed: 24.025411639s - Jun 12 20:43:10.681: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Pending", Reason="", readiness=false. Elapsed: 24.02799976s - Jun 12 20:43:10.681: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.027263693s - Jun 12 20:43:10.681: INFO: Pod "webserver-deployment-7f5969cbc7-t4r66" satisfied condition "running" - Jun 12 20:43:12.677: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv": Phase="Running", Reason="", readiness=true. Elapsed: 26.02417607s - Jun 12 20:43:12.677: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv" satisfied condition "running" - Jun 12 20:43:12.678: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75": Phase="Running", Reason="", readiness=true. Elapsed: 26.02588095s - Jun 12 20:43:12.678: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75" satisfied condition "running" - Jun 12 20:43:12.679: INFO: Waiting for deployment "webserver-deployment" to complete - Jun 12 20:43:12.692: INFO: Updating deployment "webserver-deployment" with a non-existent image - Jun 12 20:43:12.757: INFO: Updating deployment webserver-deployment - Jun 12 20:43:12.757: INFO: Waiting for observed generation 2 - Jun 12 20:43:14.774: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 - Jun 12 20:43:14.799: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 - Jun 12 20:43:14.843: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas - Jun 12 20:43:14.867: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 - Jun 12 20:43:14.867: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 - Jun 12 20:43:14.878: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas - Jun 12 20:43:14.890: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas - Jun 12 20:43:14.891: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 - Jun 12 20:43:14.909: INFO: Updating deployment webserver-deployment - Jun 12 20:43:14.909: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas - Jun 12 20:43:14.924: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 - Jun 12 20:43:14.930: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 - [AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 - Jun 12 20:43:14.950: INFO: Deployment "webserver-deployment": - &Deployment{ObjectMeta:{webserver-deployment deployment-5608 5a623793-48d9-4404-bddb-d5aa3f0d9bb1 74706 3 2023-06-12 20:42:44 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{kube-controller-manager Update apps/v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00491d168 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-06-12 20:43:10 +0000 UTC,LastTransitionTime:2023-06-12 20:43:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-d9f79cb5" is progressing.,LastUpdateTime:2023-06-12 20:43:12 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} - - Jun 12 20:43:14.960: INFO: New ReplicaSet "webserver-deployment-d9f79cb5" of Deployment "webserver-deployment": - &ReplicaSet{ObjectMeta:{webserver-deployment-d9f79cb5 deployment-5608 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 74710 3 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5a623793-48d9-4404-bddb-d5aa3f0d9bb1 0xc0049e49e7 0xc0049e49e8}] [] [{kube-controller-manager Update apps/v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a623793-48d9-4404-bddb-d5aa3f0d9bb1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: d9f79cb5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0049e4a88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} - Jun 12 20:43:14.962: INFO: All old ReplicaSets of Deployment "webserver-deployment": - Jun 12 20:43:14.963: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-7f5969cbc7 deployment-5608 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 74708 3 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5a623793-48d9-4404-bddb-d5aa3f0d9bb1 0xc0049e48f7 0xc0049e48f8}] [] [{kube-controller-manager Update apps/v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a623793-48d9-4404-bddb-d5aa3f0d9bb1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} 
}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0049e4988 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} - Jun 12 20:43:14.986: INFO: Pod "webserver-deployment-7f5969cbc7-4gl75" is available: - &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-4gl75 webserver-deployment-7f5969cbc7- deployment-5608 ebde2cfd-36ae-4615-a6cb-7364b082091d 74604 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:ade6777ec53946a7c282a696f682c425a7441cff2c6676038e114a543f520b4e cni.projectcalico.org/podIP:172.30.185.112/32 cni.projectcalico.org/podIPs:172.30.185.112/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.185.112" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc00491d597 0xc00491d598}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.185.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gxqmd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gxqmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.116,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,
ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.116,PodIP:172.30.185.112,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://3328c2930c702b7c4dcd02053e5657bc86c92eb57aa707657760cdf37f8732bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.185.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:14.986: INFO: Pod "webserver-deployment-7f5969cbc7-b8btd" is available: - &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-b8btd webserver-deployment-7f5969cbc7- deployment-5608 4076a899-0a06-477d-8d7c-1e1728108cc6 74550 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:e096ebebbfa56d45d6f5b9d111f1833810399140a3ccb11bfb5b1b6dc6b9bd4c cni.projectcalico.org/podIP:172.30.161.100/32 cni.projectcalico.org/podIPs:172.30.161.100/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.161.100" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc00491d807 0xc00491d808}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 
20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.161.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-466t2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-466t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOpt
ions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:172.30.161.100,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://7093c3a9405763d80f39cd892056bdeea10e145eb1a3e69f4d9b5a6c47f1f53b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.161.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:14.988: INFO: Pod "webserver-deployment-7f5969cbc7-gkvfv" is available: - &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-gkvfv webserver-deployment-7f5969cbc7- deployment-5608 3996afef-017a-41a6-a3df-4da9e5ca1a4d 74607 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:2aca79e7fa3c6aea021f084ad408a3623eac367217b6d5519b09b0a571b3c4c7 cni.projectcalico.org/podIP:172.30.185.116/32 cni.projectcalico.org/podIPs:172.30.185.116/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.185.116" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc00491da77 0xc00491da78}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.185.116\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vjfs4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vjfs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminati
on-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.116,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.116,PodIP:172.30.185.116,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://d3ba74e9ebf72aa888b67033d7106e2a13375150097c70d60902c71d0d826b1e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.185.116,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:14.988: INFO: Pod "webserver-deployment-7f5969cbc7-jl4vl" is not available: - &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-jl4vl webserver-deployment-7f5969cbc7- 
deployment-5608 4de8a9f5-e87e-4fa4-b4a3-63ae29a3d4c8 74714 0 2023-06-12 20:43:14 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc00491dce7 0xc00491dce8}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7rwsx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rwsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string
]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:14.989: INFO: Pod "webserver-deployment-7f5969cbc7-m958r" is available: - &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-m958r webserver-deployment-7f5969cbc7- deployment-5608 4bf30a79-4f40-4231-9779-08710c703f4a 74583 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:3c0c9702b770ec053638672d22ad3706297566630aab1f904feef98f49958297 cni.projectcalico.org/podIP:172.30.224.26/32 cni.projectcalico.org/podIPs:172.30.224.26/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.26" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc00491de87 0xc00491de88}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lbdht,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbdht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChange
Policy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.26,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://36023b7b6a8a16210b17a43f921b557194c572ab00e87870e59e73fdb5676726,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:14.991: INFO: Pod "webserver-deployment-7f5969cbc7-rn8z9" is available: - &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-rn8z9 webserver-deployment-7f5969cbc7- deployment-5608 f6729123-1ed2-4c74-ae1a-c20c326f54c6 74534 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:f25f76c5454b30a53183793c4b99164bcdf536db651f45042129b29ea047002d cni.projectcalico.org/podIP:172.30.224.57/32 cni.projectcalico.org/podIPs:172.30.224.57/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.57" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc004b8c117 0xc004b8c118}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6zn4n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6zn4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminatio
n-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.57,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://1c9d48143cc5f76a6aeb9e44b89786b237e3cfae4db4aa95622c55b3d4b6e889,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:14.991: INFO: Pod "webserver-deployment-7f5969cbc7-rnqqp" is available: - 
&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-rnqqp webserver-deployment-7f5969cbc7- deployment-5608 b8e399b8-f1f1-45d9-9383-8fdea13161da 74548 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:943528098c9b13c811eb5b38630f9fcaaaabc5038f134205070279611bcbb042 cni.projectcalico.org/podIP:172.30.161.98/32 cni.projectcalico.org/podIPs:172.30.161.98/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.161.98" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc004b8c3a7 0xc004b8c3a8}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.161.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zprgl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zprgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostA
lias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:172.30.161.98,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://83a39afa9af47edb7dc01f1d702c526f4f747635e6d2dad08f8b38f2ebba6e82,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.161.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:14.993: INFO: Pod "webserver-deployment-7f5969cbc7-sgc6x" is available: - &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-sgc6x webserver-deployment-7f5969cbc7- deployment-5608 4d68e6fa-9a3d-42c6-8182-61643394a26e 74536 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:fa5a4c60e9867531577e018b4a949678838b1430b0c3519f7c8845266e755ce8 cni.projectcalico.org/podIP:172.30.224.25/32 cni.projectcalico.org/podIPs:172.30.224.25/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.25" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc004b8c637 0xc004b8c638}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rj7rk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rj7rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceA
ccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.25,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://53e1d0f71bea84ec5b2048aeb94b4e400e8f0177b20e7b300119283f2d4425ef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:14.993: INFO: Pod "webserver-deployment-7f5969cbc7-v6lnd" is available: - &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-v6lnd webserver-deployment-7f5969cbc7- deployment-5608 0ba8d4af-b9b5-44b9-b27f-827b645a0a91 74561 0 2023-06-12 20:42:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:1499a10a9203c13362e632ac32f31d6ccb766012045920213223ab4b8e0f87c4 cni.projectcalico.org/podIP:172.30.161.101/32 cni.projectcalico.org/podIPs:172.30.161.101/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.161.101" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 703f383f-bf60-4f1c-b4cc-adcf8ec99a6d 0xc004b8c8a7 
0xc004b8c8a8}] [] [{kube-controller-manager Update v1 2023-06-12 20:42:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"703f383f-bf60-4f1c-b4cc-adcf8ec99a6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:42:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 20:43:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.161.101\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6zkg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6zkg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:
,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:42:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:172.30.161.101,StartTime:2023-06-12 20:42:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 20:43:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://e8252cf63ad7ea4a5cfc52f9401eb71fbd9ed4c27172c96122f3ec2b8b2d88a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.161.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:14.994: INFO: Pod "webserver-deployment-d9f79cb5-cr8g5" is not available: 
- &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-cr8g5 webserver-deployment-d9f79cb5- deployment-5608 21024aa8-e9b2-4fa9-b823-318fa748a521 74652 0 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8cb17 0xc004b8cb18}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kwz9p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwz9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessageP
ath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:,StartTime:2023-06-12 20:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:14.995: INFO: Pod "webserver-deployment-d9f79cb5-dxmvc" is not available: - &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-dxmvc webserver-deployment-d9f79cb5- deployment-5608 799af80c-2cd1-4156-9e7f-823f7b30e71f 74689 0 2023-06-12 
20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:d16f6af7e7230b830d1a4169c8f8faf1c02efbde9eb36ce3dcdf8f10b18822b0 cni.projectcalico.org/podIP:172.30.224.33/32 cni.projectcalico.org/podIPs:172.30.224.33/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.33" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8cd67 0xc004b8cd68}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-06-12 20:43:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4ls29,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4ls29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority
:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:,StartTime:2023-06-12 20:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:15.035: INFO: Pod "webserver-deployment-d9f79cb5-m6226" is not available: - &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-m6226 webserver-deployment-d9f79cb5- deployment-5608 8ac8cea1-e760-43f7-a3ef-2efed18d7af9 74704 0 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:1ed3306cba41b7a2276d18c1b117645719fcff520db921534171d9f81743aef0 cni.projectcalico.org/podIP:172.30.161.103/32 cni.projectcalico.org/podIPs:172.30.161.103/32 openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8cfd7 0xc004b8cfd8}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hg9bw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hg9bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s
0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:,StartTime:2023-06-12 20:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:15.060: INFO: Pod "webserver-deployment-d9f79cb5-rq5pr" is not available: - &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-rq5pr webserver-deployment-d9f79cb5- deployment-5608 e1a44f10-99b4-4052-aaa2-5a224d326a99 74715 0 2023-06-12 20:43:14 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8d227 0xc004b8d228}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hm4pq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hm4pq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,
ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:15.061: INFO: Pod "webserver-deployment-d9f79cb5-zjdmj" is not available: - &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-zjdmj webserver-deployment-d9f79cb5- deployment-5608 0c418837-7f90-4ea0-b322-ac72e713b270 74698 0 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:2027c67e8d622c48538dc921f76d7728962966fd7f14a53d44cff5be3d054532 cni.projectcalico.org/podIP:172.30.185.117/32 cni.projectcalico.org/podIPs:172.30.185.117/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.185.117" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8d3b7 0xc004b8d3b8}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 20:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pskg4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pskg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.116,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priorit
y:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.116,PodIP:,StartTime:2023-06-12 20:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 20:43:15.062: INFO: Pod "webserver-deployment-d9f79cb5-zmh8g" is not available: - &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-zmh8g webserver-deployment-d9f79cb5- deployment-5608 e56b0760-bb45-4f40-ab6d-8e2e732b4f69 74679 0 2023-06-12 20:43:12 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2 0xc004b8d627 0xc004b8d628}] [] [{kube-controller-manager Update v1 2023-06-12 20:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc2e4e27-dd55-4fa3-8ec6-08a36749ecf2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 20:43:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xw9js,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xw9js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.116,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c32,c4,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-k6j57,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priorit
y:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 20:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.116,PodIP:,StartTime:2023-06-12 20:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - [AfterEach] [sig-apps] Deployment + [It] should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 + STEP: creating a secret 07/27/23 01:31:03.295 + STEP: listing secrets in all namespaces to ensure that there are more than zero 07/27/23 01:31:03.313 + STEP: patching the secret 07/27/23 01:31:03.54 + STEP: deleting the secret using a LabelSelector 07/27/23 01:31:03.574 + STEP: listing secrets in all namespaces, searching for label name and value in patch 07/27/23 01:31:03.596 + [AfterEach] [sig-node] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 20:43:15.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Deployment + Jul 27 01:31:03.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-node] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-node] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "deployment-5608" for this suite. 06/12/23 20:43:15.101 + STEP: Destroying namespace "secrets-8142" for this suite. 
07/27/23 01:31:03.815 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSS ------------------------------ -[sig-apps] CronJob - should support CronJob API operations [Conformance] - test/e2e/apps/cronjob.go:319 -[BeforeEach] [sig-apps] CronJob +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 +[BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:43:15.356 -Jun 12 20:43:15.356: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename cronjob 06/12/23 20:43:15.359 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:43:15.458 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:43:15.468 -[BeforeEach] [sig-apps] CronJob +STEP: Creating a kubernetes client 07/27/23 01:31:03.849 +Jul 27 01:31:03.849: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 01:31:03.852 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:31:03.899 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:31:03.92 +[BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 -[It] should support CronJob API operations [Conformance] - test/e2e/apps/cronjob.go:319 -STEP: Creating a cronjob 06/12/23 20:43:15.485 -STEP: creating 06/12/23 20:43:15.485 -STEP: getting 06/12/23 20:43:15.501 -STEP: listing 06/12/23 20:43:15.517 -STEP: watching 06/12/23 20:43:15.531 -Jun 12 20:43:15.532: INFO: starting watch -STEP: cluster-wide listing 06/12/23 20:43:15.535 -STEP: cluster-wide watching 06/12/23 20:43:15.547 -Jun 12 20:43:15.548: INFO: starting watch -STEP: patching 06/12/23 20:43:15.581 -STEP: updating 06/12/23 20:43:15.597 -Jun 12 20:43:15.626: INFO: waiting for watch events with expected annotations -Jun 12 20:43:15.626: INFO: saw patched and updated annotations -STEP: patching /status 06/12/23 20:43:15.627 -STEP: updating /status 06/12/23 20:43:15.646 -STEP: get /status 06/12/23 20:43:15.73 -STEP: deleting 06/12/23 20:43:15.743 -STEP: deleting a collection 06/12/23 20:43:15.804 -[AfterEach] [sig-apps] CronJob +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 +Jul 27 01:31:03.963: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node +STEP: Creating secret with name s-test-opt-del-9c283ea8-1870-47b0-a600-3fea86834242 07/27/23 01:31:03.963 +STEP: Creating secret with name s-test-opt-upd-2fb9f7f6-78af-4539-ae1d-25058d90c291 07/27/23 01:31:03.989 +STEP: Creating the pod 07/27/23 01:31:04.024 +Jul 27 01:31:04.066: INFO: Waiting up to 5m0s for pod "pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5" in namespace "secrets-3800" to be "running and ready" +Jul 27 01:31:04.082: INFO: Pod "pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.468476ms +Jul 27 01:31:04.083: INFO: The phase of Pod pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:31:06.125: INFO: Pod "pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.05805348s +Jul 27 01:31:06.129: INFO: The phase of Pod pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5 is Running (Ready = true) +Jul 27 01:31:06.131: INFO: Pod "pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5" satisfied condition "running and ready" +STEP: Deleting secret s-test-opt-del-9c283ea8-1870-47b0-a600-3fea86834242 07/27/23 01:31:06.284 +STEP: Updating secret s-test-opt-upd-2fb9f7f6-78af-4539-ae1d-25058d90c291 07/27/23 01:31:06.297 +STEP: Creating secret with name s-test-opt-create-1afdbb60-c0a9-427e-a6ae-acdc19e35f79 07/27/23 01:31:06.311 +STEP: waiting to observe update in volume 07/27/23 01:31:06.352 +[AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 20:43:15.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] CronJob +Jul 27 01:31:08.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] CronJob +[DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] CronJob +[DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "cronjob-5649" for this suite. 06/12/23 20:43:15.856 +STEP: Destroying namespace "secrets-3800" for this suite. 07/27/23 01:31:08.566 ------------------------------ -• [0.523 seconds] -[sig-apps] CronJob -test/e2e/apps/framework.go:23 - should support CronJob API operations [Conformance] - test/e2e/apps/cronjob.go:319 +• [4.759 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] CronJob + [BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:43:15.356 - Jun 12 20:43:15.356: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename cronjob 06/12/23 20:43:15.359 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:43:15.458 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:43:15.468 - [BeforeEach] [sig-apps] CronJob + STEP: Creating a kubernetes client 07/27/23 01:31:03.849 + Jul 27 01:31:03.849: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 01:31:03.852 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:31:03.899 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:31:03.92 + [BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 - [It] should support CronJob API operations [Conformance] - test/e2e/apps/cronjob.go:319 - STEP: Creating a cronjob 06/12/23 20:43:15.485 - STEP: creating 06/12/23 20:43:15.485 - STEP: getting 06/12/23 20:43:15.501 - STEP: listing 06/12/23 20:43:15.517 - STEP: watching 06/12/23 20:43:15.531 - Jun 12 20:43:15.532: INFO: starting watch - STEP: cluster-wide listing 06/12/23 20:43:15.535 - STEP: cluster-wide watching 06/12/23 20:43:15.547 - Jun 12 20:43:15.548: INFO: starting watch - STEP: patching 06/12/23 20:43:15.581 - STEP: updating 06/12/23 20:43:15.597 - Jun 12 20:43:15.626: INFO: waiting for watch events with expected annotations - Jun 12 20:43:15.626: INFO: saw patched and 
updated annotations - STEP: patching /status 06/12/23 20:43:15.627 - STEP: updating /status 06/12/23 20:43:15.646 - STEP: get /status 06/12/23 20:43:15.73 - STEP: deleting 06/12/23 20:43:15.743 - STEP: deleting a collection 06/12/23 20:43:15.804 - [AfterEach] [sig-apps] CronJob + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 + Jul 27 01:31:03.963: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node + STEP: Creating secret with name s-test-opt-del-9c283ea8-1870-47b0-a600-3fea86834242 07/27/23 01:31:03.963 + STEP: Creating secret with name s-test-opt-upd-2fb9f7f6-78af-4539-ae1d-25058d90c291 07/27/23 01:31:03.989 + STEP: Creating the pod 07/27/23 01:31:04.024 + Jul 27 01:31:04.066: INFO: Waiting up to 5m0s for pod "pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5" in namespace "secrets-3800" to be "running and ready" + Jul 27 01:31:04.082: INFO: Pod "pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.468476ms + Jul 27 01:31:04.083: INFO: The phase of Pod pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:31:06.125: INFO: Pod "pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5": Phase="Running", Reason="", readiness=true. Elapsed: 2.05805348s + Jul 27 01:31:06.129: INFO: The phase of Pod pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5 is Running (Ready = true) + Jul 27 01:31:06.131: INFO: Pod "pod-secrets-a06ceb4e-7fb7-4225-aaf6-d47d590f9dd5" satisfied condition "running and ready" + STEP: Deleting secret s-test-opt-del-9c283ea8-1870-47b0-a600-3fea86834242 07/27/23 01:31:06.284 + STEP: Updating secret s-test-opt-upd-2fb9f7f6-78af-4539-ae1d-25058d90c291 07/27/23 01:31:06.297 + STEP: Creating secret with name s-test-opt-create-1afdbb60-c0a9-427e-a6ae-acdc19e35f79 07/27/23 01:31:06.311 + STEP: waiting to observe update in volume 07/27/23 01:31:06.352 + [AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 20:43:15.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] CronJob + Jul 27 01:31:08.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] CronJob + [DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] CronJob + [DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "cronjob-5649" for this suite. 06/12/23 20:43:15.856 + STEP: Destroying namespace "secrets-3800" for this suite. 
07/27/23 01:31:08.566 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-node] Pods - should be updated [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:344 -[BeforeEach] [sig-node] Pods +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:130 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:43:15.902 -Jun 12 20:43:15.903: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 20:43:15.908 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:43:15.963 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:43:15.975 -[BeforeEach] [sig-node] Pods +STEP: Creating a kubernetes client 07/27/23 01:31:08.613 +Jul 27 01:31:08.613: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sched-preemption 07/27/23 01:31:08.614 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:31:08.657 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:31:08.666 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 -[It] should be updated [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:344 -STEP: creating the pod 06/12/23 20:43:15.994 -STEP: submitting the pod to kubernetes 06/12/23 20:43:15.995 -Jun 12 20:43:16.024: INFO: Waiting up to 5m0s for pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be" in namespace "pods-212" to be "running and ready" -Jun 12 20:43:16.032: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321388ms -Jun 12 20:43:16.032: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:43:18.042: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017674625s -Jun 12 20:43:18.042: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:43:20.040: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016167962s -Jun 12 20:43:20.040: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:43:22.052: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028182873s -Jun 12 20:43:22.052: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:43:24.042: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017874122s -Jun 12 20:43:24.042: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:43:26.041: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.01700353s -Jun 12 20:43:26.041: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:43:28.042: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Running", Reason="", readiness=true. Elapsed: 12.017805271s -Jun 12 20:43:28.042: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Running (Ready = true) -Jun 12 20:43:28.042: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be" satisfied condition "running and ready" -STEP: verifying the pod is in kubernetes 06/12/23 20:43:28.049 -STEP: updating the pod 06/12/23 20:43:28.057 -Jun 12 20:43:28.604: INFO: Successfully updated pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be" -Jun 12 20:43:28.604: INFO: Waiting up to 5m0s for pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be" in namespace "pods-212" to be "running" -Jun 12 20:43:28.623: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Running", Reason="", readiness=true. Elapsed: 18.591156ms -Jun 12 20:43:28.623: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be" satisfied condition "running" -STEP: verifying the updated pod is in kubernetes 06/12/23 20:43:28.623 -Jun 12 20:43:28.631: INFO: Pod update OK -[AfterEach] [sig-node] Pods +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 +Jul 27 01:31:08.800: INFO: Waiting up to 1m0s for all nodes to be ready +Jul 27 01:32:09.046: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:130 +STEP: Create pods that use 4/5 of node resources. 07/27/23 01:32:09.076 +Jul 27 01:32:09.150: INFO: Created pod: pod0-0-sched-preemption-low-priority +Jul 27 01:32:09.169: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Jul 27 01:32:09.237: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Jul 27 01:32:09.255: INFO: Created pod: pod1-1-sched-preemption-medium-priority +Jul 27 01:32:09.302: INFO: Created pod: pod2-0-sched-preemption-medium-priority +Jul 27 01:32:09.319: INFO: Created pod: pod2-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 07/27/23 01:32:09.319 +Jul 27 01:32:09.320: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-6784" to be "running" +Jul 27 01:32:09.328: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210667ms +Jul 27 01:32:11.337: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017905852s +Jul 27 01:32:13.337: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017055053s +Jul 27 01:32:15.338: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018439168s +Jul 27 01:32:17.337: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.017331062s +Jul 27 01:32:17.337: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" +Jul 27 01:32:17.337: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-6784" to be "running" +Jul 27 01:32:17.345: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.439008ms +Jul 27 01:32:17.345: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" +Jul 27 01:32:17.345: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-6784" to be "running" +Jul 27 01:32:17.354: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.345303ms +Jul 27 01:32:17.354: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" +Jul 27 01:32:17.354: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-6784" to be "running" +Jul 27 01:32:17.361: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 6.918276ms +Jul 27 01:32:17.361: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" +Jul 27 01:32:17.361: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-6784" to be "running" +Jul 27 01:32:17.368: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 7.718862ms +Jul 27 01:32:17.368: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" +Jul 27 01:32:17.368: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-6784" to be "running" +Jul 27 01:32:17.376: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 7.226788ms +Jul 27 01:32:17.376: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" +STEP: Run a high priority pod that has same requirements as that of lower priority pod 07/27/23 01:32:17.376 +Jul 27 01:32:17.390: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-6784" to be "running" +Jul 27 01:32:17.397: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 7.200932ms +Jul 27 01:32:19.406: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015556216s +Jul 27 01:32:21.407: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016563882s +Jul 27 01:32:23.406: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.016018251s +Jul 27 01:32:23.406: INFO: Pod "preemptor-pod" satisfied condition "running" +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 20:43:28.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Pods +Jul 27 01:32:23.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Pods +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Pods +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "pods-212" for this suite. 06/12/23 20:43:28.682 +STEP: Destroying namespace "sched-preemption-6784" for this suite. 
07/27/23 01:32:23.603 ------------------------------ -• [SLOW TEST] [12.803 seconds] -[sig-node] Pods -test/e2e/common/node/framework.go:23 - should be updated [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:344 +• [SLOW TEST] [75.013 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:130 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Pods + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:43:15.902 - Jun 12 20:43:15.903: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 20:43:15.908 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:43:15.963 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:43:15.975 - [BeforeEach] [sig-node] Pods + STEP: Creating a kubernetes client 07/27/23 01:31:08.613 + Jul 27 01:31:08.613: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sched-preemption 07/27/23 01:31:08.614 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:31:08.657 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:31:08.666 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 - [It] should be updated [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:344 - STEP: creating the pod 06/12/23 20:43:15.994 - STEP: submitting the pod to kubernetes 06/12/23 20:43:15.995 - Jun 12 20:43:16.024: INFO: Waiting up to 5m0s for pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be" in namespace "pods-212" to be "running and ready" - Jun 12 20:43:16.032: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321388ms - Jun 12 20:43:16.032: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:43:18.042: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017674625s - Jun 12 20:43:18.042: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:43:20.040: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016167962s - Jun 12 20:43:20.040: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:43:22.052: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028182873s - Jun 12 20:43:22.052: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:43:24.042: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.017874122s - Jun 12 20:43:24.042: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:43:26.041: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Pending", Reason="", readiness=false. Elapsed: 10.01700353s - Jun 12 20:43:26.041: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:43:28.042: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Running", Reason="", readiness=true. Elapsed: 12.017805271s - Jun 12 20:43:28.042: INFO: The phase of Pod pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be is Running (Ready = true) - Jun 12 20:43:28.042: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be" satisfied condition "running and ready" - STEP: verifying the pod is in kubernetes 06/12/23 20:43:28.049 - STEP: updating the pod 06/12/23 20:43:28.057 - Jun 12 20:43:28.604: INFO: Successfully updated pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be" - Jun 12 20:43:28.604: INFO: Waiting up to 5m0s for pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be" in namespace "pods-212" to be "running" - Jun 12 20:43:28.623: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be": Phase="Running", Reason="", readiness=true. Elapsed: 18.591156ms - Jun 12 20:43:28.623: INFO: Pod "pod-update-8f323a4d-55d7-4933-a71a-1adf21fa40be" satisfied condition "running" - STEP: verifying the updated pod is in kubernetes 06/12/23 20:43:28.623 - Jun 12 20:43:28.631: INFO: Pod update OK - [AfterEach] [sig-node] Pods + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 + Jul 27 01:31:08.800: INFO: Waiting up to 1m0s for all nodes to be ready + Jul 27 01:32:09.046: INFO: Waiting for terminating namespaces to be deleted... + [It] validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:130 + STEP: Create pods that use 4/5 of node resources. 07/27/23 01:32:09.076 + Jul 27 01:32:09.150: INFO: Created pod: pod0-0-sched-preemption-low-priority + Jul 27 01:32:09.169: INFO: Created pod: pod0-1-sched-preemption-medium-priority + Jul 27 01:32:09.237: INFO: Created pod: pod1-0-sched-preemption-medium-priority + Jul 27 01:32:09.255: INFO: Created pod: pod1-1-sched-preemption-medium-priority + Jul 27 01:32:09.302: INFO: Created pod: pod2-0-sched-preemption-medium-priority + Jul 27 01:32:09.319: INFO: Created pod: pod2-1-sched-preemption-medium-priority + STEP: Wait for pods to be scheduled. 07/27/23 01:32:09.319 + Jul 27 01:32:09.320: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-6784" to be "running" + Jul 27 01:32:09.328: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210667ms + Jul 27 01:32:11.337: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017905852s + Jul 27 01:32:13.337: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017055053s + Jul 27 01:32:15.338: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018439168s + Jul 27 01:32:17.337: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.017331062s + Jul 27 01:32:17.337: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" + Jul 27 01:32:17.337: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-6784" to be "running" + Jul 27 01:32:17.345: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.439008ms + Jul 27 01:32:17.345: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" + Jul 27 01:32:17.345: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-6784" to be "running" + Jul 27 01:32:17.354: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.345303ms + Jul 27 01:32:17.354: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" + Jul 27 01:32:17.354: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-6784" to be "running" + Jul 27 01:32:17.361: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 6.918276ms + Jul 27 01:32:17.361: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" + Jul 27 01:32:17.361: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-6784" to be "running" + Jul 27 01:32:17.368: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 7.718862ms + Jul 27 01:32:17.368: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" + Jul 27 01:32:17.368: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-6784" to be "running" + Jul 27 01:32:17.376: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 7.226788ms + Jul 27 01:32:17.376: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" + STEP: Run a high priority pod that has same requirements as that of lower priority pod 07/27/23 01:32:17.376 + Jul 27 01:32:17.390: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-6784" to be "running" + Jul 27 01:32:17.397: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 7.200932ms + Jul 27 01:32:19.406: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015556216s + Jul 27 01:32:21.407: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016563882s + Jul 27 01:32:23.406: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.016018251s + Jul 27 01:32:23.406: INFO: Pod "preemptor-pod" satisfied condition "running" + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 20:43:28.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Pods + Jul 27 01:32:23.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "pods-212" for this suite. 06/12/23 20:43:28.682 + STEP: Destroying namespace "sched-preemption-6784" for this suite. 07/27/23 01:32:23.603 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS ------------------------------- -[sig-node] Downward API - should provide host IP as an env var [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:90 -[BeforeEach] [sig-node] Downward API +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:334 +[BeforeEach] [sig-node] InitContainer [NodeConformance] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:43:28.708 -Jun 12 20:43:28.708: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 20:43:28.716 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:43:28.787 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:43:28.797 -[BeforeEach] [sig-node] Downward API +STEP: Creating a kubernetes client 07/27/23 01:32:23.626 +Jul 27 01:32:23.627: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename init-container 07/27/23 01:32:23.627 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:32:23.666 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:32:23.675 +[BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:31 -[It] should provide host IP as an env var [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:90 -STEP: Creating a pod to test downward api env vars 06/12/23 20:43:28.81 -Jun 12 20:43:28.830: INFO: Waiting up to 5m0s for pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb" in namespace "downward-api-9117" to be "Succeeded or Failed" -Jun 12 20:43:28.838: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.653257ms -Jun 12 20:43:30.847: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016517115s -Jun 12 20:43:32.851: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020589228s -Jun 12 20:43:34.917: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.086816145s -Jun 12 20:43:36.850: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019494222s -STEP: Saw pod success 06/12/23 20:43:36.851 -Jun 12 20:43:36.851: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb" satisfied condition "Succeeded or Failed" -Jun 12 20:43:36.862: INFO: Trying to get logs from node 10.138.75.70 pod downward-api-93adde23-ab04-40b4-a683-c60758afadeb container dapi-container: -STEP: delete the pod 06/12/23 20:43:36.919 -Jun 12 20:43:36.938: INFO: Waiting for pod downward-api-93adde23-ab04-40b4-a683-c60758afadeb to disappear -Jun 12 20:43:36.945: INFO: Pod downward-api-93adde23-ab04-40b4-a683-c60758afadeb no longer exists -[AfterEach] [sig-node] Downward API +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:334 +STEP: creating the pod 07/27/23 01:32:23.684 +Jul 27 01:32:23.684: INFO: PodSpec: initContainers in spec.initContainers +Jul 27 01:33:07.977: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c96a6a86-8f1b-4db6-845e-2d8df79265b4", GenerateName:"", Namespace:"init-container-3435", SelfLink:"", UID:"c725f3dc-5abb-460a-ba71-8a9eaf76b20c", ResourceVersion:"62983", Generation:0, CreationTimestamp:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"684740635"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"33f2f21396cb596553f59894af0edf1217e8b688ba6c27b2086b17d406bb2fd1", "cni.projectcalico.org/podIP":"172.17.225.2/32", "cni.projectcalico.org/podIPs":"172.17.225.2/32", "k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.17.225.2\"\n ],\n \"default\": true,\n \"dns\": {}\n}]", "openshift.io/scc":"anyuid"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e06690), Subresource:""}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.July, 27, 1, 32, 24, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e066c0), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.July, 27, 1, 32, 24, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e066f0), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.July, 27, 1, 33, 7, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e06738), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-h4bk2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), 
ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000e63420), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-h4bk2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0042ce660), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-h4bk2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0042ce6c0), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"registry.k8s.io/pause:3.9", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, 
Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-h4bk2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0042ce600), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003fb9408), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"10.245.128.19", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004779d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003fb94c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003fb94e0)}, v1.Toleration{Key:"node.kubernetes.io/memory-pressure", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003fb94fc), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003fb9500), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00108bb60), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.245.128.19", PodIP:"172.17.225.2", 
PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.17.225.2"}}, StartTime:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000477b20)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000477b90)}, Ready:false, RestartCount:3, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937", ContainerID:"cri-o://330cf895a7f79c9d87758ca0c03b544ef44ae48f59d3174626acf865d7770792", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000e634e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000e634c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/pause:3.9", ImageID:"", ContainerID:"", Started:(*bool)(0xc003fb957f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/node/init/init.go:32 -Jun 12 20:43:36.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Downward API +Jul 27 01:33:07.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Downward API +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Downward API +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-9117" for this suite. 06/12/23 20:43:36.986 +STEP: Destroying namespace "init-container-3435" for this suite. 
07/27/23 01:33:07.994 ------------------------------ -• [SLOW TEST] [8.301 seconds] -[sig-node] Downward API +• [SLOW TEST] [44.402 seconds] +[sig-node] InitContainer [NodeConformance] test/e2e/common/node/framework.go:23 - should provide host IP as an env var [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:90 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:334 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Downward API + [BeforeEach] [sig-node] InitContainer [NodeConformance] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:43:28.708 - Jun 12 20:43:28.708: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 20:43:28.716 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:43:28.787 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:43:28.797 - [BeforeEach] [sig-node] Downward API + STEP: Creating a kubernetes client 07/27/23 01:32:23.626 + Jul 27 01:32:23.627: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename init-container 07/27/23 01:32:23.627 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:32:23.666 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:32:23.675 + [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:31 - [It] should provide host IP as an env var [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:90 - STEP: Creating a pod to test downward api env vars 06/12/23 20:43:28.81 - Jun 12 20:43:28.830: INFO: Waiting up to 5m0s for pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb" in namespace "downward-api-9117" to be "Succeeded or Failed" - Jun 12 20:43:28.838: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.653257ms - Jun 12 20:43:30.847: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016517115s - Jun 12 20:43:32.851: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020589228s - Jun 12 20:43:34.917: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086816145s - Jun 12 20:43:36.850: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.019494222s - STEP: Saw pod success 06/12/23 20:43:36.851 - Jun 12 20:43:36.851: INFO: Pod "downward-api-93adde23-ab04-40b4-a683-c60758afadeb" satisfied condition "Succeeded or Failed" - Jun 12 20:43:36.862: INFO: Trying to get logs from node 10.138.75.70 pod downward-api-93adde23-ab04-40b4-a683-c60758afadeb container dapi-container: - STEP: delete the pod 06/12/23 20:43:36.919 - Jun 12 20:43:36.938: INFO: Waiting for pod downward-api-93adde23-ab04-40b4-a683-c60758afadeb to disappear - Jun 12 20:43:36.945: INFO: Pod downward-api-93adde23-ab04-40b4-a683-c60758afadeb no longer exists - [AfterEach] [sig-node] Downward API + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 + [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:334 + STEP: creating the pod 07/27/23 01:32:23.684 + Jul 27 01:32:23.684: INFO: PodSpec: initContainers in spec.initContainers + Jul 27 01:33:07.977: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c96a6a86-8f1b-4db6-845e-2d8df79265b4", GenerateName:"", Namespace:"init-container-3435", SelfLink:"", UID:"c725f3dc-5abb-460a-ba71-8a9eaf76b20c", ResourceVersion:"62983", Generation:0, CreationTimestamp:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"684740635"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"33f2f21396cb596553f59894af0edf1217e8b688ba6c27b2086b17d406bb2fd1", "cni.projectcalico.org/podIP":"172.17.225.2/32", "cni.projectcalico.org/podIPs":"172.17.225.2/32", "k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.17.225.2\"\n ],\n \"default\": true,\n \"dns\": {}\n}]", "openshift.io/scc":"anyuid"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e06690), Subresource:""}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.July, 27, 1, 32, 24, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e066c0), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.July, 27, 1, 32, 24, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e066f0), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.July, 27, 1, 33, 7, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e06738), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-h4bk2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), 
RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000e63420), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-h4bk2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0042ce660), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-h4bk2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0042ce6c0), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"registry.k8s.io/pause:3.9", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-h4bk2", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0042ce600), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003fb9408), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"10.245.128.19", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004779d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003fb94c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003fb94e0)}, v1.Toleration{Key:"node.kubernetes.io/memory-pressure", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003fb94fc), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003fb9500), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00108bb60), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.245.128.19", PodIP:"172.17.225.2", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.17.225.2"}}, StartTime:time.Date(2023, time.July, 27, 1, 32, 23, 0, time.Local), 
InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000477b20)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000477b90)}, Ready:false, RestartCount:3, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937", ContainerID:"cri-o://330cf895a7f79c9d87758ca0c03b544ef44ae48f59d3174626acf865d7770792", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000e634e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000e634c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/pause:3.9", ImageID:"", ContainerID:"", Started:(*bool)(0xc003fb957f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} + [AfterEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/node/init/init.go:32 - Jun 12 20:43:36.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Downward API + Jul 27 01:33:07.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Downward API + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Downward API + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-9117" for this suite. 06/12/23 20:43:36.986 + STEP: Destroying namespace "init-container-3435" for this suite. 
07/27/23 01:33:07.994 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-apps] Job - should delete a job [Conformance] - test/e2e/apps/job.go:481 -[BeforeEach] [sig-apps] Job - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:43:37.024 -Jun 12 20:43:37.024: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename job 06/12/23 20:43:37.026 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:43:37.1 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:43:37.112 -[BeforeEach] [sig-apps] Job - test/e2e/framework/metrics/init/init.go:31 -[It] should delete a job [Conformance] - test/e2e/apps/job.go:481 -STEP: Creating a job 06/12/23 20:43:37.124 -STEP: Ensuring active pods == parallelism 06/12/23 20:43:37.159 -STEP: delete a job 06/12/23 20:43:41.168 -STEP: deleting Job.batch foo in namespace job-6032, will wait for the garbage collector to delete the pods 06/12/23 20:43:41.168 -Jun 12 20:43:41.247: INFO: Deleting Job.batch foo took: 17.527798ms -Jun 12 20:43:41.348: INFO: Terminating Job.batch foo pods took: 101.089973ms -STEP: Ensuring job was deleted 06/12/23 20:44:14.649 -[AfterEach] [sig-apps] Job - test/e2e/framework/node/init/init.go:32 -Jun 12 20:44:14.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Job - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Job - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Job - tear down framework | framework.go:193 -STEP: Destroying namespace "job-6032" for this suite. 06/12/23 20:44:14.672 ------------------------------- -• [SLOW TEST] [37.672 seconds] -[sig-apps] Job -test/e2e/apps/framework.go:23 - should delete a job [Conformance] - test/e2e/apps/job.go:481 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Job - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:43:37.024 - Jun 12 20:43:37.024: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename job 06/12/23 20:43:37.026 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:43:37.1 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:43:37.112 - [BeforeEach] [sig-apps] Job - test/e2e/framework/metrics/init/init.go:31 - [It] should delete a job [Conformance] - test/e2e/apps/job.go:481 - STEP: Creating a job 06/12/23 20:43:37.124 - STEP: Ensuring active pods == parallelism 06/12/23 20:43:37.159 - STEP: delete a job 06/12/23 20:43:41.168 - STEP: deleting Job.batch foo in namespace job-6032, will wait for the garbage collector to delete the pods 06/12/23 20:43:41.168 - Jun 12 20:43:41.247: INFO: Deleting Job.batch foo took: 17.527798ms - Jun 12 20:43:41.348: INFO: Terminating Job.batch foo pods took: 101.089973ms - STEP: Ensuring job was deleted 06/12/23 20:44:14.649 - [AfterEach] [sig-apps] Job - test/e2e/framework/node/init/init.go:32 - Jun 12 20:44:14.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Job - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Job - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Job - tear down framework | framework.go:193 - STEP: Destroying namespace "job-6032" for this 
suite. 06/12/23 20:44:14.672 - << End Captured GinkgoWriter Output +SS ------------------------------ -[sig-node] Downward API - should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:44 -[BeforeEach] [sig-node] Downward API +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:44:14.696 -Jun 12 20:44:14.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 20:44:14.699 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:14.777 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:14.816 -[BeforeEach] [sig-node] Downward API +STEP: Creating a kubernetes client 07/27/23 01:33:08.028 +Jul 27 01:33:08.029: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-webhook 07/27/23 01:33:08.03 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:08.082 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:08.104 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:44 -STEP: Creating a pod to test downward api env vars 06/12/23 20:44:14.825 -Jun 12 20:44:14.847: INFO: Waiting up to 5m0s for pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9" in namespace "downward-api-1990" to be "Succeeded or Failed" -Jun 12 20:44:14.856: INFO: Pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.845165ms -Jun 12 20:44:16.880: INFO: Pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033151995s -Jun 12 20:44:18.865: INFO: Pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017614879s -Jun 12 20:44:20.863: INFO: Pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016385667s -STEP: Saw pod success 06/12/23 20:44:20.863 -Jun 12 20:44:20.864: INFO: Pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9" satisfied condition "Succeeded or Failed" -Jun 12 20:44:20.871: INFO: Trying to get logs from node 10.138.75.70 pod downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9 container dapi-container: -STEP: delete the pod 06/12/23 20:44:20.89 -Jun 12 20:44:20.924: INFO: Waiting for pod downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9 to disappear -Jun 12 20:44:20.930: INFO: Pod downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9 no longer exists -[AfterEach] [sig-node] Downward API +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 +STEP: Setting up server cert 07/27/23 01:33:08.118 +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 07/27/23 01:33:08.597 +STEP: Deploying the custom resource conversion webhook pod 07/27/23 01:33:08.628 +STEP: Wait for the deployment to be ready 07/27/23 01:33:08.656 +Jul 27 01:33:08.673: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 01:33:10.703 +STEP: Verifying the service has paired with the endpoint 07/27/23 01:33:10.744 +Jul 27 01:33:11.745: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 +Jul 27 01:33:11.756: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Creating a v1 custom resource 07/27/23 01:33:14.547 +STEP: Create a v2 custom resource 07/27/23 01:33:14.591 +STEP: List CRs in v1 07/27/23 01:33:14.736 +STEP: List CRs in v2 07/27/23 01:33:14.755 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 20:44:20.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Downward API +Jul 27 01:33:15.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Downward API +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Downward API +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-1990" for this suite. 06/12/23 20:44:20.942 +STEP: Destroying namespace "crd-webhook-6104" for this suite. 
07/27/23 01:33:15.525 ------------------------------ -• [SLOW TEST] [6.269 seconds] -[sig-node] Downward API -test/e2e/common/node/framework.go:23 - should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:44 +• [SLOW TEST] [7.527 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Downward API + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:44:14.696 - Jun 12 20:44:14.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 20:44:14.699 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:14.777 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:14.816 - [BeforeEach] [sig-node] Downward API + STEP: Creating a kubernetes client 07/27/23 01:33:08.028 + Jul 27 01:33:08.029: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-webhook 07/27/23 01:33:08.03 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:08.082 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:08.104 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:44 - STEP: Creating a pod to test downward api env vars 06/12/23 20:44:14.825 - Jun 12 20:44:14.847: INFO: Waiting up to 5m0s for pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9" in namespace "downward-api-1990" to be "Succeeded or Failed" - Jun 12 20:44:14.856: INFO: Pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.845165ms - Jun 12 20:44:16.880: INFO: Pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033151995s - Jun 12 20:44:18.865: INFO: Pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017614879s - Jun 12 20:44:20.863: INFO: Pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016385667s - STEP: Saw pod success 06/12/23 20:44:20.863 - Jun 12 20:44:20.864: INFO: Pod "downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9" satisfied condition "Succeeded or Failed" - Jun 12 20:44:20.871: INFO: Trying to get logs from node 10.138.75.70 pod downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9 container dapi-container: - STEP: delete the pod 06/12/23 20:44:20.89 - Jun 12 20:44:20.924: INFO: Waiting for pod downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9 to disappear - Jun 12 20:44:20.930: INFO: Pod downward-api-6a54eb64-0f21-4a68-a309-4e0aff262be9 no longer exists - [AfterEach] [sig-node] Downward API + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 + STEP: Setting up server cert 07/27/23 01:33:08.118 + STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 07/27/23 01:33:08.597 + STEP: Deploying the custom resource conversion webhook pod 07/27/23 01:33:08.628 + STEP: Wait for the deployment to be ready 07/27/23 01:33:08.656 + Jul 27 01:33:08.673: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 01:33:10.703 + STEP: Verifying the service has paired with the endpoint 07/27/23 01:33:10.744 + Jul 27 01:33:11.745: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 + [It] should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 + Jul 27 01:33:11.756: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Creating a v1 custom resource 07/27/23 01:33:14.547 + STEP: Create a v2 custom resource 07/27/23 01:33:14.591 + STEP: List CRs in v1 07/27/23 01:33:14.736 + STEP: List CRs in v2 07/27/23 01:33:14.755 + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 20:44:20.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Downward API + Jul 27 01:33:15.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Downward API + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Downward API + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-1990" for this suite. 06/12/23 20:44:20.942 + STEP: Destroying namespace "crd-webhook-6104" for this suite. 
07/27/23 01:33:15.525 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSS +SSSSSSSSSSSSSSS ------------------------------ -[sig-network] EndpointSlice - should have Endpoints and EndpointSlices pointing to API Server [Conformance] - test/e2e/network/endpointslice.go:66 -[BeforeEach] [sig-network] EndpointSlice +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:44:20.972 -Jun 12 20:44:20.973: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename endpointslice 06/12/23 20:44:20.975 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:21.036 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:21.047 -[BeforeEach] [sig-network] EndpointSlice +STEP: Creating a kubernetes client 07/27/23 01:33:15.556 +Jul 27 01:33:15.557: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename custom-resource-definition 07/27/23 01:33:15.558 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:15.597 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:15.607 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] EndpointSlice - test/e2e/network/endpointslice.go:52 -[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] - test/e2e/network/endpointslice.go:66 -Jun 12 20:44:21.112: INFO: Endpoints addresses: [172.20.0.1] , ports: [2040] -Jun 12 20:44:21.112: INFO: EndpointSlices addresses: [172.20.0.1] , ports: [2040] -[AfterEach] [sig-network] EndpointSlice +[It] should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 +STEP: fetching the /apis discovery document 07/27/23 01:33:15.616 +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document 07/27/23 01:33:15.62 +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document 07/27/23 01:33:15.62 +STEP: fetching the /apis/apiextensions.k8s.io discovery document 07/27/23 01:33:15.621 +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 07/27/23 01:33:15.625 +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 07/27/23 01:33:15.625 +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 07/27/23 01:33:15.629 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 20:44:21.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] EndpointSlice +Jul 27 01:33:15.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] EndpointSlice +[DeferCleanup (Each)] 
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] EndpointSlice +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "endpointslice-1689" for this suite. 06/12/23 20:44:21.147 +STEP: Destroying namespace "custom-resource-definition-229" for this suite. 07/27/23 01:33:15.642 ------------------------------ -• [0.196 seconds] -[sig-network] EndpointSlice -test/e2e/network/common/framework.go:23 - should have Endpoints and EndpointSlices pointing to API Server [Conformance] - test/e2e/network/endpointslice.go:66 +• [0.153 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] EndpointSlice + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:44:20.972 - Jun 12 20:44:20.973: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename endpointslice 06/12/23 20:44:20.975 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:21.036 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:21.047 - [BeforeEach] [sig-network] EndpointSlice + STEP: Creating a kubernetes client 07/27/23 01:33:15.556 + Jul 27 01:33:15.557: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename custom-resource-definition 07/27/23 01:33:15.558 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:15.597 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:15.607 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] EndpointSlice - test/e2e/network/endpointslice.go:52 - [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] - test/e2e/network/endpointslice.go:66 - Jun 12 20:44:21.112: INFO: Endpoints addresses: [172.20.0.1] , ports: [2040] - Jun 12 20:44:21.112: INFO: EndpointSlices addresses: [172.20.0.1] , ports: [2040] - [AfterEach] [sig-network] EndpointSlice + [It] should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 + STEP: fetching the /apis discovery document 07/27/23 01:33:15.616 + STEP: finding the apiextensions.k8s.io API group in the /apis discovery document 07/27/23 01:33:15.62 + STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document 07/27/23 01:33:15.62 + STEP: fetching the /apis/apiextensions.k8s.io discovery document 07/27/23 01:33:15.621 + STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 07/27/23 01:33:15.625 + STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 07/27/23 01:33:15.625 + STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 07/27/23 01:33:15.629 + 
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 20:44:21.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] EndpointSlice + Jul 27 01:33:15.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] EndpointSlice + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] EndpointSlice + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "endpointslice-1689" for this suite. 06/12/23 20:44:21.147 + STEP: Destroying namespace "custom-resource-definition-229" for this suite. 07/27/23 01:33:15.642 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeConformance] - should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] - test/e2e/common/node/sysctl.go:123 + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/common/node/sysctl.go:37 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:44:21.177 -Jun 12 20:44:21.177: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sysctl 06/12/23 20:44:21.179 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:21.232 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:21.241 +STEP: Creating a kubernetes client 07/27/23 01:33:15.713 +Jul 27 01:33:15.713: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sysctl 07/27/23 01:33:15.714 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:15.786 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:15.795 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/common/node/sysctl.go:67 -[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] - test/e2e/common/node/sysctl.go:123 -STEP: Creating a pod with one valid and two invalid sysctls 06/12/23 20:44:21.253 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl 07/27/23 01:33:15.805 +W0727 01:33:15.845651 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container" must set 
securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Watching for error events or started pod 07/27/23 01:33:15.845 +STEP: Waiting for pod completion 07/27/23 01:33:17.861 +Jul 27 01:33:17.861: INFO: Waiting up to 3m0s for pod "sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a" in namespace "sysctl-7508" to be "completed" +Jul 27 01:33:17.870: INFO: Pod "sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.588408ms +Jul 27 01:33:19.881: INFO: Pod "sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019906886s +Jul 27 01:33:21.880: INFO: Pod "sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018248611s +Jul 27 01:33:21.880: INFO: Pod "sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a" satisfied condition "completed" +STEP: Checking that the pod succeeded 07/27/23 01:33:21.887 +STEP: Getting logs from the pod 07/27/23 01:33:21.887 +STEP: Checking that the sysctl is actually updated 07/27/23 01:33:21.924 [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/node/init/init.go:32 -Jun 12 20:44:21.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 01:33:21.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] tear down framework | framework.go:193 -STEP: Destroying namespace "sysctl-2285" for this suite. 06/12/23 20:44:21.283 +STEP: Destroying namespace "sysctl-7508" for this suite. 
07/27/23 01:33:21.948 ------------------------------ -• [0.133 seconds] +• [SLOW TEST] [6.259 seconds] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/common/node/framework.go:23 - should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] - test/e2e/common/node/sysctl.go:123 + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/common/node/sysctl.go:37 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:44:21.177 - Jun 12 20:44:21.177: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sysctl 06/12/23 20:44:21.179 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:21.232 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:21.241 + STEP: Creating a kubernetes client 07/27/23 01:33:15.713 + Jul 27 01:33:15.713: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sysctl 07/27/23 01:33:15.714 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:15.786 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:15.795 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/common/node/sysctl.go:67 - [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] - test/e2e/common/node/sysctl.go:123 - STEP: Creating a pod with one valid and two invalid sysctls 06/12/23 20:44:21.253 + [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 + STEP: Creating a pod with the kernel.shm_rmid_forced sysctl 07/27/23 01:33:15.805 + W0727 01:33:15.845651 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Watching for error events or started pod 07/27/23 01:33:15.845 + STEP: Waiting for pod completion 07/27/23 01:33:17.861 + Jul 27 01:33:17.861: INFO: Waiting up to 3m0s for pod "sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a" in namespace "sysctl-7508" to be "completed" + Jul 27 01:33:17.870: INFO: Pod "sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.588408ms + Jul 27 01:33:19.881: INFO: Pod "sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019906886s + Jul 27 01:33:21.880: INFO: Pod "sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018248611s + Jul 27 01:33:21.880: INFO: Pod "sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a" satisfied condition "completed" + STEP: Checking that the pod succeeded 07/27/23 01:33:21.887 + STEP: Getting logs from the pod 07/27/23 01:33:21.887 + STEP: Checking that the sysctl is actually updated 07/27/23 01:33:21.924 [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/node/init/init.go:32 - Jun 12 20:44:21.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 01:33:21.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] tear down framework | framework.go:193 - STEP: Destroying namespace "sysctl-2285" for this suite. 06/12/23 20:44:21.283 + STEP: Destroying namespace "sysctl-7508" for this suite. 07/27/23 01:33:21.948 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-storage] ConfigMap - should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:99 -[BeforeEach] [sig-storage] ConfigMap +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:44:21.323 -Jun 12 20:44:21.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 20:44:21.324 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:21.379 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:21.395 -[BeforeEach] [sig-storage] ConfigMap +STEP: Creating a kubernetes client 07/27/23 01:33:21.972 +Jul 27 01:33:21.972: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 01:33:21.972 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:22.013 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:22.023 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:99 -STEP: Creating configMap with name configmap-test-volume-map-bae02ab1-00e1-4904-a1a7-e2c9bc5d04b4 06/12/23 20:44:21.406 -STEP: Creating a pod to test consume configMaps 06/12/23 20:44:21.426 -Jun 12 20:44:21.481: INFO: Waiting up to 5m0s for pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a" in namespace "configmap-1808" to be "Succeeded or Failed" -Jun 12 20:44:21.487: INFO: Pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464419ms -Jun 12 20:44:23.495: INFO: Pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013951619s -Jun 12 20:44:25.495: INFO: Pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014575711s -Jun 12 20:44:27.496: INFO: Pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014812584s -STEP: Saw pod success 06/12/23 20:44:27.496 -Jun 12 20:44:27.496: INFO: Pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a" satisfied condition "Succeeded or Failed" -Jun 12 20:44:27.502: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a container agnhost-container: -STEP: delete the pod 06/12/23 20:44:27.517 -Jun 12 20:44:27.535: INFO: Waiting for pod pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a to disappear -Jun 12 20:44:27.542: INFO: Pod pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a no longer exists -[AfterEach] [sig-storage] ConfigMap +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 01:33:22.087 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 01:33:22.52 +STEP: Deploying the webhook pod 07/27/23 01:33:22.548 +STEP: Wait for the deployment to be ready 07/27/23 01:33:22.579 +Jul 27 01:33:22.614: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 01:33:24.653 +STEP: Verifying the service has paired with the endpoint 07/27/23 01:33:24.69 +Jul 27 01:33:25.691: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 07/27/23 01:33:25.701 +STEP: create a configmap that should be updated by the webhook 07/27/23 01:33:25.755 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 20:44:27.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] ConfigMap +Jul 27 01:33:25.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-1808" for this suite. 06/12/23 20:44:27.556 +STEP: Destroying namespace "webhook-7600" for this suite. 07/27/23 01:33:25.938 +STEP: Destroying namespace "webhook-7600-markers" for this suite. 
07/27/23 01:33:25.963 ------------------------------ -• [SLOW TEST] [6.253 seconds] -[sig-storage] ConfigMap -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:99 +• [4.017 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] ConfigMap + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:44:21.323 - Jun 12 20:44:21.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 20:44:21.324 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:21.379 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:21.395 - [BeforeEach] [sig-storage] ConfigMap + STEP: Creating a kubernetes client 07/27/23 01:33:21.972 + Jul 27 01:33:21.972: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 01:33:21.972 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:22.013 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:22.023 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:99 - STEP: Creating configMap with name configmap-test-volume-map-bae02ab1-00e1-4904-a1a7-e2c9bc5d04b4 06/12/23 20:44:21.406 - STEP: Creating a pod to test consume configMaps 06/12/23 20:44:21.426 - Jun 12 20:44:21.481: INFO: Waiting up to 5m0s for pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a" in namespace "configmap-1808" to be "Succeeded or Failed" - Jun 12 20:44:21.487: INFO: Pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464419ms - Jun 12 20:44:23.495: INFO: Pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013951619s - Jun 12 20:44:25.495: INFO: Pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014575711s - Jun 12 20:44:27.496: INFO: Pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014812584s - STEP: Saw pod success 06/12/23 20:44:27.496 - Jun 12 20:44:27.496: INFO: Pod "pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a" satisfied condition "Succeeded or Failed" - Jun 12 20:44:27.502: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a container agnhost-container: - STEP: delete the pod 06/12/23 20:44:27.517 - Jun 12 20:44:27.535: INFO: Waiting for pod pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a to disappear - Jun 12 20:44:27.542: INFO: Pod pod-configmaps-f91eef25-b913-4b8d-9bc8-b3f8abc59d5a no longer exists - [AfterEach] [sig-storage] ConfigMap + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 01:33:22.087 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 01:33:22.52 + STEP: Deploying the webhook pod 07/27/23 01:33:22.548 + STEP: Wait for the deployment to be ready 07/27/23 01:33:22.579 + Jul 27 01:33:22.614: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 01:33:24.653 + STEP: Verifying the service has paired with the endpoint 07/27/23 01:33:24.69 + Jul 27 01:33:25.691: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 + STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 07/27/23 01:33:25.701 + STEP: create a configmap that should be updated by the webhook 07/27/23 01:33:25.755 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 20:44:27.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] ConfigMap + Jul 27 01:33:25.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-1808" for this suite. 06/12/23 20:44:27.556 + STEP: Destroying namespace "webhook-7600" for this suite. 07/27/23 01:33:25.938 + STEP: Destroying namespace "webhook-7600-markers" for this suite. 
07/27/23 01:33:25.963 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-apps] DisruptionController - should block an eviction until the PDB is updated to allow it [Conformance] - test/e2e/apps/disruption.go:347 -[BeforeEach] [sig-apps] DisruptionController +[sig-api-machinery] Namespaces [Serial] + should apply a finalizer to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:394 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:44:27.581 -Jun 12 20:44:27.582: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename disruption 06/12/23 20:44:27.583 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:27.643 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:27.653 -[BeforeEach] [sig-apps] DisruptionController +STEP: Creating a kubernetes client 07/27/23 01:33:25.989 +Jul 27 01:33:25.989: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename namespaces 07/27/23 01:33:25.99 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:26.029 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:26.038 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] DisruptionController - test/e2e/apps/disruption.go:72 -[It] should block an eviction until the PDB is updated to allow it [Conformance] - test/e2e/apps/disruption.go:347 -STEP: Creating a pdb that targets all three pods in a test replica set 06/12/23 20:44:27.664 -STEP: Waiting for the pdb to be processed 06/12/23 20:44:27.681 -STEP: First trying to evict a pod which shouldn't be evictable 06/12/23 20:44:29.716 -STEP: Waiting for all pods to be running 06/12/23 20:44:29.716 -Jun 12 20:44:29.734: INFO: pods: 0 < 3 -Jun 12 20:44:31.742: INFO: running pods: 0 < 3 -Jun 12 20:44:33.744: INFO: running pods: 1 < 3 -Jun 12 20:44:35.743: INFO: running pods: 1 < 3 -Jun 12 20:44:37.742: INFO: running pods: 1 < 3 -Jun 12 20:44:39.747: INFO: running pods: 1 < 3 -STEP: locating a running pod 06/12/23 20:44:41.757 -STEP: Updating the pdb to allow a pod to be evicted 06/12/23 20:44:41.793 -STEP: Waiting for the pdb to be processed 06/12/23 20:44:41.825 -STEP: Trying to evict the same pod we tried earlier which should now be evictable 06/12/23 20:44:41.842 -STEP: Waiting for all pods to be running 06/12/23 20:44:41.842 -STEP: Waiting for the pdb to observed all healthy pods 06/12/23 20:44:41.892 -STEP: Patching the pdb to disallow a pod to be evicted 06/12/23 20:44:41.963 -STEP: Waiting for the pdb to be processed 06/12/23 20:44:42.006 -STEP: Waiting for all pods to be running 06/12/23 20:44:42.047 -Jun 12 20:44:42.056: INFO: running pods: 2 < 3 -Jun 12 20:44:44.085: INFO: running pods: 2 < 3 -STEP: locating a running pod 06/12/23 20:44:46.067 -STEP: Deleting the pdb to allow a pod to be evicted 06/12/23 20:44:46.087 -STEP: Waiting for the pdb to be deleted 06/12/23 20:44:46.104 -STEP: Trying to evict the same pod we tried earlier which should now be evictable 06/12/23 20:44:46.112 -STEP: Waiting for all pods to be running 06/12/23 20:44:46.113 -[AfterEach] [sig-apps] DisruptionController +[It] should apply a finalizer to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:394 
+STEP: Creating namespace "e2e-ns-vnd4t" 07/27/23 01:33:26.047 +Jul 27 01:33:26.093: INFO: Namespace "e2e-ns-vnd4t-7766" has []v1.FinalizerName{"kubernetes"} +STEP: Adding e2e finalizer to namespace "e2e-ns-vnd4t-7766" 07/27/23 01:33:26.093 +Jul 27 01:33:26.214: INFO: Namespace "e2e-ns-vnd4t-7766" has []v1.FinalizerName{"kubernetes", "e2e.example.com/fakeFinalizer"} +STEP: Removing e2e finalizer from namespace "e2e-ns-vnd4t-7766" 07/27/23 01:33:26.214 +Jul 27 01:33:26.251: INFO: Namespace "e2e-ns-vnd4t-7766" has []v1.FinalizerName{"kubernetes"} +[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 20:44:46.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] DisruptionController +Jul 27 01:33:26.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] DisruptionController +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] DisruptionController +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "disruption-4202" for this suite. 06/12/23 20:44:46.19 +STEP: Destroying namespace "namespaces-6167" for this suite. 07/27/23 01:33:26.261 +STEP: Destroying namespace "e2e-ns-vnd4t-7766" for this suite. 07/27/23 01:33:26.283 ------------------------------ -• [SLOW TEST] [18.632 seconds] -[sig-apps] DisruptionController -test/e2e/apps/framework.go:23 - should block an eviction until the PDB is updated to allow it [Conformance] - test/e2e/apps/disruption.go:347 +• [0.320 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should apply a finalizer to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:394 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] DisruptionController + [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:44:27.581 - Jun 12 20:44:27.582: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename disruption 06/12/23 20:44:27.583 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:27.643 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:27.653 - [BeforeEach] [sig-apps] DisruptionController + STEP: Creating a kubernetes client 07/27/23 01:33:25.989 + Jul 27 01:33:25.989: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename namespaces 07/27/23 01:33:25.99 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:26.029 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:26.038 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] DisruptionController - test/e2e/apps/disruption.go:72 - [It] should block an eviction until the PDB is updated to allow it [Conformance] - test/e2e/apps/disruption.go:347 - STEP: Creating a pdb that targets all three pods in a test replica set 06/12/23 20:44:27.664 - STEP: Waiting for the pdb to be processed 06/12/23 20:44:27.681 - STEP: First trying to evict a pod which shouldn't be evictable 06/12/23 20:44:29.716 - STEP: Waiting 
for all pods to be running 06/12/23 20:44:29.716 - Jun 12 20:44:29.734: INFO: pods: 0 < 3 - Jun 12 20:44:31.742: INFO: running pods: 0 < 3 - Jun 12 20:44:33.744: INFO: running pods: 1 < 3 - Jun 12 20:44:35.743: INFO: running pods: 1 < 3 - Jun 12 20:44:37.742: INFO: running pods: 1 < 3 - Jun 12 20:44:39.747: INFO: running pods: 1 < 3 - STEP: locating a running pod 06/12/23 20:44:41.757 - STEP: Updating the pdb to allow a pod to be evicted 06/12/23 20:44:41.793 - STEP: Waiting for the pdb to be processed 06/12/23 20:44:41.825 - STEP: Trying to evict the same pod we tried earlier which should now be evictable 06/12/23 20:44:41.842 - STEP: Waiting for all pods to be running 06/12/23 20:44:41.842 - STEP: Waiting for the pdb to observed all healthy pods 06/12/23 20:44:41.892 - STEP: Patching the pdb to disallow a pod to be evicted 06/12/23 20:44:41.963 - STEP: Waiting for the pdb to be processed 06/12/23 20:44:42.006 - STEP: Waiting for all pods to be running 06/12/23 20:44:42.047 - Jun 12 20:44:42.056: INFO: running pods: 2 < 3 - Jun 12 20:44:44.085: INFO: running pods: 2 < 3 - STEP: locating a running pod 06/12/23 20:44:46.067 - STEP: Deleting the pdb to allow a pod to be evicted 06/12/23 20:44:46.087 - STEP: Waiting for the pdb to be deleted 06/12/23 20:44:46.104 - STEP: Trying to evict the same pod we tried earlier which should now be evictable 06/12/23 20:44:46.112 - STEP: Waiting for all pods to be running 06/12/23 20:44:46.113 - [AfterEach] [sig-apps] DisruptionController + [It] should apply a finalizer to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:394 + STEP: Creating namespace "e2e-ns-vnd4t" 07/27/23 01:33:26.047 + Jul 27 01:33:26.093: INFO: Namespace "e2e-ns-vnd4t-7766" has []v1.FinalizerName{"kubernetes"} + STEP: Adding e2e finalizer to namespace "e2e-ns-vnd4t-7766" 07/27/23 01:33:26.093 + Jul 27 01:33:26.214: INFO: Namespace "e2e-ns-vnd4t-7766" has []v1.FinalizerName{"kubernetes", "e2e.example.com/fakeFinalizer"} + STEP: Removing e2e finalizer from namespace "e2e-ns-vnd4t-7766" 07/27/23 01:33:26.214 + Jul 27 01:33:26.251: INFO: Namespace "e2e-ns-vnd4t-7766" has []v1.FinalizerName{"kubernetes"} + [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 20:44:46.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] DisruptionController + Jul 27 01:33:26.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] DisruptionController + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] DisruptionController + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "disruption-4202" for this suite. 06/12/23 20:44:46.19 + STEP: Destroying namespace "namespaces-6167" for this suite. 07/27/23 01:33:26.261 + STEP: Destroying namespace "e2e-ns-vnd4t-7766" for this suite. 
07/27/23 01:33:26.283 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-storage] EmptyDir wrapper volumes - should not conflict [Conformance] - test/e2e/storage/empty_dir_wrapper.go:67 -[BeforeEach] [sig-storage] EmptyDir wrapper volumes +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:44:46.22 -Jun 12 20:44:46.220: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir-wrapper 06/12/23 20:44:46.222 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:46.306 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:46.314 -[BeforeEach] [sig-storage] EmptyDir wrapper volumes +STEP: Creating a kubernetes client 07/27/23 01:33:26.309 +Jul 27 01:33:26.310: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sched-pred 07/27/23 01:33:26.31 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:26.348 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:26.357 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] should not conflict [Conformance] - test/e2e/storage/empty_dir_wrapper.go:67 -Jun 12 20:44:46.395: INFO: Waiting up to 5m0s for pod "pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620" in namespace "emptydir-wrapper-7096" to be "running and ready" -Jun 12 20:44:46.406: INFO: Pod "pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620": Phase="Pending", Reason="", readiness=false. Elapsed: 10.936999ms -Jun 12 20:44:46.406: INFO: The phase of Pod pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:44:48.415: INFO: Pod "pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019287069s -Jun 12 20:44:48.415: INFO: The phase of Pod pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:44:50.414: INFO: Pod "pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620": Phase="Running", Reason="", readiness=true. Elapsed: 4.018951595s -Jun 12 20:44:50.414: INFO: The phase of Pod pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620 is Running (Ready = true) -Jun 12 20:44:50.414: INFO: Pod "pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620" satisfied condition "running and ready" -STEP: Cleaning up the secret 06/12/23 20:44:50.421 -STEP: Cleaning up the configmap 06/12/23 20:44:50.439 -STEP: Cleaning up the pod 06/12/23 20:44:50.457 -[AfterEach] [sig-storage] EmptyDir wrapper volumes +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Jul 27 01:33:26.366: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jul 27 01:33:26.393: INFO: Waiting for terminating namespaces to be deleted... 
+Jul 27 01:33:26.427: INFO: +Logging pods the apiserver thinks is on node 10.245.128.17 before test +Jul 27 01:33:26.485: INFO: calico-node-6gb7d from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container calico-node ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: ibm-keepalived-watcher-krnnt from kube-system started at 2023-07-26 23:12:13 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: ibm-master-proxy-static-10.245.128.17 from kube-system started at 2023-07-26 23:12:09 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container pause ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: ibm-vpc-block-csi-controller-0 from kube-system started at 2023-07-26 23:25:41 +0000 UTC (7 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container csi-attacher ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container csi-provisioner ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container csi-resizer ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container csi-snapshotter ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container iks-vpc-block-driver ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: ibm-vpc-block-csi-node-pb2sj from kube-system started at 2023-07-26 23:12:13 +0000 UTC (4 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: vpn-7d8b749c64-87d9s from kube-system started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container vpn ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: tuned-wnh5v from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container tuned ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: csi-snapshot-controller-5b77984679-frszr from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container snapshot-controller ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: csi-snapshot-webhook-78b8c8d77c-2pk6s from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container webhook ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: console-7fd48bd95f-wksvb from openshift-console started at 2023-07-26 23:27:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container console ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: downloads-6874b45df6-w7xkq from openshift-console started at 2023-07-26 23:22:05 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container download-server ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: 
dns-default-5mw2g from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container dns ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: node-resolver-2kt92 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: node-ca-pmxp9 from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container node-ca ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: ingress-canary-wh5qj from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: router-default-865b575f54-qjwfv from openshift-ingress started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container router ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: openshift-kube-proxy-r7t77 from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: migrator-77d7ddf546-9g7xm from openshift-kube-storage-version-migrator started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container migrator ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: certified-operators-qlqcc from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container registry-server ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: community-operators-dtgmg from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container registry-server ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: redhat-marketplace-vnvdb from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container registry-server ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: redhat-operators-9qw52 from openshift-marketplace started at 2023-07-27 01:30:34 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container registry-server ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-07-26 23:27:44 +0000 UTC (6 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container alertmanager ready: true, restart count 1 +Jul 27 01:33:26.485: INFO: Container alertmanager-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: node-exporter-2tscc from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: 
true, restart count 0 +Jul 27 01:33:26.485: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: prometheus-adapter-657855c676-qlc95 from openshift-monitoring started at 2023-07-26 23:26:23 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container prometheus-adapter ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-07-26 23:27:58 +0000 UTC (6 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container prometheus ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container prometheus-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container thanos-sidecar ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-hct4l from openshift-monitoring started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: thanos-querier-7f9c896d7f-xqld6 from openshift-monitoring started at 2023-07-26 23:26:32 +0000 UTC (6 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container oauth-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container thanos-query ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: multus-5x56j from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: multus-additional-cni-plugins-p7gf5 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: multus-admission-controller-8ccd764f4-j68g7 from openshift-multus started at 2023-07-26 23:25:38 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container multus-admission-controller ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: network-metrics-daemon-djvdx from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: network-check-target-2j7hq from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: collect-profiles-28173660-9rsfz from openshift-operator-lifecycle-manager started at 2023-07-27 01:00:00 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: 
INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 01:33:26.485: INFO: collect-profiles-28173690-9m5v7 from openshift-operator-lifecycle-manager started at 2023-07-27 01:30:00 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 01:33:26.485: INFO: packageserver-b9964c68-p2fd4 from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container packageserver ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: service-ca-665db46585-9cprv from openshift-service-ca started at 2023-07-26 23:21:59 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container service-ca-controller ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: sonobuoy-e2e-job-17fd703895604ed7 from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container e2e ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: Container systemd-logs ready: true, restart count 0 +Jul 27 01:33:26.485: INFO: tigera-operator-5b48cf996b-5zb5v from tigera-operator started at 2023-07-26 23:12:21 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.485: INFO: Container tigera-operator ready: true, restart count 6 +Jul 27 01:33:26.485: INFO: +Logging pods the apiserver thinks is on node 10.245.128.18 before test +Jul 27 01:33:26.623: INFO: calico-kube-controllers-5575667dcd-ps6n9 from calico-system started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container calico-kube-controllers ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: calico-node-2vsm9 from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container calico-node ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: calico-typha-5549cc5cdc-nsmq8 from calico-system started at 2023-07-26 23:19:56 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container calico-typha ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: managed-storage-validation-webhooks-6dfcff48fb-4xxsq from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: managed-storage-validation-webhooks-6dfcff48fb-k6pcc from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: managed-storage-validation-webhooks-6dfcff48fb-swht2 from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container managed-storage-validation-webhooks ready: true, restart count 1 +Jul 27 01:33:26.623: INFO: ibm-keepalived-watcher-wjqkn from kube-system started at 2023-07-26 23:12:23 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container 
keepalived-watcher ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: ibm-master-proxy-static-10.245.128.18 from kube-system started at 2023-07-26 23:12:20 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container pause ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: ibm-storage-metrics-agent-9fd89b544-292dm from kube-system started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container ibm-storage-metrics-agent ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: ibm-vpc-block-csi-node-lp4cr from kube-system started at 2023-07-26 23:12:23 +0000 UTC (4 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: cluster-node-tuning-operator-5b85c5d47b-9cbp5 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: tuned-zxrv4 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container tuned ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: cluster-samples-operator-588cc6f8cc-fh5hj from openshift-cluster-samples-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container cluster-samples-operator ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: cluster-storage-operator-586d5b4d95-tq97j from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container cluster-storage-operator ready: true, restart count 1 +Jul 27 01:33:26.623: INFO: csi-snapshot-controller-operator-7c998b6874-9flch from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: console-operator-8486d48d6-4xzr7 from openshift-console-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container console-operator ready: true, restart count 1 +Jul 27 01:33:26.623: INFO: Container conversion-webhook-server ready: true, restart count 2 +Jul 27 01:33:26.623: INFO: dns-operator-7c549b76fd-t56tt from openshift-dns-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container dns-operator ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: dns-default-r982z from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container dns ready: true, restart count 0 +Jul 27 
01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: node-resolver-txjwq from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: cluster-image-registry-operator-96d4d84cf-65k8l from openshift-image-registry started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container cluster-image-registry-operator ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: node-ca-ntzct from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container node-ca ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: ingress-canary-jphk8 from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: ingress-operator-64bc7f7964-9sbtr from openshift-ingress-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container ingress-operator ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: insights-operator-5db47f7654-r8xdq from openshift-insights started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container insights-operator ready: true, restart count 1 +Jul 27 01:33:26.623: INFO: openshift-kube-proxy-6hxmn from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: kube-storage-version-migrator-operator-f4b8bf677-c24bz from openshift-kube-storage-version-migrator-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 +Jul 27 01:33:26.623: INFO: marketplace-operator-5ddbd9fdbc-lrhrq from openshift-marketplace started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container marketplace-operator ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: cluster-monitoring-operator-7448698f65-65wn9 from openshift-monitoring started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container cluster-monitoring-operator ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: node-exporter-d46sh from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: multus-additional-cni-plugins-njhzm from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: multus-admission-controller-8ccd764f4-7kmkg from openshift-multus started at 2023-07-26 23:25:53 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container 
kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container multus-admission-controller ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: multus-zhftn from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: network-metrics-daemon-cglg2 from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: network-check-source-6777f6456-pt5nn from openshift-network-diagnostics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container check-endpoints ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: network-check-target-85dgs from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: network-operator-6dddb4f685-gc764 from openshift-network-operator started at 2023-07-26 23:17:11 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container network-operator ready: true, restart count 1 +Jul 27 01:33:26.623: INFO: catalog-operator-69ccd5899d-lrpkv from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container catalog-operator ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: collect-profiles-28173675-xp5k7 from openshift-operator-lifecycle-manager started at 2023-07-27 01:15:00 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 01:33:26.623: INFO: olm-operator-8448b5677d-bf2sl from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container olm-operator ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: package-server-manager-579d664b8c-klrwt from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container package-server-manager ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: packageserver-b9964c68-6gdlp from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container packageserver ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: metrics-6ff747d58d-llt7w from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container metrics ready: true, restart count 2 +Jul 27 01:33:26.623: INFO: push-gateway-6448c6788-hrxtl from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container push-gateway ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: service-ca-operator-5db987957b-pftl9 from openshift-service-ca-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container service-ca-operator ready: true, restart count 1 +Jul 27 01:33:26.623: INFO: sonobuoy from sonobuoy started at 2023-07-27 
01:26:57 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-7p2cx from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.623: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: Container systemd-logs ready: true, restart count 0 +Jul 27 01:33:26.623: INFO: +Logging pods the apiserver thinks is on node 10.245.128.19 before test +Jul 27 01:33:26.781: INFO: calico-node-tnbmn from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.781: INFO: Container calico-node ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: calico-typha-5549cc5cdc-25l9k from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.781: INFO: Container calico-typha ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: ibm-keepalived-watcher-228gb from kube-system started at 2023-07-26 23:12:15 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.781: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: ibm-master-proxy-static-10.245.128.19 from kube-system started at 2023-07-26 23:12:13 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.781: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: Container pause ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: ibm-vpc-block-csi-node-m8dqf from kube-system started at 2023-07-26 23:12:15 +0000 UTC (4 container statuses recorded) +Jul 27 01:33:26.781: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: tuned-8xqng from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.781: INFO: Container tuned ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: csi-snapshot-controller-5b77984679-2r5mm from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.781: INFO: Container snapshot-controller ready: true, restart count 0 +Jul 27 01:33:26.781: INFO: csi-snapshot-webhook-78b8c8d77c-hmxw5 from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container webhook ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: console-7fd48bd95f-ws8gt from openshift-console started at 2023-07-26 23:28:05 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container console ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: downloads-6874b45df6-fr7cm from openshift-console started at 2023-07-26 23:22:05 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container download-server ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: dns-default-vxt6p from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container dns ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, 
restart count 0 +Jul 27 01:33:26.782: INFO: node-resolver-s2q44 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: image-pruner-28173600-njvcp from openshift-image-registry started at 2023-07-27 00:00:00 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container image-pruner ready: false, restart count 0 +Jul 27 01:33:26.782: INFO: image-registry-69fbbd6d88-9n62b from openshift-image-registry started at 2023-07-26 23:26:03 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container registry ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: node-ca-kz4vp from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container node-ca ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: ingress-canary-7kzx5 from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: router-default-865b575f54-vf68k from openshift-ingress started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container router ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: openshift-kube-proxy-4qg5c from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-07-26 23:28:18 +0000 UTC (6 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container alertmanager ready: true, restart count 1 +Jul 27 01:33:26.782: INFO: Container alertmanager-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: kube-state-metrics-575bd9d6b6-mg92f from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (3 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-state-metrics ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: node-exporter-vz8m9 from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: openshift-state-metrics-99754b784-glvlb from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (3 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container openshift-state-metrics ready: true, restart count 0 
+Jul 27 01:33:26.782: INFO: prometheus-adapter-657855c676-qcpn4 from openshift-monitoring started at 2023-07-26 23:26:23 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container prometheus-adapter ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: prometheus-k8s-0 from openshift-monitoring started at 2023-07-26 23:28:16 +0000 UTC (6 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container prometheus ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container prometheus-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container thanos-sidecar ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: prometheus-operator-765bbdfd45-ffb59 from openshift-monitoring started at 2023-07-26 23:26:06 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container prometheus-operator ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-h6s7j from openshift-monitoring started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: telemeter-client-c964ff8c9-xlf8c from openshift-monitoring started at 2023-07-26 23:26:24 +0000 UTC (3 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container reload ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container telemeter-client ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: thanos-querier-7f9c896d7f-77gq9 from openshift-monitoring started at 2023-07-26 23:26:32 +0000 UTC (6 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container oauth-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container thanos-query ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: multus-287s2 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: multus-additional-cni-plugins-xns7c from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: network-metrics-daemon-xpw2q from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: network-check-target-hf22d from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 
01:33:26.782: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-p74pn from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: Container systemd-logs ready: true, restart count 0 +Jul 27 01:33:26.782: INFO: sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a from sysctl-7508 started at 2023-07-27 01:33:15 +0000 UTC (1 container statuses recorded) +Jul 27 01:33:26.782: INFO: Container test-container ready: false, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 +STEP: Trying to schedule Pod with nonempty NodeSelector. 07/27/23 01:33:26.782 +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.1775957ecc0ee0f5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 07/27/23 01:33:27.011 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 20:44:50.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes +Jul 27 01:33:27.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-wrapper-7096" for this suite. 06/12/23 20:44:50.506 +STEP: Destroying namespace "sched-pred-9226" for this suite. 
07/27/23 01:33:27.999 ------------------------------ -• [4.307 seconds] -[sig-storage] EmptyDir wrapper volumes -test/e2e/storage/utils/framework.go:23 - should not conflict [Conformance] - test/e2e/storage/empty_dir_wrapper.go:67 +• [1.715 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir wrapper volumes + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:44:46.22 - Jun 12 20:44:46.220: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir-wrapper 06/12/23 20:44:46.222 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:46.306 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:46.314 - [BeforeEach] [sig-storage] EmptyDir wrapper volumes + STEP: Creating a kubernetes client 07/27/23 01:33:26.309 + Jul 27 01:33:26.310: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sched-pred 07/27/23 01:33:26.31 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:26.348 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:26.357 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] should not conflict [Conformance] - test/e2e/storage/empty_dir_wrapper.go:67 - Jun 12 20:44:46.395: INFO: Waiting up to 5m0s for pod "pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620" in namespace "emptydir-wrapper-7096" to be "running and ready" - Jun 12 20:44:46.406: INFO: Pod "pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620": Phase="Pending", Reason="", readiness=false. Elapsed: 10.936999ms - Jun 12 20:44:46.406: INFO: The phase of Pod pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:44:48.415: INFO: Pod "pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019287069s - Jun 12 20:44:48.415: INFO: The phase of Pod pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:44:50.414: INFO: Pod "pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620": Phase="Running", Reason="", readiness=true. Elapsed: 4.018951595s - Jun 12 20:44:50.414: INFO: The phase of Pod pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620 is Running (Ready = true) - Jun 12 20:44:50.414: INFO: Pod "pod-secrets-ac5915b0-4338-4925-a65c-ea30b9520620" satisfied condition "running and ready" - STEP: Cleaning up the secret 06/12/23 20:44:50.421 - STEP: Cleaning up the configmap 06/12/23 20:44:50.439 - STEP: Cleaning up the pod 06/12/23 20:44:50.457 - [AfterEach] [sig-storage] EmptyDir wrapper volumes + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Jul 27 01:33:26.366: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Jul 27 01:33:26.393: INFO: Waiting for terminating namespaces to be deleted... 
+ Jul 27 01:33:26.427: INFO: + Logging pods the apiserver thinks is on node 10.245.128.17 before test + Jul 27 01:33:26.485: INFO: calico-node-6gb7d from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container calico-node ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: ibm-keepalived-watcher-krnnt from kube-system started at 2023-07-26 23:12:13 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container keepalived-watcher ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: ibm-master-proxy-static-10.245.128.17 from kube-system started at 2023-07-26 23:12:09 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container pause ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: ibm-vpc-block-csi-controller-0 from kube-system started at 2023-07-26 23:25:41 +0000 UTC (7 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container csi-attacher ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container csi-provisioner ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container csi-resizer ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container csi-snapshotter ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container iks-vpc-block-driver ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: ibm-vpc-block-csi-node-pb2sj from kube-system started at 2023-07-26 23:12:13 +0000 UTC (4 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: vpn-7d8b749c64-87d9s from kube-system started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container vpn ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: tuned-wnh5v from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container tuned ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: csi-snapshot-controller-5b77984679-frszr from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container snapshot-controller ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: csi-snapshot-webhook-78b8c8d77c-2pk6s from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container webhook ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: console-7fd48bd95f-wksvb from openshift-console started at 2023-07-26 23:27:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container console ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: downloads-6874b45df6-w7xkq from openshift-console started at 2023-07-26 23:22:05 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container download-server ready: true, restart 
count 0 + Jul 27 01:33:26.485: INFO: dns-default-5mw2g from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container dns ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: node-resolver-2kt92 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: node-ca-pmxp9 from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container node-ca ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: ingress-canary-wh5qj from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: router-default-865b575f54-qjwfv from openshift-ingress started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container router ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: openshift-kube-proxy-r7t77 from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: migrator-77d7ddf546-9g7xm from openshift-kube-storage-version-migrator started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container migrator ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: certified-operators-qlqcc from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container registry-server ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: community-operators-dtgmg from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container registry-server ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: redhat-marketplace-vnvdb from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container registry-server ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: redhat-operators-9qw52 from openshift-marketplace started at 2023-07-27 01:30:34 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container registry-server ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-07-26 23:27:44 +0000 UTC (6 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container alertmanager ready: true, restart count 1 + Jul 27 01:33:26.485: INFO: Container alertmanager-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: node-exporter-2tscc from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses 
recorded) + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: prometheus-adapter-657855c676-qlc95 from openshift-monitoring started at 2023-07-26 23:26:23 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container prometheus-adapter ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-07-26 23:27:58 +0000 UTC (6 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container prometheus ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container prometheus-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container thanos-sidecar ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-hct4l from openshift-monitoring started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: thanos-querier-7f9c896d7f-xqld6 from openshift-monitoring started at 2023-07-26 23:26:32 +0000 UTC (6 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container oauth-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container thanos-query ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: multus-5x56j from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: multus-additional-cni-plugins-p7gf5 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: multus-admission-controller-8ccd764f4-j68g7 from openshift-multus started at 2023-07-26 23:25:38 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container multus-admission-controller ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: network-metrics-daemon-djvdx from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: network-check-target-2j7hq from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: collect-profiles-28173660-9rsfz from 
openshift-operator-lifecycle-manager started at 2023-07-27 01:00:00 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 01:33:26.485: INFO: collect-profiles-28173690-9m5v7 from openshift-operator-lifecycle-manager started at 2023-07-27 01:30:00 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 01:33:26.485: INFO: packageserver-b9964c68-p2fd4 from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container packageserver ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: service-ca-665db46585-9cprv from openshift-service-ca started at 2023-07-26 23:21:59 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container service-ca-controller ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: sonobuoy-e2e-job-17fd703895604ed7 from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container e2e ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: Container systemd-logs ready: true, restart count 0 + Jul 27 01:33:26.485: INFO: tigera-operator-5b48cf996b-5zb5v from tigera-operator started at 2023-07-26 23:12:21 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.485: INFO: Container tigera-operator ready: true, restart count 6 + Jul 27 01:33:26.485: INFO: + Logging pods the apiserver thinks is on node 10.245.128.18 before test + Jul 27 01:33:26.623: INFO: calico-kube-controllers-5575667dcd-ps6n9 from calico-system started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container calico-kube-controllers ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: calico-node-2vsm9 from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container calico-node ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: calico-typha-5549cc5cdc-nsmq8 from calico-system started at 2023-07-26 23:19:56 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container calico-typha ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: managed-storage-validation-webhooks-6dfcff48fb-4xxsq from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: managed-storage-validation-webhooks-6dfcff48fb-k6pcc from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: managed-storage-validation-webhooks-6dfcff48fb-swht2 from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container managed-storage-validation-webhooks ready: true, restart count 1 + Jul 27 01:33:26.623: INFO: 
ibm-keepalived-watcher-wjqkn from kube-system started at 2023-07-26 23:12:23 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container keepalived-watcher ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: ibm-master-proxy-static-10.245.128.18 from kube-system started at 2023-07-26 23:12:20 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container pause ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: ibm-storage-metrics-agent-9fd89b544-292dm from kube-system started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container ibm-storage-metrics-agent ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: ibm-vpc-block-csi-node-lp4cr from kube-system started at 2023-07-26 23:12:23 +0000 UTC (4 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: cluster-node-tuning-operator-5b85c5d47b-9cbp5 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: tuned-zxrv4 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container tuned ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: cluster-samples-operator-588cc6f8cc-fh5hj from openshift-cluster-samples-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container cluster-samples-operator ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: cluster-storage-operator-586d5b4d95-tq97j from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container cluster-storage-operator ready: true, restart count 1 + Jul 27 01:33:26.623: INFO: csi-snapshot-controller-operator-7c998b6874-9flch from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: console-operator-8486d48d6-4xzr7 from openshift-console-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container console-operator ready: true, restart count 1 + Jul 27 01:33:26.623: INFO: Container conversion-webhook-server ready: true, restart count 2 + Jul 27 01:33:26.623: INFO: dns-operator-7c549b76fd-t56tt from openshift-dns-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container dns-operator ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: 
dns-default-r982z from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container dns ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: node-resolver-txjwq from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: cluster-image-registry-operator-96d4d84cf-65k8l from openshift-image-registry started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container cluster-image-registry-operator ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: node-ca-ntzct from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container node-ca ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: ingress-canary-jphk8 from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: ingress-operator-64bc7f7964-9sbtr from openshift-ingress-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container ingress-operator ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: insights-operator-5db47f7654-r8xdq from openshift-insights started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container insights-operator ready: true, restart count 1 + Jul 27 01:33:26.623: INFO: openshift-kube-proxy-6hxmn from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: kube-storage-version-migrator-operator-f4b8bf677-c24bz from openshift-kube-storage-version-migrator-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 + Jul 27 01:33:26.623: INFO: marketplace-operator-5ddbd9fdbc-lrhrq from openshift-marketplace started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container marketplace-operator ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: cluster-monitoring-operator-7448698f65-65wn9 from openshift-monitoring started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container cluster-monitoring-operator ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: node-exporter-d46sh from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: multus-additional-cni-plugins-njhzm from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container kube-multus-additional-cni-plugins ready: true, restart 
count 0 + Jul 27 01:33:26.623: INFO: multus-admission-controller-8ccd764f4-7kmkg from openshift-multus started at 2023-07-26 23:25:53 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container multus-admission-controller ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: multus-zhftn from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: network-metrics-daemon-cglg2 from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: network-check-source-6777f6456-pt5nn from openshift-network-diagnostics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container check-endpoints ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: network-check-target-85dgs from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: network-operator-6dddb4f685-gc764 from openshift-network-operator started at 2023-07-26 23:17:11 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container network-operator ready: true, restart count 1 + Jul 27 01:33:26.623: INFO: catalog-operator-69ccd5899d-lrpkv from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container catalog-operator ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: collect-profiles-28173675-xp5k7 from openshift-operator-lifecycle-manager started at 2023-07-27 01:15:00 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 01:33:26.623: INFO: olm-operator-8448b5677d-bf2sl from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container olm-operator ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: package-server-manager-579d664b8c-klrwt from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container package-server-manager ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: packageserver-b9964c68-6gdlp from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container packageserver ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: metrics-6ff747d58d-llt7w from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container metrics ready: true, restart count 2 + Jul 27 01:33:26.623: INFO: push-gateway-6448c6788-hrxtl from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container push-gateway ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: service-ca-operator-5db987957b-pftl9 from 
openshift-service-ca-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container service-ca-operator ready: true, restart count 1 + Jul 27 01:33:26.623: INFO: sonobuoy from sonobuoy started at 2023-07-27 01:26:57 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container kube-sonobuoy ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-7p2cx from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.623: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: Container systemd-logs ready: true, restart count 0 + Jul 27 01:33:26.623: INFO: + Logging pods the apiserver thinks is on node 10.245.128.19 before test + Jul 27 01:33:26.781: INFO: calico-node-tnbmn from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.781: INFO: Container calico-node ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: calico-typha-5549cc5cdc-25l9k from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.781: INFO: Container calico-typha ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: ibm-keepalived-watcher-228gb from kube-system started at 2023-07-26 23:12:15 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.781: INFO: Container keepalived-watcher ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: ibm-master-proxy-static-10.245.128.19 from kube-system started at 2023-07-26 23:12:13 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.781: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: Container pause ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: ibm-vpc-block-csi-node-m8dqf from kube-system started at 2023-07-26 23:12:15 +0000 UTC (4 container statuses recorded) + Jul 27 01:33:26.781: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: tuned-8xqng from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.781: INFO: Container tuned ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: csi-snapshot-controller-5b77984679-2r5mm from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.781: INFO: Container snapshot-controller ready: true, restart count 0 + Jul 27 01:33:26.781: INFO: csi-snapshot-webhook-78b8c8d77c-hmxw5 from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container webhook ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: console-7fd48bd95f-ws8gt from openshift-console started at 2023-07-26 23:28:05 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container console ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: downloads-6874b45df6-fr7cm from openshift-console started at 2023-07-26 23:22:05 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container download-server ready: 
true, restart count 0 + Jul 27 01:33:26.782: INFO: dns-default-vxt6p from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container dns ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: node-resolver-s2q44 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: image-pruner-28173600-njvcp from openshift-image-registry started at 2023-07-27 00:00:00 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container image-pruner ready: false, restart count 0 + Jul 27 01:33:26.782: INFO: image-registry-69fbbd6d88-9n62b from openshift-image-registry started at 2023-07-26 23:26:03 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container registry ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: node-ca-kz4vp from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container node-ca ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: ingress-canary-7kzx5 from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: router-default-865b575f54-vf68k from openshift-ingress started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container router ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: openshift-kube-proxy-4qg5c from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-07-26 23:28:18 +0000 UTC (6 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container alertmanager ready: true, restart count 1 + Jul 27 01:33:26.782: INFO: Container alertmanager-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: kube-state-metrics-575bd9d6b6-mg92f from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (3 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-state-metrics ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: node-exporter-vz8m9 from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: openshift-state-metrics-99754b784-glvlb from openshift-monitoring started at 
2023-07-26 23:26:18 +0000 UTC (3 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container openshift-state-metrics ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: prometheus-adapter-657855c676-qcpn4 from openshift-monitoring started at 2023-07-26 23:26:23 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container prometheus-adapter ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: prometheus-k8s-0 from openshift-monitoring started at 2023-07-26 23:28:16 +0000 UTC (6 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container prometheus ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container prometheus-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container thanos-sidecar ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: prometheus-operator-765bbdfd45-ffb59 from openshift-monitoring started at 2023-07-26 23:26:06 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container prometheus-operator ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-h6s7j from openshift-monitoring started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: telemeter-client-c964ff8c9-xlf8c from openshift-monitoring started at 2023-07-26 23:26:24 +0000 UTC (3 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container reload ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container telemeter-client ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: thanos-querier-7f9c896d7f-77gq9 from openshift-monitoring started at 2023-07-26 23:26:32 +0000 UTC (6 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container oauth-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container thanos-query ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: multus-287s2 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: multus-additional-cni-plugins-xns7c from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: network-metrics-daemon-xpw2q from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container 
statuses recorded) + Jul 27 01:33:26.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: network-check-target-hf22d from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-p74pn from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: Container systemd-logs ready: true, restart count 0 + Jul 27 01:33:26.782: INFO: sysctl-965edbec-a44e-402d-82c3-bb76ee2b699a from sysctl-7508 started at 2023-07-27 01:33:15 +0000 UTC (1 container statuses recorded) + Jul 27 01:33:26.782: INFO: Container test-container ready: false, restart count 0 + [It] validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 + STEP: Trying to schedule Pod with nonempty NodeSelector. 07/27/23 01:33:26.782 + STEP: Considering event: + Type = [Warning], Name = [restricted-pod.1775957ecc0ee0f5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 07/27/23 01:33:27.011 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 20:44:50.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + Jul 27 01:33:27.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-wrapper-7096" for this suite. 06/12/23 20:44:50.506 + STEP: Destroying namespace "sched-pred-9226" for this suite. 07/27/23 01:33:27.999 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-storage] EmptyDir volumes - should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:147 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:884 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:44:50.533 -Jun 12 20:44:50.533: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 20:44:50.537 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:50.589 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:50.598 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 01:33:28.025 +Jul 27 01:33:28.025: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 01:33:28.026 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:28.077 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:28.086 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:147 -STEP: Creating a pod to test emptydir 0777 on tmpfs 06/12/23 20:44:50.606 -Jun 12 20:44:50.635: INFO: Waiting up to 5m0s for pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd" in namespace "emptydir-3158" to be "Succeeded or Failed" -Jun 12 20:44:50.643: INFO: Pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.914571ms -Jun 12 20:44:52.651: INFO: Pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015689601s -Jun 12 20:44:54.652: INFO: Pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016286424s -Jun 12 20:44:56.652: INFO: Pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016645539s -STEP: Saw pod success 06/12/23 20:44:56.652 -Jun 12 20:44:56.652: INFO: Pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd" satisfied condition "Succeeded or Failed" -Jun 12 20:44:56.659: INFO: Trying to get logs from node 10.138.75.70 pod pod-2d4f80c6-4846-4fbc-8a69-032a475288fd container test-container: -STEP: delete the pod 06/12/23 20:44:56.69 -Jun 12 20:44:56.713: INFO: Waiting for pod pod-2d4f80c6-4846-4fbc-8a69-032a475288fd to disappear -Jun 12 20:44:56.719: INFO: Pod pod-2d4f80c6-4846-4fbc-8a69-032a475288fd no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[It] should be able to update and delete ResourceQuota. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:884 +STEP: Creating a ResourceQuota 07/27/23 01:33:28.096 +STEP: Getting a ResourceQuota 07/27/23 01:33:28.11 +STEP: Updating a ResourceQuota 07/27/23 01:33:28.119 +STEP: Verifying a ResourceQuota was modified 07/27/23 01:33:28.136 +STEP: Deleting a ResourceQuota 07/27/23 01:33:28.145 +STEP: Verifying the deleted ResourceQuota 07/27/23 01:33:28.161 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 20:44:56.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 01:33:28.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-3158" for this suite. 06/12/23 20:44:56.732 +STEP: Destroying namespace "resourcequota-5456" for this suite. 07/27/23 01:33:28.222 ------------------------------ -• [SLOW TEST] [6.219 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:147 +• [0.224 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:884 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:44:50.533 - Jun 12 20:44:50.533: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 20:44:50.537 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:50.589 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:50.598 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 01:33:28.025 + Jul 27 01:33:28.025: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 01:33:28.026 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:28.077 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:28.086 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:147 - STEP: Creating a pod to test emptydir 0777 on tmpfs 06/12/23 20:44:50.606 - Jun 12 20:44:50.635: INFO: Waiting up to 5m0s for pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd" in namespace "emptydir-3158" to be "Succeeded or Failed" - Jun 12 20:44:50.643: INFO: Pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.914571ms - Jun 12 20:44:52.651: INFO: Pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015689601s - Jun 12 20:44:54.652: INFO: Pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016286424s - Jun 12 20:44:56.652: INFO: Pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016645539s - STEP: Saw pod success 06/12/23 20:44:56.652 - Jun 12 20:44:56.652: INFO: Pod "pod-2d4f80c6-4846-4fbc-8a69-032a475288fd" satisfied condition "Succeeded or Failed" - Jun 12 20:44:56.659: INFO: Trying to get logs from node 10.138.75.70 pod pod-2d4f80c6-4846-4fbc-8a69-032a475288fd container test-container: - STEP: delete the pod 06/12/23 20:44:56.69 - Jun 12 20:44:56.713: INFO: Waiting for pod pod-2d4f80c6-4846-4fbc-8a69-032a475288fd to disappear - Jun 12 20:44:56.719: INFO: Pod pod-2d4f80c6-4846-4fbc-8a69-032a475288fd no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [It] should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:884 + STEP: Creating a ResourceQuota 07/27/23 01:33:28.096 + STEP: Getting a ResourceQuota 07/27/23 01:33:28.11 + STEP: Updating a ResourceQuota 07/27/23 01:33:28.119 + STEP: Verifying a ResourceQuota was modified 07/27/23 01:33:28.136 + STEP: Deleting a ResourceQuota 07/27/23 01:33:28.145 + STEP: Verifying the deleted ResourceQuota 07/27/23 01:33:28.161 + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 20:44:56.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 01:33:28.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-3158" for this suite. 06/12/23 20:44:56.732 + STEP: Destroying namespace "resourcequota-5456" for this suite. 
07/27/23 01:33:28.222 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] EndpointSlice - should support creating EndpointSlice API operations [Conformance] - test/e2e/network/endpointslice.go:353 -[BeforeEach] [sig-network] EndpointSlice +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:374 +[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:44:56.76 -Jun 12 20:44:56.760: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename endpointslice 06/12/23 20:44:56.763 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:56.814 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:56.824 -[BeforeEach] [sig-network] EndpointSlice +STEP: Creating a kubernetes client 07/27/23 01:33:28.25 +Jul 27 01:33:28.250: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename daemonsets 07/27/23 01:33:28.251 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:28.295 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:28.306 +[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] EndpointSlice - test/e2e/network/endpointslice.go:52 -[It] should support creating EndpointSlice API operations [Conformance] - test/e2e/network/endpointslice.go:353 -STEP: getting /apis 06/12/23 20:44:56.839 -STEP: getting /apis/discovery.k8s.io 06/12/23 20:44:56.848 -STEP: getting /apis/discovery.k8s.iov1 06/12/23 20:44:56.851 -STEP: creating 06/12/23 20:44:56.855 -STEP: getting 06/12/23 20:44:56.905 -STEP: listing 06/12/23 20:44:56.918 -STEP: watching 06/12/23 20:44:56.93 -Jun 12 20:44:56.930: INFO: starting watch -STEP: cluster-wide listing 06/12/23 20:44:56.933 -STEP: cluster-wide watching 06/12/23 20:44:56.953 -Jun 12 20:44:56.953: INFO: starting watch -STEP: patching 06/12/23 20:44:56.956 -STEP: updating 06/12/23 20:44:56.97 -Jun 12 20:44:56.998: INFO: waiting for watch events with expected annotations -Jun 12 20:44:56.998: INFO: saw patched and updated annotations -STEP: deleting 06/12/23 20:44:56.998 -STEP: deleting a collection 06/12/23 20:44:57.046 -[AfterEach] [sig-network] EndpointSlice +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:374 +Jul 27 01:33:28.380: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. 
07/27/23 01:33:28.394 +Jul 27 01:33:28.414: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:33:28.414: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 01:33:29.440: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:33:29.440: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 01:33:30.437: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Jul 27 01:33:30.437: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 01:33:31.438: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jul 27 01:33:31.438: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Update daemon pods image. 07/27/23 01:33:31.484 +STEP: Check that daemon pods images are updated. 07/27/23 01:33:31.519 +Jul 27 01:33:31.529: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:31.529: INFO: Wrong image for pod: daemon-set-c2xb5. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:31.529: INFO: Wrong image for pod: daemon-set-fgnxr. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:32.556: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:32.556: INFO: Wrong image for pod: daemon-set-c2xb5. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:33.554: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:33.554: INFO: Wrong image for pod: daemon-set-c2xb5. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:33.554: INFO: Pod daemon-set-pzmck is not available +Jul 27 01:33:34.560: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:34.560: INFO: Wrong image for pod: daemon-set-c2xb5. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:34.560: INFO: Pod daemon-set-pzmck is not available +Jul 27 01:33:35.558: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:36.565: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:37.566: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jul 27 01:33:37.566: INFO: Pod daemon-set-wsjn6 is not available +Jul 27 01:33:38.558: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. 
+Jul 27 01:33:39.563: INFO: Pod daemon-set-lmq5l is not available +Jul 27 01:33:40.556: INFO: Pod daemon-set-lmq5l is not available +STEP: Check that daemon pods are still running on every node of the cluster. 07/27/23 01:33:40.574 +Jul 27 01:33:40.594: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:33:40.595: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 01:33:41.624: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jul 27 01:33:41.624: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 07/27/23 01:33:41.67 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4264, will wait for the garbage collector to delete the pods 07/27/23 01:33:41.67 +Jul 27 01:33:41.741: INFO: Deleting DaemonSet.extensions daemon-set took: 12.094991ms +Jul 27 01:33:41.841: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.251278ms +Jul 27 01:33:44.550: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:33:44.550: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jul 27 01:33:44.560: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"63967"},"items":null} + +Jul 27 01:33:44.567: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"63967"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 20:44:57.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] EndpointSlice +Jul 27 01:33:44.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] EndpointSlice +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] EndpointSlice +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "endpointslice-499" for this suite. 06/12/23 20:44:57.108 +STEP: Destroying namespace "daemonsets-4264" for this suite. 
07/27/23 01:33:44.617 ------------------------------ -• [0.378 seconds] -[sig-network] EndpointSlice -test/e2e/network/common/framework.go:23 - should support creating EndpointSlice API operations [Conformance] - test/e2e/network/endpointslice.go:353 +• [SLOW TEST] [16.388 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:374 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] EndpointSlice + [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:44:56.76 - Jun 12 20:44:56.760: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename endpointslice 06/12/23 20:44:56.763 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:56.814 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:56.824 - [BeforeEach] [sig-network] EndpointSlice + STEP: Creating a kubernetes client 07/27/23 01:33:28.25 + Jul 27 01:33:28.250: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename daemonsets 07/27/23 01:33:28.251 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:28.295 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:28.306 + [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] EndpointSlice - test/e2e/network/endpointslice.go:52 - [It] should support creating EndpointSlice API operations [Conformance] - test/e2e/network/endpointslice.go:353 - STEP: getting /apis 06/12/23 20:44:56.839 - STEP: getting /apis/discovery.k8s.io 06/12/23 20:44:56.848 - STEP: getting /apis/discovery.k8s.iov1 06/12/23 20:44:56.851 - STEP: creating 06/12/23 20:44:56.855 - STEP: getting 06/12/23 20:44:56.905 - STEP: listing 06/12/23 20:44:56.918 - STEP: watching 06/12/23 20:44:56.93 - Jun 12 20:44:56.930: INFO: starting watch - STEP: cluster-wide listing 06/12/23 20:44:56.933 - STEP: cluster-wide watching 06/12/23 20:44:56.953 - Jun 12 20:44:56.953: INFO: starting watch - STEP: patching 06/12/23 20:44:56.956 - STEP: updating 06/12/23 20:44:56.97 - Jun 12 20:44:56.998: INFO: waiting for watch events with expected annotations - Jun 12 20:44:56.998: INFO: saw patched and updated annotations - STEP: deleting 06/12/23 20:44:56.998 - STEP: deleting a collection 06/12/23 20:44:57.046 - [AfterEach] [sig-network] EndpointSlice + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:374 + Jul 27 01:33:28.380: INFO: Creating simple daemon set daemon-set + STEP: Check that daemon pods launch on every node of the cluster. 
07/27/23 01:33:28.394 + Jul 27 01:33:28.414: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:33:28.414: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 01:33:29.440: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:33:29.440: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 01:33:30.437: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Jul 27 01:33:30.437: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 01:33:31.438: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jul 27 01:33:31.438: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Update daemon pods image. 07/27/23 01:33:31.484 + STEP: Check that daemon pods images are updated. 07/27/23 01:33:31.519 + Jul 27 01:33:31.529: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:31.529: INFO: Wrong image for pod: daemon-set-c2xb5. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:31.529: INFO: Wrong image for pod: daemon-set-fgnxr. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:32.556: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:32.556: INFO: Wrong image for pod: daemon-set-c2xb5. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:33.554: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:33.554: INFO: Wrong image for pod: daemon-set-c2xb5. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:33.554: INFO: Pod daemon-set-pzmck is not available + Jul 27 01:33:34.560: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:34.560: INFO: Wrong image for pod: daemon-set-c2xb5. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:34.560: INFO: Pod daemon-set-pzmck is not available + Jul 27 01:33:35.558: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:36.565: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:37.566: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jul 27 01:33:37.566: INFO: Pod daemon-set-wsjn6 is not available + Jul 27 01:33:38.558: INFO: Wrong image for pod: daemon-set-8tz2c. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. 
+ Jul 27 01:33:39.563: INFO: Pod daemon-set-lmq5l is not available + Jul 27 01:33:40.556: INFO: Pod daemon-set-lmq5l is not available + STEP: Check that daemon pods are still running on every node of the cluster. 07/27/23 01:33:40.574 + Jul 27 01:33:40.594: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:33:40.595: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 01:33:41.624: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jul 27 01:33:41.624: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 07/27/23 01:33:41.67 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4264, will wait for the garbage collector to delete the pods 07/27/23 01:33:41.67 + Jul 27 01:33:41.741: INFO: Deleting DaemonSet.extensions daemon-set took: 12.094991ms + Jul 27 01:33:41.841: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.251278ms + Jul 27 01:33:44.550: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:33:44.550: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jul 27 01:33:44.560: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"63967"},"items":null} + + Jul 27 01:33:44.567: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"63967"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 20:44:57.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] EndpointSlice + Jul 27 01:33:44.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] EndpointSlice + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] EndpointSlice + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "endpointslice-499" for this suite. 06/12/23 20:44:57.108 + STEP: Destroying namespace "daemonsets-4264" for this suite. 
07/27/23 01:33:44.617 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSSSSSSS ------------------------------ -[sig-node] RuntimeClass - should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:129 -[BeforeEach] [sig-node] RuntimeClass +[sig-network] Service endpoints latency + should not be very high [Conformance] + test/e2e/network/service_latency.go:59 +[BeforeEach] [sig-network] Service endpoints latency set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:44:57.14 -Jun 12 20:44:57.140: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename runtimeclass 06/12/23 20:44:57.142 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:57.201 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:57.209 -[BeforeEach] [sig-node] RuntimeClass +STEP: Creating a kubernetes client 07/27/23 01:33:44.639 +Jul 27 01:33:44.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename svc-latency 07/27/23 01:33:44.64 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:44.679 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:44.688 +[BeforeEach] [sig-network] Service endpoints latency test/e2e/framework/metrics/init/init.go:31 -[It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:129 -Jun 12 20:44:57.264: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-2546 to be scheduled -Jun 12 20:44:57.272: INFO: 1 pods are not scheduled: [runtimeclass-2546/test-runtimeclass-runtimeclass-2546-preconfigured-handler-t4d98(00fb6c0b-6b8a-4485-9533-84e08a55e022)] -[AfterEach] [sig-node] RuntimeClass +[It] should not be very high [Conformance] + test/e2e/network/service_latency.go:59 +Jul 27 01:33:44.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-61 07/27/23 01:33:44.698 +W0727 01:33:44.724405 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "svc-latency-rc" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "svc-latency-rc" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "svc-latency-rc" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "svc-latency-rc" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +I0727 01:33:44.724571 20 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-61, replica count: 1 +I0727 01:33:45.776387 20 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0727 01:33:46.776848 20 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jul 27 01:33:46.938: INFO: Created: latency-svc-z474h +Jul 27 01:33:46.948: INFO: Got endpoints: latency-svc-z474h [70.660208ms] +Jul 27 01:33:46.988: INFO: Created: latency-svc-c9l85 +Jul 27 01:33:46.998: INFO: Got endpoints: latency-svc-c9l85 [49.164624ms] +Jul 
27 01:33:47.009: INFO: Created: latency-svc-pr826 +Jul 27 01:33:47.015: INFO: Got endpoints: latency-svc-pr826 [67.092307ms] +Jul 27 01:33:47.025: INFO: Created: latency-svc-prz7m +Jul 27 01:33:47.036: INFO: Got endpoints: latency-svc-prz7m [86.94388ms] +Jul 27 01:33:47.043: INFO: Created: latency-svc-xc7pw +Jul 27 01:33:47.054: INFO: Got endpoints: latency-svc-xc7pw [105.701757ms] +Jul 27 01:33:47.064: INFO: Created: latency-svc-4b6fv +Jul 27 01:33:47.075: INFO: Got endpoints: latency-svc-4b6fv [125.358693ms] +Jul 27 01:33:47.084: INFO: Created: latency-svc-lxvm6 +Jul 27 01:33:47.095: INFO: Got endpoints: latency-svc-lxvm6 [145.593209ms] +Jul 27 01:33:47.107: INFO: Created: latency-svc-qvjfl +Jul 27 01:33:47.116: INFO: Got endpoints: latency-svc-qvjfl [167.017329ms] +Jul 27 01:33:47.128: INFO: Created: latency-svc-7nvq4 +Jul 27 01:33:47.138: INFO: Got endpoints: latency-svc-7nvq4 [189.187965ms] +Jul 27 01:33:47.153: INFO: Created: latency-svc-cntx7 +Jul 27 01:33:47.163: INFO: Got endpoints: latency-svc-cntx7 [213.971199ms] +Jul 27 01:33:47.174: INFO: Created: latency-svc-67qkt +Jul 27 01:33:47.186: INFO: Got endpoints: latency-svc-67qkt [237.292803ms] +Jul 27 01:33:47.195: INFO: Created: latency-svc-8fvd9 +Jul 27 01:33:47.207: INFO: Got endpoints: latency-svc-8fvd9 [258.590945ms] +Jul 27 01:33:47.215: INFO: Created: latency-svc-nshjf +Jul 27 01:33:47.226: INFO: Got endpoints: latency-svc-nshjf [277.525322ms] +Jul 27 01:33:47.234: INFO: Created: latency-svc-nnd5m +Jul 27 01:33:47.245: INFO: Got endpoints: latency-svc-nnd5m [296.182309ms] +Jul 27 01:33:47.253: INFO: Created: latency-svc-rcd98 +Jul 27 01:33:47.263: INFO: Got endpoints: latency-svc-rcd98 [313.873863ms] +Jul 27 01:33:47.275: INFO: Created: latency-svc-dfm2v +Jul 27 01:33:47.284: INFO: Got endpoints: latency-svc-dfm2v [334.943907ms] +Jul 27 01:33:47.297: INFO: Created: latency-svc-p4pmn +Jul 27 01:33:47.306: INFO: Got endpoints: latency-svc-p4pmn [308.37464ms] +Jul 27 01:33:47.317: INFO: Created: latency-svc-dwmrl +Jul 27 01:33:47.325: INFO: Got endpoints: latency-svc-dwmrl [310.054525ms] +Jul 27 01:33:47.336: INFO: Created: latency-svc-th28s +Jul 27 01:33:47.345: INFO: Got endpoints: latency-svc-th28s [309.540418ms] +Jul 27 01:33:47.361: INFO: Created: latency-svc-r7f95 +Jul 27 01:33:47.368: INFO: Got endpoints: latency-svc-r7f95 [313.732958ms] +Jul 27 01:33:47.381: INFO: Created: latency-svc-hhqlg +Jul 27 01:33:47.391: INFO: Got endpoints: latency-svc-hhqlg [316.127879ms] +Jul 27 01:33:47.403: INFO: Created: latency-svc-p6d5k +Jul 27 01:33:47.414: INFO: Got endpoints: latency-svc-p6d5k [319.599784ms] +Jul 27 01:33:47.731: INFO: Created: latency-svc-lzsrp +Jul 27 01:33:47.731: INFO: Created: latency-svc-tbkvx +Jul 27 01:33:47.731: INFO: Created: latency-svc-qrq95 +Jul 27 01:33:47.732: INFO: Created: latency-svc-wgdkp +Jul 27 01:33:47.732: INFO: Created: latency-svc-44gc7 +Jul 27 01:33:47.732: INFO: Created: latency-svc-2tvdj +Jul 27 01:33:47.732: INFO: Created: latency-svc-f758b +Jul 27 01:33:47.732: INFO: Created: latency-svc-ncfqk +Jul 27 01:33:47.732: INFO: Created: latency-svc-9jlzm +Jul 27 01:33:47.732: INFO: Created: latency-svc-2wfsz +Jul 27 01:33:47.732: INFO: Created: latency-svc-vbkj2 +Jul 27 01:33:47.733: INFO: Created: latency-svc-glfv2 +Jul 27 01:33:47.733: INFO: Created: latency-svc-t9pqh +Jul 27 01:33:47.733: INFO: Created: latency-svc-pvnt2 +Jul 27 01:33:47.733: INFO: Created: latency-svc-pmsmx +Jul 27 01:33:47.738: INFO: Got endpoints: latency-svc-9jlzm [412.628174ms] +Jul 27 01:33:47.739: INFO: Got endpoints: 
latency-svc-pvnt2 [432.869175ms] +Jul 27 01:33:47.740: INFO: Got endpoints: latency-svc-ncfqk [394.518099ms] +Jul 27 01:33:47.740: INFO: Got endpoints: latency-svc-tbkvx [456.404278ms] +Jul 27 01:33:47.740: INFO: Got endpoints: latency-svc-glfv2 [372.396489ms] +Jul 27 01:33:47.766: INFO: Got endpoints: latency-svc-pmsmx [351.739586ms] +Jul 27 01:33:47.766: INFO: Got endpoints: latency-svc-lzsrp [502.928694ms] +Jul 27 01:33:47.768: INFO: Got endpoints: latency-svc-f758b [522.319045ms] +Jul 27 01:33:47.768: INFO: Got endpoints: latency-svc-t9pqh [541.452754ms] +Jul 27 01:33:47.768: INFO: Got endpoints: latency-svc-44gc7 [651.819429ms] +Jul 27 01:33:47.772: INFO: Got endpoints: latency-svc-qrq95 [381.328828ms] +Jul 27 01:33:47.772: INFO: Got endpoints: latency-svc-2tvdj [565.282079ms] +Jul 27 01:33:47.780: INFO: Got endpoints: latency-svc-2wfsz [593.738585ms] +Jul 27 01:33:47.789: INFO: Got endpoints: latency-svc-vbkj2 [650.323601ms] +Jul 27 01:33:47.789: INFO: Got endpoints: latency-svc-wgdkp [625.793406ms] +Jul 27 01:33:47.805: INFO: Created: latency-svc-r97vw +Jul 27 01:33:47.810: INFO: Got endpoints: latency-svc-r97vw [71.814822ms] +Jul 27 01:33:47.826: INFO: Created: latency-svc-lpdjs +Jul 27 01:33:47.839: INFO: Got endpoints: latency-svc-lpdjs [100.378018ms] +Jul 27 01:33:47.849: INFO: Created: latency-svc-g957f +Jul 27 01:33:47.858: INFO: Got endpoints: latency-svc-g957f [118.04166ms] +Jul 27 01:33:47.869: INFO: Created: latency-svc-r6w5j +Jul 27 01:33:47.880: INFO: Got endpoints: latency-svc-r6w5j [139.784715ms] +Jul 27 01:33:47.892: INFO: Created: latency-svc-2kcq6 +Jul 27 01:33:47.902: INFO: Got endpoints: latency-svc-2kcq6 [162.250689ms] +Jul 27 01:33:47.930: INFO: Created: latency-svc-tbjhx +Jul 27 01:33:47.930: INFO: Got endpoints: latency-svc-tbjhx [164.321674ms] +Jul 27 01:33:47.944: INFO: Created: latency-svc-bkql4 +Jul 27 01:33:47.947: INFO: Got endpoints: latency-svc-bkql4 [180.819909ms] +Jul 27 01:33:47.961: INFO: Created: latency-svc-mkb4l +Jul 27 01:33:47.972: INFO: Got endpoints: latency-svc-mkb4l [203.871947ms] +Jul 27 01:33:47.982: INFO: Created: latency-svc-hbfdw +Jul 27 01:33:47.993: INFO: Got endpoints: latency-svc-hbfdw [225.169832ms] +Jul 27 01:33:48.003: INFO: Created: latency-svc-vr7cr +Jul 27 01:33:48.014: INFO: Got endpoints: latency-svc-vr7cr [245.999915ms] +Jul 27 01:33:48.024: INFO: Created: latency-svc-szt72 +Jul 27 01:33:48.067: INFO: Got endpoints: latency-svc-szt72 [294.893535ms] +Jul 27 01:33:48.069: INFO: Created: latency-svc-tzm8s +Jul 27 01:33:48.110: INFO: Got endpoints: latency-svc-tzm8s [337.470053ms] +Jul 27 01:33:48.113: INFO: Created: latency-svc-4vm85 +Jul 27 01:33:48.119: INFO: Got endpoints: latency-svc-4vm85 [339.260212ms] +Jul 27 01:33:48.133: INFO: Created: latency-svc-sgwbr +Jul 27 01:33:48.140: INFO: Got endpoints: latency-svc-sgwbr [351.161162ms] +Jul 27 01:33:48.405: INFO: Created: latency-svc-k5vsr +Jul 27 01:33:48.418: INFO: Created: latency-svc-gwzbn +Jul 27 01:33:48.419: INFO: Created: latency-svc-s4ppk +Jul 27 01:33:48.419: INFO: Created: latency-svc-v724f +Jul 27 01:33:48.419: INFO: Created: latency-svc-94dk6 +Jul 27 01:33:48.420: INFO: Created: latency-svc-x9fc5 +Jul 27 01:33:48.420: INFO: Created: latency-svc-hskdr +Jul 27 01:33:48.420: INFO: Created: latency-svc-5dqf9 +Jul 27 01:33:48.420: INFO: Created: latency-svc-vk7g6 +Jul 27 01:33:48.420: INFO: Created: latency-svc-d8mnw +Jul 27 01:33:48.420: INFO: Created: latency-svc-nd2pr +Jul 27 01:33:48.420: INFO: Created: latency-svc-p6zts +Jul 27 01:33:48.420: INFO: Created: 
latency-svc-mjtdv +Jul 27 01:33:48.423: INFO: Created: latency-svc-rbwls +Jul 27 01:33:48.424: INFO: Created: latency-svc-v896d +Jul 27 01:33:48.424: INFO: Got endpoints: latency-svc-v724f [635.58311ms] +Jul 27 01:33:48.425: INFO: Got endpoints: latency-svc-k5vsr [614.751935ms] +Jul 27 01:33:48.425: INFO: Got endpoints: latency-svc-gwzbn [522.15281ms] +Jul 27 01:33:48.425: INFO: Got endpoints: latency-svc-p6zts [285.108539ms] +Jul 27 01:33:48.425: INFO: Got endpoints: latency-svc-v896d [411.105522ms] +Jul 27 01:33:48.431: INFO: Got endpoints: latency-svc-s4ppk [437.950229ms] +Jul 27 01:33:48.433: INFO: Got endpoints: latency-svc-hskdr [313.931024ms] +Jul 27 01:33:48.441: INFO: Got endpoints: latency-svc-5dqf9 [331.503218ms] +Jul 27 01:33:48.443: INFO: Got endpoints: latency-svc-mjtdv [585.496959ms] +Jul 27 01:33:48.444: INFO: Got endpoints: latency-svc-vk7g6 [496.641036ms] +Jul 27 01:33:48.444: INFO: Got endpoints: latency-svc-nd2pr [376.568661ms] +Jul 27 01:33:48.456: INFO: Got endpoints: latency-svc-x9fc5 [525.215054ms] +Jul 27 01:33:48.456: INFO: Got endpoints: latency-svc-94dk6 [616.362935ms] +Jul 27 01:33:48.462: INFO: Got endpoints: latency-svc-d8mnw [581.978742ms] +Jul 27 01:33:48.462: INFO: Got endpoints: latency-svc-rbwls [490.278805ms] +Jul 27 01:33:48.468: INFO: Created: latency-svc-rm8d4 +Jul 27 01:33:48.476: INFO: Got endpoints: latency-svc-rm8d4 [51.536738ms] +Jul 27 01:33:48.492: INFO: Created: latency-svc-nltqb +Jul 27 01:33:48.505: INFO: Got endpoints: latency-svc-nltqb [79.712722ms] +Jul 27 01:33:48.516: INFO: Created: latency-svc-gggtg +Jul 27 01:33:48.526: INFO: Got endpoints: latency-svc-gggtg [101.1614ms] +Jul 27 01:33:48.534: INFO: Created: latency-svc-pv7px +Jul 27 01:33:48.544: INFO: Got endpoints: latency-svc-pv7px [119.328598ms] +Jul 27 01:33:48.556: INFO: Created: latency-svc-fh6tm +Jul 27 01:33:48.564: INFO: Got endpoints: latency-svc-fh6tm [138.745581ms] +Jul 27 01:33:48.578: INFO: Created: latency-svc-76685 +Jul 27 01:33:48.589: INFO: Got endpoints: latency-svc-76685 [158.302814ms] +Jul 27 01:33:48.598: INFO: Created: latency-svc-9hvl5 +Jul 27 01:33:48.614: INFO: Got endpoints: latency-svc-9hvl5 [180.750466ms] +Jul 27 01:33:48.622: INFO: Created: latency-svc-lxqrp +Jul 27 01:33:48.634: INFO: Got endpoints: latency-svc-lxqrp [193.059735ms] +Jul 27 01:33:48.643: INFO: Created: latency-svc-kmb9t +Jul 27 01:33:48.654: INFO: Got endpoints: latency-svc-kmb9t [210.347816ms] +Jul 27 01:33:48.667: INFO: Created: latency-svc-vsslp +Jul 27 01:33:48.676: INFO: Got endpoints: latency-svc-vsslp [232.449808ms] +Jul 27 01:33:48.683: INFO: Created: latency-svc-b6jpc +Jul 27 01:33:48.693: INFO: Got endpoints: latency-svc-b6jpc [249.53422ms] +Jul 27 01:33:48.707: INFO: Created: latency-svc-5df8m +Jul 27 01:33:48.715: INFO: Got endpoints: latency-svc-5df8m [259.50474ms] +Jul 27 01:33:48.736: INFO: Created: latency-svc-pszn8 +Jul 27 01:33:48.746: INFO: Got endpoints: latency-svc-pszn8 [290.874378ms] +Jul 27 01:33:48.759: INFO: Created: latency-svc-6qw6l +Jul 27 01:33:48.768: INFO: Got endpoints: latency-svc-6qw6l [305.9929ms] +Jul 27 01:33:48.781: INFO: Created: latency-svc-j2rvw +Jul 27 01:33:48.792: INFO: Got endpoints: latency-svc-j2rvw [329.503144ms] +Jul 27 01:33:48.800: INFO: Created: latency-svc-45jmq +Jul 27 01:33:48.810: INFO: Got endpoints: latency-svc-45jmq [333.647526ms] +Jul 27 01:33:48.833: INFO: Created: latency-svc-9plk4 +Jul 27 01:33:48.839: INFO: Got endpoints: latency-svc-9plk4 [334.572435ms] +Jul 27 01:33:48.852: INFO: Created: latency-svc-pzccs +Jul 27 
01:33:48.865: INFO: Got endpoints: latency-svc-pzccs [339.335038ms] +Jul 27 01:33:48.873: INFO: Created: latency-svc-4vjcl +Jul 27 01:33:48.884: INFO: Got endpoints: latency-svc-4vjcl [340.683374ms] +Jul 27 01:33:48.895: INFO: Created: latency-svc-8zlqs +Jul 27 01:33:48.908: INFO: Got endpoints: latency-svc-8zlqs [343.965209ms] +Jul 27 01:33:48.918: INFO: Created: latency-svc-lldmr +Jul 27 01:33:48.926: INFO: Got endpoints: latency-svc-lldmr [336.66827ms] +Jul 27 01:33:48.932: INFO: Created: latency-svc-vv2gr +Jul 27 01:33:48.942: INFO: Got endpoints: latency-svc-vv2gr [328.114237ms] +Jul 27 01:33:48.955: INFO: Created: latency-svc-vs9pv +Jul 27 01:33:48.970: INFO: Got endpoints: latency-svc-vs9pv [335.383989ms] +Jul 27 01:33:48.978: INFO: Created: latency-svc-tkw7j +Jul 27 01:33:48.986: INFO: Got endpoints: latency-svc-tkw7j [331.887866ms] +Jul 27 01:33:48.998: INFO: Created: latency-svc-r88n7 +Jul 27 01:33:49.008: INFO: Got endpoints: latency-svc-r88n7 [331.866714ms] +Jul 27 01:33:49.028: INFO: Created: latency-svc-5dj55 +Jul 27 01:33:49.037: INFO: Got endpoints: latency-svc-5dj55 [343.67094ms] +Jul 27 01:33:49.043: INFO: Created: latency-svc-bcr98 +Jul 27 01:33:49.082: INFO: Got endpoints: latency-svc-bcr98 [366.650632ms] +Jul 27 01:33:49.094: INFO: Created: latency-svc-w6ss7 +Jul 27 01:33:49.104: INFO: Got endpoints: latency-svc-w6ss7 [357.367273ms] +Jul 27 01:33:49.116: INFO: Created: latency-svc-w2crd +Jul 27 01:33:49.125: INFO: Got endpoints: latency-svc-w2crd [357.175934ms] +Jul 27 01:33:49.138: INFO: Created: latency-svc-xbh56 +Jul 27 01:33:49.171: INFO: Got endpoints: latency-svc-xbh56 [378.745824ms] +Jul 27 01:33:49.182: INFO: Created: latency-svc-7xbwc +Jul 27 01:33:49.200: INFO: Got endpoints: latency-svc-7xbwc [390.445226ms] +Jul 27 01:33:49.207: INFO: Created: latency-svc-tnj5v +Jul 27 01:33:49.216: INFO: Got endpoints: latency-svc-tnj5v [376.758129ms] +Jul 27 01:33:49.232: INFO: Created: latency-svc-4t2kn +Jul 27 01:33:49.240: INFO: Got endpoints: latency-svc-4t2kn [374.702642ms] +Jul 27 01:33:49.266: INFO: Created: latency-svc-8wdjc +Jul 27 01:33:49.270: INFO: Got endpoints: latency-svc-8wdjc [385.295057ms] +Jul 27 01:33:49.284: INFO: Created: latency-svc-rtmn8 +Jul 27 01:33:49.304: INFO: Got endpoints: latency-svc-rtmn8 [396.193317ms] +Jul 27 01:33:49.308: INFO: Created: latency-svc-f6gpn +Jul 27 01:33:49.324: INFO: Got endpoints: latency-svc-f6gpn [398.134198ms] +Jul 27 01:33:49.333: INFO: Created: latency-svc-qc776 +Jul 27 01:33:49.355: INFO: Got endpoints: latency-svc-qc776 [413.557531ms] +Jul 27 01:33:49.374: INFO: Created: latency-svc-rqc2h +Jul 27 01:33:49.381: INFO: Got endpoints: latency-svc-rqc2h [410.807787ms] +Jul 27 01:33:49.399: INFO: Created: latency-svc-x5dck +Jul 27 01:33:49.408: INFO: Got endpoints: latency-svc-x5dck [422.112786ms] +Jul 27 01:33:49.426: INFO: Created: latency-svc-5zxqq +Jul 27 01:33:49.431: INFO: Got endpoints: latency-svc-5zxqq [423.233987ms] +Jul 27 01:33:49.447: INFO: Created: latency-svc-5qlv7 +Jul 27 01:33:49.458: INFO: Got endpoints: latency-svc-5qlv7 [420.485819ms] +Jul 27 01:33:49.468: INFO: Created: latency-svc-pp8tx +Jul 27 01:33:49.476: INFO: Got endpoints: latency-svc-pp8tx [394.331335ms] +Jul 27 01:33:49.494: INFO: Created: latency-svc-j5z6g +Jul 27 01:33:49.504: INFO: Got endpoints: latency-svc-j5z6g [400.021305ms] +Jul 27 01:33:49.517: INFO: Created: latency-svc-79mp8 +Jul 27 01:33:49.529: INFO: Got endpoints: latency-svc-79mp8 [403.295819ms] +Jul 27 01:33:49.535: INFO: Created: latency-svc-m7jsq +Jul 27 01:33:49.547: INFO: 
Got endpoints: latency-svc-m7jsq [376.702369ms] +Jul 27 01:33:49.556: INFO: Created: latency-svc-hrd88 +Jul 27 01:33:49.564: INFO: Got endpoints: latency-svc-hrd88 [347.956239ms] +Jul 27 01:33:49.573: INFO: Created: latency-svc-sm828 +Jul 27 01:33:49.583: INFO: Got endpoints: latency-svc-sm828 [382.369621ms] +Jul 27 01:33:49.596: INFO: Created: latency-svc-22jr9 +Jul 27 01:33:49.618: INFO: Got endpoints: latency-svc-22jr9 [378.258839ms] +Jul 27 01:33:49.623: INFO: Created: latency-svc-jcm6r +Jul 27 01:33:49.632: INFO: Got endpoints: latency-svc-jcm6r [362.816897ms] +Jul 27 01:33:49.645: INFO: Created: latency-svc-dtlbv +Jul 27 01:33:49.661: INFO: Got endpoints: latency-svc-dtlbv [357.066289ms] +Jul 27 01:33:49.668: INFO: Created: latency-svc-bmm8h +Jul 27 01:33:49.680: INFO: Got endpoints: latency-svc-bmm8h [355.615223ms] +Jul 27 01:33:49.690: INFO: Created: latency-svc-lg4gd +Jul 27 01:33:49.699: INFO: Got endpoints: latency-svc-lg4gd [343.66534ms] +Jul 27 01:33:49.714: INFO: Created: latency-svc-srglq +Jul 27 01:33:49.724: INFO: Got endpoints: latency-svc-srglq [343.377976ms] +Jul 27 01:33:49.735: INFO: Created: latency-svc-ntscw +Jul 27 01:33:49.751: INFO: Got endpoints: latency-svc-ntscw [342.975895ms] +Jul 27 01:33:49.753: INFO: Created: latency-svc-tfqs4 +Jul 27 01:33:49.762: INFO: Got endpoints: latency-svc-tfqs4 [330.383095ms] +Jul 27 01:33:49.773: INFO: Created: latency-svc-wl6d7 +Jul 27 01:33:49.780: INFO: Got endpoints: latency-svc-wl6d7 [322.629891ms] +Jul 27 01:33:49.796: INFO: Created: latency-svc-r25k2 +Jul 27 01:33:49.804: INFO: Got endpoints: latency-svc-r25k2 [327.967362ms] +Jul 27 01:33:49.817: INFO: Created: latency-svc-jcw6f +Jul 27 01:33:49.828: INFO: Got endpoints: latency-svc-jcw6f [324.265162ms] +Jul 27 01:33:49.838: INFO: Created: latency-svc-9kkxc +Jul 27 01:33:49.849: INFO: Got endpoints: latency-svc-9kkxc [320.325415ms] +Jul 27 01:33:49.861: INFO: Created: latency-svc-fv6ct +Jul 27 01:33:49.868: INFO: Got endpoints: latency-svc-fv6ct [320.645403ms] +Jul 27 01:33:49.880: INFO: Created: latency-svc-6cpvs +Jul 27 01:33:49.891: INFO: Got endpoints: latency-svc-6cpvs [326.901445ms] +Jul 27 01:33:49.901: INFO: Created: latency-svc-4gkjz +Jul 27 01:33:49.909: INFO: Got endpoints: latency-svc-4gkjz [326.346663ms] +Jul 27 01:33:49.926: INFO: Created: latency-svc-n8k45 +Jul 27 01:33:49.934: INFO: Got endpoints: latency-svc-n8k45 [316.110587ms] +Jul 27 01:33:49.945: INFO: Created: latency-svc-hmp6w +Jul 27 01:33:49.956: INFO: Got endpoints: latency-svc-hmp6w [323.4094ms] +Jul 27 01:33:49.966: INFO: Created: latency-svc-57jbk +Jul 27 01:33:49.978: INFO: Got endpoints: latency-svc-57jbk [317.296532ms] +Jul 27 01:33:49.992: INFO: Created: latency-svc-8vwkr +Jul 27 01:33:50.001: INFO: Got endpoints: latency-svc-8vwkr [320.946784ms] +Jul 27 01:33:50.021: INFO: Created: latency-svc-tvs2g +Jul 27 01:33:50.037: INFO: Got endpoints: latency-svc-tvs2g [337.748775ms] +Jul 27 01:33:50.033: INFO: Created: latency-svc-r95zd +Jul 27 01:33:50.048: INFO: Got endpoints: latency-svc-r95zd [323.956104ms] +Jul 27 01:33:50.054: INFO: Created: latency-svc-kg42m +Jul 27 01:33:50.074: INFO: Got endpoints: latency-svc-kg42m [323.124005ms] +Jul 27 01:33:50.091: INFO: Created: latency-svc-xzcmt +Jul 27 01:33:50.097: INFO: Got endpoints: latency-svc-xzcmt [335.472952ms] +Jul 27 01:33:50.113: INFO: Created: latency-svc-lwt4l +Jul 27 01:33:50.129: INFO: Got endpoints: latency-svc-lwt4l [348.344077ms] +Jul 27 01:33:50.135: INFO: Created: latency-svc-sj2ts +Jul 27 01:33:50.146: INFO: Got endpoints: 
latency-svc-sj2ts [341.982658ms] +Jul 27 01:33:50.163: INFO: Created: latency-svc-sfsld +Jul 27 01:33:50.183: INFO: Got endpoints: latency-svc-sfsld [355.164915ms] +Jul 27 01:33:50.216: INFO: Created: latency-svc-j28hq +Jul 27 01:33:50.233: INFO: Got endpoints: latency-svc-j28hq [383.628956ms] +Jul 27 01:33:50.249: INFO: Created: latency-svc-tpvh9 +Jul 27 01:33:50.249: INFO: Got endpoints: latency-svc-tpvh9 [381.008348ms] +Jul 27 01:33:50.278: INFO: Created: latency-svc-6rk7w +Jul 27 01:33:50.290: INFO: Got endpoints: latency-svc-6rk7w [398.959358ms] +Jul 27 01:33:50.306: INFO: Created: latency-svc-trdrl +Jul 27 01:33:50.317: INFO: Got endpoints: latency-svc-trdrl [407.939613ms] +Jul 27 01:33:50.332: INFO: Created: latency-svc-kf7dv +Jul 27 01:33:50.343: INFO: Got endpoints: latency-svc-kf7dv [408.681171ms] +Jul 27 01:33:50.357: INFO: Created: latency-svc-wjgk4 +Jul 27 01:33:50.368: INFO: Got endpoints: latency-svc-wjgk4 [412.258694ms] +Jul 27 01:33:50.403: INFO: Created: latency-svc-dq4qb +Jul 27 01:33:50.414: INFO: Got endpoints: latency-svc-dq4qb [435.440812ms] +Jul 27 01:33:50.429: INFO: Created: latency-svc-qpc2l +Jul 27 01:33:50.430: INFO: Got endpoints: latency-svc-qpc2l [428.93591ms] +Jul 27 01:33:50.455: INFO: Created: latency-svc-6sl2j +Jul 27 01:33:50.471: INFO: Got endpoints: latency-svc-6sl2j [434.035996ms] +Jul 27 01:33:50.484: INFO: Created: latency-svc-s48dh +Jul 27 01:33:50.511: INFO: Created: latency-svc-6rfd6 +Jul 27 01:33:50.520: INFO: Got endpoints: latency-svc-s48dh [471.670534ms] +Jul 27 01:33:50.529: INFO: Got endpoints: latency-svc-6rfd6 [432.385022ms] +Jul 27 01:33:50.538: INFO: Created: latency-svc-cffsk +Jul 27 01:33:50.551: INFO: Got endpoints: latency-svc-cffsk [454.010295ms] +Jul 27 01:33:50.566: INFO: Created: latency-svc-vqvtv +Jul 27 01:33:50.584: INFO: Created: latency-svc-ffdrc +Jul 27 01:33:50.589: INFO: Got endpoints: latency-svc-vqvtv [460.466805ms] +Jul 27 01:33:50.596: INFO: Got endpoints: latency-svc-ffdrc [449.592741ms] +Jul 27 01:33:50.610: INFO: Created: latency-svc-h8mkv +Jul 27 01:33:50.623: INFO: Got endpoints: latency-svc-h8mkv [439.890935ms] +Jul 27 01:33:50.634: INFO: Created: latency-svc-mfls5 +Jul 27 01:33:50.648: INFO: Got endpoints: latency-svc-mfls5 [415.316213ms] +Jul 27 01:33:50.654: INFO: Created: latency-svc-q2js9 +Jul 27 01:33:50.666: INFO: Got endpoints: latency-svc-q2js9 [416.711876ms] +Jul 27 01:33:50.694: INFO: Created: latency-svc-zgdsc +Jul 27 01:33:50.723: INFO: Got endpoints: latency-svc-zgdsc [433.289265ms] +Jul 27 01:33:50.741: INFO: Created: latency-svc-bdbds +Jul 27 01:33:50.747: INFO: Got endpoints: latency-svc-bdbds [429.920123ms] +Jul 27 01:33:50.786: INFO: Created: latency-svc-5kqsd +Jul 27 01:33:50.797: INFO: Got endpoints: latency-svc-5kqsd [454.04183ms] +Jul 27 01:33:50.870: INFO: Created: latency-svc-rcc2d +Jul 27 01:33:50.880: INFO: Got endpoints: latency-svc-rcc2d [512.074796ms] +Jul 27 01:33:50.890: INFO: Created: latency-svc-5656w +Jul 27 01:33:50.902: INFO: Got endpoints: latency-svc-5656w [488.573493ms] +Jul 27 01:33:50.912: INFO: Created: latency-svc-hr47t +Jul 27 01:33:50.923: INFO: Got endpoints: latency-svc-hr47t [493.230234ms] +Jul 27 01:33:50.969: INFO: Created: latency-svc-gpdzk +Jul 27 01:33:50.973: INFO: Got endpoints: latency-svc-gpdzk [502.076776ms] +Jul 27 01:33:50.993: INFO: Created: latency-svc-4smhp +Jul 27 01:33:51.005: INFO: Got endpoints: latency-svc-4smhp [484.631135ms] +Jul 27 01:33:51.017: INFO: Created: latency-svc-mdght +Jul 27 01:33:51.025: INFO: Got endpoints: latency-svc-mdght 
[495.332796ms] +Jul 27 01:33:51.054: INFO: Created: latency-svc-997w7 +Jul 27 01:33:51.060: INFO: Got endpoints: latency-svc-997w7 [508.908391ms] +Jul 27 01:33:51.074: INFO: Created: latency-svc-k6mxt +Jul 27 01:33:51.108: INFO: Got endpoints: latency-svc-k6mxt [518.531069ms] +Jul 27 01:33:51.115: INFO: Created: latency-svc-hcgr2 +Jul 27 01:33:51.132: INFO: Got endpoints: latency-svc-hcgr2 [535.925298ms] +Jul 27 01:33:51.142: INFO: Created: latency-svc-rs6pl +Jul 27 01:33:51.152: INFO: Got endpoints: latency-svc-rs6pl [529.200821ms] +Jul 27 01:33:51.192: INFO: Created: latency-svc-dz8js +Jul 27 01:33:51.202: INFO: Got endpoints: latency-svc-dz8js [554.340049ms] +Jul 27 01:33:51.231: INFO: Created: latency-svc-l6knc +Jul 27 01:33:51.244: INFO: Got endpoints: latency-svc-l6knc [577.777672ms] +Jul 27 01:33:51.295: INFO: Created: latency-svc-wj7fb +Jul 27 01:33:51.304: INFO: Got endpoints: latency-svc-wj7fb [580.508586ms] +Jul 27 01:33:51.324: INFO: Created: latency-svc-5rlnd +Jul 27 01:33:51.364: INFO: Got endpoints: latency-svc-5rlnd [616.725651ms] +Jul 27 01:33:51.398: INFO: Created: latency-svc-5vqzq +Jul 27 01:33:51.410: INFO: Got endpoints: latency-svc-5vqzq [613.181257ms] +Jul 27 01:33:51.432: INFO: Created: latency-svc-lc4k9 +Jul 27 01:33:51.452: INFO: Got endpoints: latency-svc-lc4k9 [571.862857ms] +Jul 27 01:33:51.453: INFO: Created: latency-svc-c7vsr +Jul 27 01:33:51.467: INFO: Got endpoints: latency-svc-c7vsr [564.309777ms] +Jul 27 01:33:51.486: INFO: Created: latency-svc-rvc54 +Jul 27 01:33:51.499: INFO: Got endpoints: latency-svc-rvc54 [575.693064ms] +Jul 27 01:33:51.515: INFO: Created: latency-svc-56vkj +Jul 27 01:33:51.529: INFO: Got endpoints: latency-svc-56vkj [555.734936ms] +Jul 27 01:33:51.549: INFO: Created: latency-svc-47gcx +Jul 27 01:33:51.554: INFO: Got endpoints: latency-svc-47gcx [549.686333ms] +Jul 27 01:33:51.565: INFO: Created: latency-svc-w4n2g +Jul 27 01:33:51.576: INFO: Got endpoints: latency-svc-w4n2g [551.473498ms] +Jul 27 01:33:51.589: INFO: Created: latency-svc-vm5wj +Jul 27 01:33:51.600: INFO: Got endpoints: latency-svc-vm5wj [540.226671ms] +Jul 27 01:33:51.616: INFO: Created: latency-svc-dwt9j +Jul 27 01:33:51.626: INFO: Got endpoints: latency-svc-dwt9j [518.263909ms] +Jul 27 01:33:51.643: INFO: Created: latency-svc-hgbjw +Jul 27 01:33:51.654: INFO: Got endpoints: latency-svc-hgbjw [521.667641ms] +Jul 27 01:33:51.672: INFO: Created: latency-svc-rj9xj +Jul 27 01:33:51.676: INFO: Got endpoints: latency-svc-rj9xj [523.125484ms] +Jul 27 01:33:51.688: INFO: Created: latency-svc-gpf4c +Jul 27 01:33:51.696: INFO: Got endpoints: latency-svc-gpf4c [493.614579ms] +Jul 27 01:33:51.708: INFO: Created: latency-svc-mkr42 +Jul 27 01:33:51.713: INFO: Got endpoints: latency-svc-mkr42 [469.71705ms] +Jul 27 01:33:51.726: INFO: Created: latency-svc-rnczk +Jul 27 01:33:51.744: INFO: Got endpoints: latency-svc-rnczk [439.875962ms] +Jul 27 01:33:51.752: INFO: Created: latency-svc-xnn84 +Jul 27 01:33:51.760: INFO: Got endpoints: latency-svc-xnn84 [396.658011ms] +Jul 27 01:33:51.781: INFO: Created: latency-svc-xkb5z +Jul 27 01:33:51.793: INFO: Got endpoints: latency-svc-xkb5z [382.629268ms] +Jul 27 01:33:51.807: INFO: Created: latency-svc-v5qx4 +Jul 27 01:33:51.817: INFO: Got endpoints: latency-svc-v5qx4 [364.751102ms] +Jul 27 01:33:51.833: INFO: Created: latency-svc-jw9lx +Jul 27 01:33:51.845: INFO: Got endpoints: latency-svc-jw9lx [378.517632ms] +Jul 27 01:33:51.853: INFO: Created: latency-svc-7ln9m +Jul 27 01:33:51.870: INFO: Got endpoints: latency-svc-7ln9m [370.974081ms] 
+Jul 27 01:33:51.886: INFO: Created: latency-svc-s6hbq +Jul 27 01:33:51.896: INFO: Got endpoints: latency-svc-s6hbq [367.586029ms] +Jul 27 01:33:51.916: INFO: Created: latency-svc-7gk4r +Jul 27 01:33:51.924: INFO: Got endpoints: latency-svc-7gk4r [369.348043ms] +Jul 27 01:33:51.944: INFO: Created: latency-svc-sxgdk +Jul 27 01:33:51.952: INFO: Got endpoints: latency-svc-sxgdk [375.759144ms] +Jul 27 01:33:51.973: INFO: Created: latency-svc-bvj6z +Jul 27 01:33:51.984: INFO: Got endpoints: latency-svc-bvj6z [383.509314ms] +Jul 27 01:33:52.000: INFO: Created: latency-svc-hlc4k +Jul 27 01:33:52.012: INFO: Got endpoints: latency-svc-hlc4k [386.142168ms] +Jul 27 01:33:52.032: INFO: Created: latency-svc-wrhsf +Jul 27 01:33:52.039: INFO: Got endpoints: latency-svc-wrhsf [385.567113ms] +Jul 27 01:33:52.057: INFO: Created: latency-svc-6dtnh +Jul 27 01:33:52.070: INFO: Got endpoints: latency-svc-6dtnh [394.278337ms] +Jul 27 01:33:52.080: INFO: Created: latency-svc-nn8lz +Jul 27 01:33:52.088: INFO: Got endpoints: latency-svc-nn8lz [392.178546ms] +Jul 27 01:33:52.110: INFO: Created: latency-svc-9wrq9 +Jul 27 01:33:52.117: INFO: Got endpoints: latency-svc-9wrq9 [404.097549ms] +Jul 27 01:33:52.118: INFO: Latencies: [49.164624ms 51.536738ms 67.092307ms 71.814822ms 79.712722ms 86.94388ms 100.378018ms 101.1614ms 105.701757ms 118.04166ms 119.328598ms 125.358693ms 138.745581ms 139.784715ms 145.593209ms 158.302814ms 162.250689ms 164.321674ms 167.017329ms 180.750466ms 180.819909ms 189.187965ms 193.059735ms 203.871947ms 210.347816ms 213.971199ms 225.169832ms 232.449808ms 237.292803ms 245.999915ms 249.53422ms 258.590945ms 259.50474ms 277.525322ms 285.108539ms 290.874378ms 294.893535ms 296.182309ms 305.9929ms 308.37464ms 309.540418ms 310.054525ms 313.732958ms 313.873863ms 313.931024ms 316.110587ms 316.127879ms 317.296532ms 319.599784ms 320.325415ms 320.645403ms 320.946784ms 322.629891ms 323.124005ms 323.4094ms 323.956104ms 324.265162ms 326.346663ms 326.901445ms 327.967362ms 328.114237ms 329.503144ms 330.383095ms 331.503218ms 331.866714ms 331.887866ms 333.647526ms 334.572435ms 334.943907ms 335.383989ms 335.472952ms 336.66827ms 337.470053ms 337.748775ms 339.260212ms 339.335038ms 340.683374ms 341.982658ms 342.975895ms 343.377976ms 343.66534ms 343.67094ms 343.965209ms 347.956239ms 348.344077ms 351.161162ms 351.739586ms 355.164915ms 355.615223ms 357.066289ms 357.175934ms 357.367273ms 362.816897ms 364.751102ms 366.650632ms 367.586029ms 369.348043ms 370.974081ms 372.396489ms 374.702642ms 375.759144ms 376.568661ms 376.702369ms 376.758129ms 378.258839ms 378.517632ms 378.745824ms 381.008348ms 381.328828ms 382.369621ms 382.629268ms 383.509314ms 383.628956ms 385.295057ms 385.567113ms 386.142168ms 390.445226ms 392.178546ms 394.278337ms 394.331335ms 394.518099ms 396.193317ms 396.658011ms 398.134198ms 398.959358ms 400.021305ms 403.295819ms 404.097549ms 407.939613ms 408.681171ms 410.807787ms 411.105522ms 412.258694ms 412.628174ms 413.557531ms 415.316213ms 416.711876ms 420.485819ms 422.112786ms 423.233987ms 428.93591ms 429.920123ms 432.385022ms 432.869175ms 433.289265ms 434.035996ms 435.440812ms 437.950229ms 439.875962ms 439.890935ms 449.592741ms 454.010295ms 454.04183ms 456.404278ms 460.466805ms 469.71705ms 471.670534ms 484.631135ms 488.573493ms 490.278805ms 493.230234ms 493.614579ms 495.332796ms 496.641036ms 502.076776ms 502.928694ms 508.908391ms 512.074796ms 518.263909ms 518.531069ms 521.667641ms 522.15281ms 522.319045ms 523.125484ms 525.215054ms 529.200821ms 535.925298ms 540.226671ms 541.452754ms 549.686333ms 551.473498ms 
554.340049ms 555.734936ms 564.309777ms 565.282079ms 571.862857ms 575.693064ms 577.777672ms 580.508586ms 581.978742ms 585.496959ms 593.738585ms 613.181257ms 614.751935ms 616.362935ms 616.725651ms 625.793406ms 635.58311ms 650.323601ms 651.819429ms] +Jul 27 01:33:52.118: INFO: 50 %ile: 375.759144ms +Jul 27 01:33:52.118: INFO: 90 %ile: 551.473498ms +Jul 27 01:33:52.118: INFO: 99 %ile: 650.323601ms +Jul 27 01:33:52.118: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency test/e2e/framework/node/init/init.go:32 -Jun 12 20:44:59.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] RuntimeClass +Jul 27 01:33:52.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Service endpoints latency test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] RuntimeClass +[DeferCleanup (Each)] [sig-network] Service endpoints latency dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] RuntimeClass +[DeferCleanup (Each)] [sig-network] Service endpoints latency tear down framework | framework.go:193 -STEP: Destroying namespace "runtimeclass-2546" for this suite. 06/12/23 20:44:59.316 +STEP: Destroying namespace "svc-latency-61" for this suite. 07/27/23 01:33:52.134 ------------------------------ -• [2.199 seconds] -[sig-node] RuntimeClass -test/e2e/common/node/framework.go:23 - should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:129 +• [SLOW TEST] [7.531 seconds] +[sig-network] Service endpoints latency +test/e2e/network/common/framework.go:23 + should not be very high [Conformance] + test/e2e/network/service_latency.go:59 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] RuntimeClass + [BeforeEach] [sig-network] Service endpoints latency set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:44:57.14 - Jun 12 20:44:57.140: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename runtimeclass 06/12/23 20:44:57.142 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:57.201 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:57.209 - [BeforeEach] [sig-node] RuntimeClass + STEP: Creating a kubernetes client 07/27/23 01:33:44.639 + Jul 27 01:33:44.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename svc-latency 07/27/23 01:33:44.64 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:44.679 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:44.688 + [BeforeEach] [sig-network] Service endpoints latency test/e2e/framework/metrics/init/init.go:31 - [It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:129 - Jun 12 20:44:57.264: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-2546 to be scheduled - Jun 12 20:44:57.272: INFO: 1 pods are not scheduled: [runtimeclass-2546/test-runtimeclass-runtimeclass-2546-preconfigured-handler-t4d98(00fb6c0b-6b8a-4485-9533-84e08a55e022)] - [AfterEach] [sig-node] RuntimeClass + [It] should not be very high [Conformance] + test/e2e/network/service_latency.go:59 + Jul 27 01:33:44.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: 
creating replication controller svc-latency-rc in namespace svc-latency-61 07/27/23 01:33:44.698 + W0727 01:33:44.724405 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "svc-latency-rc" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "svc-latency-rc" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "svc-latency-rc" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "svc-latency-rc" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + I0727 01:33:44.724571 20 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-61, replica count: 1 + I0727 01:33:45.776387 20 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + I0727 01:33:46.776848 20 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jul 27 01:33:46.938: INFO: Created: latency-svc-z474h + Jul 27 01:33:46.948: INFO: Got endpoints: latency-svc-z474h [70.660208ms] + Jul 27 01:33:46.988: INFO: Created: latency-svc-c9l85 + Jul 27 01:33:46.998: INFO: Got endpoints: latency-svc-c9l85 [49.164624ms] + Jul 27 01:33:47.009: INFO: Created: latency-svc-pr826 + Jul 27 01:33:47.015: INFO: Got endpoints: latency-svc-pr826 [67.092307ms] + Jul 27 01:33:47.025: INFO: Created: latency-svc-prz7m + Jul 27 01:33:47.036: INFO: Got endpoints: latency-svc-prz7m [86.94388ms] + Jul 27 01:33:47.043: INFO: Created: latency-svc-xc7pw + Jul 27 01:33:47.054: INFO: Got endpoints: latency-svc-xc7pw [105.701757ms] + Jul 27 01:33:47.064: INFO: Created: latency-svc-4b6fv + Jul 27 01:33:47.075: INFO: Got endpoints: latency-svc-4b6fv [125.358693ms] + Jul 27 01:33:47.084: INFO: Created: latency-svc-lxvm6 + Jul 27 01:33:47.095: INFO: Got endpoints: latency-svc-lxvm6 [145.593209ms] + Jul 27 01:33:47.107: INFO: Created: latency-svc-qvjfl + Jul 27 01:33:47.116: INFO: Got endpoints: latency-svc-qvjfl [167.017329ms] + Jul 27 01:33:47.128: INFO: Created: latency-svc-7nvq4 + Jul 27 01:33:47.138: INFO: Got endpoints: latency-svc-7nvq4 [189.187965ms] + Jul 27 01:33:47.153: INFO: Created: latency-svc-cntx7 + Jul 27 01:33:47.163: INFO: Got endpoints: latency-svc-cntx7 [213.971199ms] + Jul 27 01:33:47.174: INFO: Created: latency-svc-67qkt + Jul 27 01:33:47.186: INFO: Got endpoints: latency-svc-67qkt [237.292803ms] + Jul 27 01:33:47.195: INFO: Created: latency-svc-8fvd9 + Jul 27 01:33:47.207: INFO: Got endpoints: latency-svc-8fvd9 [258.590945ms] + Jul 27 01:33:47.215: INFO: Created: latency-svc-nshjf + Jul 27 01:33:47.226: INFO: Got endpoints: latency-svc-nshjf [277.525322ms] + Jul 27 01:33:47.234: INFO: Created: latency-svc-nnd5m + Jul 27 01:33:47.245: INFO: Got endpoints: latency-svc-nnd5m [296.182309ms] + Jul 27 01:33:47.253: INFO: Created: latency-svc-rcd98 + Jul 27 01:33:47.263: INFO: Got endpoints: latency-svc-rcd98 [313.873863ms] + Jul 27 01:33:47.275: INFO: Created: latency-svc-dfm2v + Jul 27 01:33:47.284: INFO: Got endpoints: latency-svc-dfm2v [334.943907ms] + Jul 27 01:33:47.297: INFO: Created: latency-svc-p4pmn + Jul 27 01:33:47.306: INFO: Got endpoints: latency-svc-p4pmn [308.37464ms] + Jul 27 01:33:47.317: INFO: Created: latency-svc-dwmrl + Jul 27 01:33:47.325: INFO: Got endpoints: latency-svc-dwmrl [310.054525ms] + Jul 27 01:33:47.336: 
INFO: Created: latency-svc-th28s + Jul 27 01:33:47.345: INFO: Got endpoints: latency-svc-th28s [309.540418ms] + Jul 27 01:33:47.361: INFO: Created: latency-svc-r7f95 + Jul 27 01:33:47.368: INFO: Got endpoints: latency-svc-r7f95 [313.732958ms] + Jul 27 01:33:47.381: INFO: Created: latency-svc-hhqlg + Jul 27 01:33:47.391: INFO: Got endpoints: latency-svc-hhqlg [316.127879ms] + Jul 27 01:33:47.403: INFO: Created: latency-svc-p6d5k + Jul 27 01:33:47.414: INFO: Got endpoints: latency-svc-p6d5k [319.599784ms] + Jul 27 01:33:47.731: INFO: Created: latency-svc-lzsrp + Jul 27 01:33:47.731: INFO: Created: latency-svc-tbkvx + Jul 27 01:33:47.731: INFO: Created: latency-svc-qrq95 + Jul 27 01:33:47.732: INFO: Created: latency-svc-wgdkp + Jul 27 01:33:47.732: INFO: Created: latency-svc-44gc7 + Jul 27 01:33:47.732: INFO: Created: latency-svc-2tvdj + Jul 27 01:33:47.732: INFO: Created: latency-svc-f758b + Jul 27 01:33:47.732: INFO: Created: latency-svc-ncfqk + Jul 27 01:33:47.732: INFO: Created: latency-svc-9jlzm + Jul 27 01:33:47.732: INFO: Created: latency-svc-2wfsz + Jul 27 01:33:47.732: INFO: Created: latency-svc-vbkj2 + Jul 27 01:33:47.733: INFO: Created: latency-svc-glfv2 + Jul 27 01:33:47.733: INFO: Created: latency-svc-t9pqh + Jul 27 01:33:47.733: INFO: Created: latency-svc-pvnt2 + Jul 27 01:33:47.733: INFO: Created: latency-svc-pmsmx + Jul 27 01:33:47.738: INFO: Got endpoints: latency-svc-9jlzm [412.628174ms] + Jul 27 01:33:47.739: INFO: Got endpoints: latency-svc-pvnt2 [432.869175ms] + Jul 27 01:33:47.740: INFO: Got endpoints: latency-svc-ncfqk [394.518099ms] + Jul 27 01:33:47.740: INFO: Got endpoints: latency-svc-tbkvx [456.404278ms] + Jul 27 01:33:47.740: INFO: Got endpoints: latency-svc-glfv2 [372.396489ms] + Jul 27 01:33:47.766: INFO: Got endpoints: latency-svc-pmsmx [351.739586ms] + Jul 27 01:33:47.766: INFO: Got endpoints: latency-svc-lzsrp [502.928694ms] + Jul 27 01:33:47.768: INFO: Got endpoints: latency-svc-f758b [522.319045ms] + Jul 27 01:33:47.768: INFO: Got endpoints: latency-svc-t9pqh [541.452754ms] + Jul 27 01:33:47.768: INFO: Got endpoints: latency-svc-44gc7 [651.819429ms] + Jul 27 01:33:47.772: INFO: Got endpoints: latency-svc-qrq95 [381.328828ms] + Jul 27 01:33:47.772: INFO: Got endpoints: latency-svc-2tvdj [565.282079ms] + Jul 27 01:33:47.780: INFO: Got endpoints: latency-svc-2wfsz [593.738585ms] + Jul 27 01:33:47.789: INFO: Got endpoints: latency-svc-vbkj2 [650.323601ms] + Jul 27 01:33:47.789: INFO: Got endpoints: latency-svc-wgdkp [625.793406ms] + Jul 27 01:33:47.805: INFO: Created: latency-svc-r97vw + Jul 27 01:33:47.810: INFO: Got endpoints: latency-svc-r97vw [71.814822ms] + Jul 27 01:33:47.826: INFO: Created: latency-svc-lpdjs + Jul 27 01:33:47.839: INFO: Got endpoints: latency-svc-lpdjs [100.378018ms] + Jul 27 01:33:47.849: INFO: Created: latency-svc-g957f + Jul 27 01:33:47.858: INFO: Got endpoints: latency-svc-g957f [118.04166ms] + Jul 27 01:33:47.869: INFO: Created: latency-svc-r6w5j + Jul 27 01:33:47.880: INFO: Got endpoints: latency-svc-r6w5j [139.784715ms] + Jul 27 01:33:47.892: INFO: Created: latency-svc-2kcq6 + Jul 27 01:33:47.902: INFO: Got endpoints: latency-svc-2kcq6 [162.250689ms] + Jul 27 01:33:47.930: INFO: Created: latency-svc-tbjhx + Jul 27 01:33:47.930: INFO: Got endpoints: latency-svc-tbjhx [164.321674ms] + Jul 27 01:33:47.944: INFO: Created: latency-svc-bkql4 + Jul 27 01:33:47.947: INFO: Got endpoints: latency-svc-bkql4 [180.819909ms] + Jul 27 01:33:47.961: INFO: Created: latency-svc-mkb4l + Jul 27 01:33:47.972: INFO: Got endpoints: latency-svc-mkb4l 
[203.871947ms] + Jul 27 01:33:47.982: INFO: Created: latency-svc-hbfdw + Jul 27 01:33:47.993: INFO: Got endpoints: latency-svc-hbfdw [225.169832ms] + Jul 27 01:33:48.003: INFO: Created: latency-svc-vr7cr + Jul 27 01:33:48.014: INFO: Got endpoints: latency-svc-vr7cr [245.999915ms] + Jul 27 01:33:48.024: INFO: Created: latency-svc-szt72 + Jul 27 01:33:48.067: INFO: Got endpoints: latency-svc-szt72 [294.893535ms] + Jul 27 01:33:48.069: INFO: Created: latency-svc-tzm8s + Jul 27 01:33:48.110: INFO: Got endpoints: latency-svc-tzm8s [337.470053ms] + Jul 27 01:33:48.113: INFO: Created: latency-svc-4vm85 + Jul 27 01:33:48.119: INFO: Got endpoints: latency-svc-4vm85 [339.260212ms] + Jul 27 01:33:48.133: INFO: Created: latency-svc-sgwbr + Jul 27 01:33:48.140: INFO: Got endpoints: latency-svc-sgwbr [351.161162ms] + Jul 27 01:33:48.405: INFO: Created: latency-svc-k5vsr + Jul 27 01:33:48.418: INFO: Created: latency-svc-gwzbn + Jul 27 01:33:48.419: INFO: Created: latency-svc-s4ppk + Jul 27 01:33:48.419: INFO: Created: latency-svc-v724f + Jul 27 01:33:48.419: INFO: Created: latency-svc-94dk6 + Jul 27 01:33:48.420: INFO: Created: latency-svc-x9fc5 + Jul 27 01:33:48.420: INFO: Created: latency-svc-hskdr + Jul 27 01:33:48.420: INFO: Created: latency-svc-5dqf9 + Jul 27 01:33:48.420: INFO: Created: latency-svc-vk7g6 + Jul 27 01:33:48.420: INFO: Created: latency-svc-d8mnw + Jul 27 01:33:48.420: INFO: Created: latency-svc-nd2pr + Jul 27 01:33:48.420: INFO: Created: latency-svc-p6zts + Jul 27 01:33:48.420: INFO: Created: latency-svc-mjtdv + Jul 27 01:33:48.423: INFO: Created: latency-svc-rbwls + Jul 27 01:33:48.424: INFO: Created: latency-svc-v896d + Jul 27 01:33:48.424: INFO: Got endpoints: latency-svc-v724f [635.58311ms] + Jul 27 01:33:48.425: INFO: Got endpoints: latency-svc-k5vsr [614.751935ms] + Jul 27 01:33:48.425: INFO: Got endpoints: latency-svc-gwzbn [522.15281ms] + Jul 27 01:33:48.425: INFO: Got endpoints: latency-svc-p6zts [285.108539ms] + Jul 27 01:33:48.425: INFO: Got endpoints: latency-svc-v896d [411.105522ms] + Jul 27 01:33:48.431: INFO: Got endpoints: latency-svc-s4ppk [437.950229ms] + Jul 27 01:33:48.433: INFO: Got endpoints: latency-svc-hskdr [313.931024ms] + Jul 27 01:33:48.441: INFO: Got endpoints: latency-svc-5dqf9 [331.503218ms] + Jul 27 01:33:48.443: INFO: Got endpoints: latency-svc-mjtdv [585.496959ms] + Jul 27 01:33:48.444: INFO: Got endpoints: latency-svc-vk7g6 [496.641036ms] + Jul 27 01:33:48.444: INFO: Got endpoints: latency-svc-nd2pr [376.568661ms] + Jul 27 01:33:48.456: INFO: Got endpoints: latency-svc-x9fc5 [525.215054ms] + Jul 27 01:33:48.456: INFO: Got endpoints: latency-svc-94dk6 [616.362935ms] + Jul 27 01:33:48.462: INFO: Got endpoints: latency-svc-d8mnw [581.978742ms] + Jul 27 01:33:48.462: INFO: Got endpoints: latency-svc-rbwls [490.278805ms] + Jul 27 01:33:48.468: INFO: Created: latency-svc-rm8d4 + Jul 27 01:33:48.476: INFO: Got endpoints: latency-svc-rm8d4 [51.536738ms] + Jul 27 01:33:48.492: INFO: Created: latency-svc-nltqb + Jul 27 01:33:48.505: INFO: Got endpoints: latency-svc-nltqb [79.712722ms] + Jul 27 01:33:48.516: INFO: Created: latency-svc-gggtg + Jul 27 01:33:48.526: INFO: Got endpoints: latency-svc-gggtg [101.1614ms] + Jul 27 01:33:48.534: INFO: Created: latency-svc-pv7px + Jul 27 01:33:48.544: INFO: Got endpoints: latency-svc-pv7px [119.328598ms] + Jul 27 01:33:48.556: INFO: Created: latency-svc-fh6tm + Jul 27 01:33:48.564: INFO: Got endpoints: latency-svc-fh6tm [138.745581ms] + Jul 27 01:33:48.578: INFO: Created: latency-svc-76685 + Jul 27 01:33:48.589: INFO: 
Got endpoints: latency-svc-76685 [158.302814ms] + Jul 27 01:33:48.598: INFO: Created: latency-svc-9hvl5 + Jul 27 01:33:48.614: INFO: Got endpoints: latency-svc-9hvl5 [180.750466ms] + Jul 27 01:33:48.622: INFO: Created: latency-svc-lxqrp + Jul 27 01:33:48.634: INFO: Got endpoints: latency-svc-lxqrp [193.059735ms] + Jul 27 01:33:48.643: INFO: Created: latency-svc-kmb9t + Jul 27 01:33:48.654: INFO: Got endpoints: latency-svc-kmb9t [210.347816ms] + Jul 27 01:33:48.667: INFO: Created: latency-svc-vsslp + Jul 27 01:33:48.676: INFO: Got endpoints: latency-svc-vsslp [232.449808ms] + Jul 27 01:33:48.683: INFO: Created: latency-svc-b6jpc + Jul 27 01:33:48.693: INFO: Got endpoints: latency-svc-b6jpc [249.53422ms] + Jul 27 01:33:48.707: INFO: Created: latency-svc-5df8m + Jul 27 01:33:48.715: INFO: Got endpoints: latency-svc-5df8m [259.50474ms] + Jul 27 01:33:48.736: INFO: Created: latency-svc-pszn8 + Jul 27 01:33:48.746: INFO: Got endpoints: latency-svc-pszn8 [290.874378ms] + Jul 27 01:33:48.759: INFO: Created: latency-svc-6qw6l + Jul 27 01:33:48.768: INFO: Got endpoints: latency-svc-6qw6l [305.9929ms] + Jul 27 01:33:48.781: INFO: Created: latency-svc-j2rvw + Jul 27 01:33:48.792: INFO: Got endpoints: latency-svc-j2rvw [329.503144ms] + Jul 27 01:33:48.800: INFO: Created: latency-svc-45jmq + Jul 27 01:33:48.810: INFO: Got endpoints: latency-svc-45jmq [333.647526ms] + Jul 27 01:33:48.833: INFO: Created: latency-svc-9plk4 + Jul 27 01:33:48.839: INFO: Got endpoints: latency-svc-9plk4 [334.572435ms] + Jul 27 01:33:48.852: INFO: Created: latency-svc-pzccs + Jul 27 01:33:48.865: INFO: Got endpoints: latency-svc-pzccs [339.335038ms] + Jul 27 01:33:48.873: INFO: Created: latency-svc-4vjcl + Jul 27 01:33:48.884: INFO: Got endpoints: latency-svc-4vjcl [340.683374ms] + Jul 27 01:33:48.895: INFO: Created: latency-svc-8zlqs + Jul 27 01:33:48.908: INFO: Got endpoints: latency-svc-8zlqs [343.965209ms] + Jul 27 01:33:48.918: INFO: Created: latency-svc-lldmr + Jul 27 01:33:48.926: INFO: Got endpoints: latency-svc-lldmr [336.66827ms] + Jul 27 01:33:48.932: INFO: Created: latency-svc-vv2gr + Jul 27 01:33:48.942: INFO: Got endpoints: latency-svc-vv2gr [328.114237ms] + Jul 27 01:33:48.955: INFO: Created: latency-svc-vs9pv + Jul 27 01:33:48.970: INFO: Got endpoints: latency-svc-vs9pv [335.383989ms] + Jul 27 01:33:48.978: INFO: Created: latency-svc-tkw7j + Jul 27 01:33:48.986: INFO: Got endpoints: latency-svc-tkw7j [331.887866ms] + Jul 27 01:33:48.998: INFO: Created: latency-svc-r88n7 + Jul 27 01:33:49.008: INFO: Got endpoints: latency-svc-r88n7 [331.866714ms] + Jul 27 01:33:49.028: INFO: Created: latency-svc-5dj55 + Jul 27 01:33:49.037: INFO: Got endpoints: latency-svc-5dj55 [343.67094ms] + Jul 27 01:33:49.043: INFO: Created: latency-svc-bcr98 + Jul 27 01:33:49.082: INFO: Got endpoints: latency-svc-bcr98 [366.650632ms] + Jul 27 01:33:49.094: INFO: Created: latency-svc-w6ss7 + Jul 27 01:33:49.104: INFO: Got endpoints: latency-svc-w6ss7 [357.367273ms] + Jul 27 01:33:49.116: INFO: Created: latency-svc-w2crd + Jul 27 01:33:49.125: INFO: Got endpoints: latency-svc-w2crd [357.175934ms] + Jul 27 01:33:49.138: INFO: Created: latency-svc-xbh56 + Jul 27 01:33:49.171: INFO: Got endpoints: latency-svc-xbh56 [378.745824ms] + Jul 27 01:33:49.182: INFO: Created: latency-svc-7xbwc + Jul 27 01:33:49.200: INFO: Got endpoints: latency-svc-7xbwc [390.445226ms] + Jul 27 01:33:49.207: INFO: Created: latency-svc-tnj5v + Jul 27 01:33:49.216: INFO: Got endpoints: latency-svc-tnj5v [376.758129ms] + Jul 27 01:33:49.232: INFO: Created: 
latency-svc-4t2kn + Jul 27 01:33:49.240: INFO: Got endpoints: latency-svc-4t2kn [374.702642ms] + Jul 27 01:33:49.266: INFO: Created: latency-svc-8wdjc + Jul 27 01:33:49.270: INFO: Got endpoints: latency-svc-8wdjc [385.295057ms] + Jul 27 01:33:49.284: INFO: Created: latency-svc-rtmn8 + Jul 27 01:33:49.304: INFO: Got endpoints: latency-svc-rtmn8 [396.193317ms] + Jul 27 01:33:49.308: INFO: Created: latency-svc-f6gpn + Jul 27 01:33:49.324: INFO: Got endpoints: latency-svc-f6gpn [398.134198ms] + Jul 27 01:33:49.333: INFO: Created: latency-svc-qc776 + Jul 27 01:33:49.355: INFO: Got endpoints: latency-svc-qc776 [413.557531ms] + Jul 27 01:33:49.374: INFO: Created: latency-svc-rqc2h + Jul 27 01:33:49.381: INFO: Got endpoints: latency-svc-rqc2h [410.807787ms] + Jul 27 01:33:49.399: INFO: Created: latency-svc-x5dck + Jul 27 01:33:49.408: INFO: Got endpoints: latency-svc-x5dck [422.112786ms] + Jul 27 01:33:49.426: INFO: Created: latency-svc-5zxqq + Jul 27 01:33:49.431: INFO: Got endpoints: latency-svc-5zxqq [423.233987ms] + Jul 27 01:33:49.447: INFO: Created: latency-svc-5qlv7 + Jul 27 01:33:49.458: INFO: Got endpoints: latency-svc-5qlv7 [420.485819ms] + Jul 27 01:33:49.468: INFO: Created: latency-svc-pp8tx + Jul 27 01:33:49.476: INFO: Got endpoints: latency-svc-pp8tx [394.331335ms] + Jul 27 01:33:49.494: INFO: Created: latency-svc-j5z6g + Jul 27 01:33:49.504: INFO: Got endpoints: latency-svc-j5z6g [400.021305ms] + Jul 27 01:33:49.517: INFO: Created: latency-svc-79mp8 + Jul 27 01:33:49.529: INFO: Got endpoints: latency-svc-79mp8 [403.295819ms] + Jul 27 01:33:49.535: INFO: Created: latency-svc-m7jsq + Jul 27 01:33:49.547: INFO: Got endpoints: latency-svc-m7jsq [376.702369ms] + Jul 27 01:33:49.556: INFO: Created: latency-svc-hrd88 + Jul 27 01:33:49.564: INFO: Got endpoints: latency-svc-hrd88 [347.956239ms] + Jul 27 01:33:49.573: INFO: Created: latency-svc-sm828 + Jul 27 01:33:49.583: INFO: Got endpoints: latency-svc-sm828 [382.369621ms] + Jul 27 01:33:49.596: INFO: Created: latency-svc-22jr9 + Jul 27 01:33:49.618: INFO: Got endpoints: latency-svc-22jr9 [378.258839ms] + Jul 27 01:33:49.623: INFO: Created: latency-svc-jcm6r + Jul 27 01:33:49.632: INFO: Got endpoints: latency-svc-jcm6r [362.816897ms] + Jul 27 01:33:49.645: INFO: Created: latency-svc-dtlbv + Jul 27 01:33:49.661: INFO: Got endpoints: latency-svc-dtlbv [357.066289ms] + Jul 27 01:33:49.668: INFO: Created: latency-svc-bmm8h + Jul 27 01:33:49.680: INFO: Got endpoints: latency-svc-bmm8h [355.615223ms] + Jul 27 01:33:49.690: INFO: Created: latency-svc-lg4gd + Jul 27 01:33:49.699: INFO: Got endpoints: latency-svc-lg4gd [343.66534ms] + Jul 27 01:33:49.714: INFO: Created: latency-svc-srglq + Jul 27 01:33:49.724: INFO: Got endpoints: latency-svc-srglq [343.377976ms] + Jul 27 01:33:49.735: INFO: Created: latency-svc-ntscw + Jul 27 01:33:49.751: INFO: Got endpoints: latency-svc-ntscw [342.975895ms] + Jul 27 01:33:49.753: INFO: Created: latency-svc-tfqs4 + Jul 27 01:33:49.762: INFO: Got endpoints: latency-svc-tfqs4 [330.383095ms] + Jul 27 01:33:49.773: INFO: Created: latency-svc-wl6d7 + Jul 27 01:33:49.780: INFO: Got endpoints: latency-svc-wl6d7 [322.629891ms] + Jul 27 01:33:49.796: INFO: Created: latency-svc-r25k2 + Jul 27 01:33:49.804: INFO: Got endpoints: latency-svc-r25k2 [327.967362ms] + Jul 27 01:33:49.817: INFO: Created: latency-svc-jcw6f + Jul 27 01:33:49.828: INFO: Got endpoints: latency-svc-jcw6f [324.265162ms] + Jul 27 01:33:49.838: INFO: Created: latency-svc-9kkxc + Jul 27 01:33:49.849: INFO: Got endpoints: latency-svc-9kkxc [320.325415ms] + 
Jul 27 01:33:49.861: INFO: Created: latency-svc-fv6ct + Jul 27 01:33:49.868: INFO: Got endpoints: latency-svc-fv6ct [320.645403ms] + Jul 27 01:33:49.880: INFO: Created: latency-svc-6cpvs + Jul 27 01:33:49.891: INFO: Got endpoints: latency-svc-6cpvs [326.901445ms] + Jul 27 01:33:49.901: INFO: Created: latency-svc-4gkjz + Jul 27 01:33:49.909: INFO: Got endpoints: latency-svc-4gkjz [326.346663ms] + Jul 27 01:33:49.926: INFO: Created: latency-svc-n8k45 + Jul 27 01:33:49.934: INFO: Got endpoints: latency-svc-n8k45 [316.110587ms] + Jul 27 01:33:49.945: INFO: Created: latency-svc-hmp6w + Jul 27 01:33:49.956: INFO: Got endpoints: latency-svc-hmp6w [323.4094ms] + Jul 27 01:33:49.966: INFO: Created: latency-svc-57jbk + Jul 27 01:33:49.978: INFO: Got endpoints: latency-svc-57jbk [317.296532ms] + Jul 27 01:33:49.992: INFO: Created: latency-svc-8vwkr + Jul 27 01:33:50.001: INFO: Got endpoints: latency-svc-8vwkr [320.946784ms] + Jul 27 01:33:50.021: INFO: Created: latency-svc-tvs2g + Jul 27 01:33:50.037: INFO: Got endpoints: latency-svc-tvs2g [337.748775ms] + Jul 27 01:33:50.033: INFO: Created: latency-svc-r95zd + Jul 27 01:33:50.048: INFO: Got endpoints: latency-svc-r95zd [323.956104ms] + Jul 27 01:33:50.054: INFO: Created: latency-svc-kg42m + Jul 27 01:33:50.074: INFO: Got endpoints: latency-svc-kg42m [323.124005ms] + Jul 27 01:33:50.091: INFO: Created: latency-svc-xzcmt + Jul 27 01:33:50.097: INFO: Got endpoints: latency-svc-xzcmt [335.472952ms] + Jul 27 01:33:50.113: INFO: Created: latency-svc-lwt4l + Jul 27 01:33:50.129: INFO: Got endpoints: latency-svc-lwt4l [348.344077ms] + Jul 27 01:33:50.135: INFO: Created: latency-svc-sj2ts + Jul 27 01:33:50.146: INFO: Got endpoints: latency-svc-sj2ts [341.982658ms] + Jul 27 01:33:50.163: INFO: Created: latency-svc-sfsld + Jul 27 01:33:50.183: INFO: Got endpoints: latency-svc-sfsld [355.164915ms] + Jul 27 01:33:50.216: INFO: Created: latency-svc-j28hq + Jul 27 01:33:50.233: INFO: Got endpoints: latency-svc-j28hq [383.628956ms] + Jul 27 01:33:50.249: INFO: Created: latency-svc-tpvh9 + Jul 27 01:33:50.249: INFO: Got endpoints: latency-svc-tpvh9 [381.008348ms] + Jul 27 01:33:50.278: INFO: Created: latency-svc-6rk7w + Jul 27 01:33:50.290: INFO: Got endpoints: latency-svc-6rk7w [398.959358ms] + Jul 27 01:33:50.306: INFO: Created: latency-svc-trdrl + Jul 27 01:33:50.317: INFO: Got endpoints: latency-svc-trdrl [407.939613ms] + Jul 27 01:33:50.332: INFO: Created: latency-svc-kf7dv + Jul 27 01:33:50.343: INFO: Got endpoints: latency-svc-kf7dv [408.681171ms] + Jul 27 01:33:50.357: INFO: Created: latency-svc-wjgk4 + Jul 27 01:33:50.368: INFO: Got endpoints: latency-svc-wjgk4 [412.258694ms] + Jul 27 01:33:50.403: INFO: Created: latency-svc-dq4qb + Jul 27 01:33:50.414: INFO: Got endpoints: latency-svc-dq4qb [435.440812ms] + Jul 27 01:33:50.429: INFO: Created: latency-svc-qpc2l + Jul 27 01:33:50.430: INFO: Got endpoints: latency-svc-qpc2l [428.93591ms] + Jul 27 01:33:50.455: INFO: Created: latency-svc-6sl2j + Jul 27 01:33:50.471: INFO: Got endpoints: latency-svc-6sl2j [434.035996ms] + Jul 27 01:33:50.484: INFO: Created: latency-svc-s48dh + Jul 27 01:33:50.511: INFO: Created: latency-svc-6rfd6 + Jul 27 01:33:50.520: INFO: Got endpoints: latency-svc-s48dh [471.670534ms] + Jul 27 01:33:50.529: INFO: Got endpoints: latency-svc-6rfd6 [432.385022ms] + Jul 27 01:33:50.538: INFO: Created: latency-svc-cffsk + Jul 27 01:33:50.551: INFO: Got endpoints: latency-svc-cffsk [454.010295ms] + Jul 27 01:33:50.566: INFO: Created: latency-svc-vqvtv + Jul 27 01:33:50.584: INFO: Created: 
latency-svc-ffdrc + Jul 27 01:33:50.589: INFO: Got endpoints: latency-svc-vqvtv [460.466805ms] + Jul 27 01:33:50.596: INFO: Got endpoints: latency-svc-ffdrc [449.592741ms] + Jul 27 01:33:50.610: INFO: Created: latency-svc-h8mkv + Jul 27 01:33:50.623: INFO: Got endpoints: latency-svc-h8mkv [439.890935ms] + Jul 27 01:33:50.634: INFO: Created: latency-svc-mfls5 + Jul 27 01:33:50.648: INFO: Got endpoints: latency-svc-mfls5 [415.316213ms] + Jul 27 01:33:50.654: INFO: Created: latency-svc-q2js9 + Jul 27 01:33:50.666: INFO: Got endpoints: latency-svc-q2js9 [416.711876ms] + Jul 27 01:33:50.694: INFO: Created: latency-svc-zgdsc + Jul 27 01:33:50.723: INFO: Got endpoints: latency-svc-zgdsc [433.289265ms] + Jul 27 01:33:50.741: INFO: Created: latency-svc-bdbds + Jul 27 01:33:50.747: INFO: Got endpoints: latency-svc-bdbds [429.920123ms] + Jul 27 01:33:50.786: INFO: Created: latency-svc-5kqsd + Jul 27 01:33:50.797: INFO: Got endpoints: latency-svc-5kqsd [454.04183ms] + Jul 27 01:33:50.870: INFO: Created: latency-svc-rcc2d + Jul 27 01:33:50.880: INFO: Got endpoints: latency-svc-rcc2d [512.074796ms] + Jul 27 01:33:50.890: INFO: Created: latency-svc-5656w + Jul 27 01:33:50.902: INFO: Got endpoints: latency-svc-5656w [488.573493ms] + Jul 27 01:33:50.912: INFO: Created: latency-svc-hr47t + Jul 27 01:33:50.923: INFO: Got endpoints: latency-svc-hr47t [493.230234ms] + Jul 27 01:33:50.969: INFO: Created: latency-svc-gpdzk + Jul 27 01:33:50.973: INFO: Got endpoints: latency-svc-gpdzk [502.076776ms] + Jul 27 01:33:50.993: INFO: Created: latency-svc-4smhp + Jul 27 01:33:51.005: INFO: Got endpoints: latency-svc-4smhp [484.631135ms] + Jul 27 01:33:51.017: INFO: Created: latency-svc-mdght + Jul 27 01:33:51.025: INFO: Got endpoints: latency-svc-mdght [495.332796ms] + Jul 27 01:33:51.054: INFO: Created: latency-svc-997w7 + Jul 27 01:33:51.060: INFO: Got endpoints: latency-svc-997w7 [508.908391ms] + Jul 27 01:33:51.074: INFO: Created: latency-svc-k6mxt + Jul 27 01:33:51.108: INFO: Got endpoints: latency-svc-k6mxt [518.531069ms] + Jul 27 01:33:51.115: INFO: Created: latency-svc-hcgr2 + Jul 27 01:33:51.132: INFO: Got endpoints: latency-svc-hcgr2 [535.925298ms] + Jul 27 01:33:51.142: INFO: Created: latency-svc-rs6pl + Jul 27 01:33:51.152: INFO: Got endpoints: latency-svc-rs6pl [529.200821ms] + Jul 27 01:33:51.192: INFO: Created: latency-svc-dz8js + Jul 27 01:33:51.202: INFO: Got endpoints: latency-svc-dz8js [554.340049ms] + Jul 27 01:33:51.231: INFO: Created: latency-svc-l6knc + Jul 27 01:33:51.244: INFO: Got endpoints: latency-svc-l6knc [577.777672ms] + Jul 27 01:33:51.295: INFO: Created: latency-svc-wj7fb + Jul 27 01:33:51.304: INFO: Got endpoints: latency-svc-wj7fb [580.508586ms] + Jul 27 01:33:51.324: INFO: Created: latency-svc-5rlnd + Jul 27 01:33:51.364: INFO: Got endpoints: latency-svc-5rlnd [616.725651ms] + Jul 27 01:33:51.398: INFO: Created: latency-svc-5vqzq + Jul 27 01:33:51.410: INFO: Got endpoints: latency-svc-5vqzq [613.181257ms] + Jul 27 01:33:51.432: INFO: Created: latency-svc-lc4k9 + Jul 27 01:33:51.452: INFO: Got endpoints: latency-svc-lc4k9 [571.862857ms] + Jul 27 01:33:51.453: INFO: Created: latency-svc-c7vsr + Jul 27 01:33:51.467: INFO: Got endpoints: latency-svc-c7vsr [564.309777ms] + Jul 27 01:33:51.486: INFO: Created: latency-svc-rvc54 + Jul 27 01:33:51.499: INFO: Got endpoints: latency-svc-rvc54 [575.693064ms] + Jul 27 01:33:51.515: INFO: Created: latency-svc-56vkj + Jul 27 01:33:51.529: INFO: Got endpoints: latency-svc-56vkj [555.734936ms] + Jul 27 01:33:51.549: INFO: Created: latency-svc-47gcx + 
Jul 27 01:33:51.554: INFO: Got endpoints: latency-svc-47gcx [549.686333ms] + Jul 27 01:33:51.565: INFO: Created: latency-svc-w4n2g + Jul 27 01:33:51.576: INFO: Got endpoints: latency-svc-w4n2g [551.473498ms] + Jul 27 01:33:51.589: INFO: Created: latency-svc-vm5wj + Jul 27 01:33:51.600: INFO: Got endpoints: latency-svc-vm5wj [540.226671ms] + Jul 27 01:33:51.616: INFO: Created: latency-svc-dwt9j + Jul 27 01:33:51.626: INFO: Got endpoints: latency-svc-dwt9j [518.263909ms] + Jul 27 01:33:51.643: INFO: Created: latency-svc-hgbjw + Jul 27 01:33:51.654: INFO: Got endpoints: latency-svc-hgbjw [521.667641ms] + Jul 27 01:33:51.672: INFO: Created: latency-svc-rj9xj + Jul 27 01:33:51.676: INFO: Got endpoints: latency-svc-rj9xj [523.125484ms] + Jul 27 01:33:51.688: INFO: Created: latency-svc-gpf4c + Jul 27 01:33:51.696: INFO: Got endpoints: latency-svc-gpf4c [493.614579ms] + Jul 27 01:33:51.708: INFO: Created: latency-svc-mkr42 + Jul 27 01:33:51.713: INFO: Got endpoints: latency-svc-mkr42 [469.71705ms] + Jul 27 01:33:51.726: INFO: Created: latency-svc-rnczk + Jul 27 01:33:51.744: INFO: Got endpoints: latency-svc-rnczk [439.875962ms] + Jul 27 01:33:51.752: INFO: Created: latency-svc-xnn84 + Jul 27 01:33:51.760: INFO: Got endpoints: latency-svc-xnn84 [396.658011ms] + Jul 27 01:33:51.781: INFO: Created: latency-svc-xkb5z + Jul 27 01:33:51.793: INFO: Got endpoints: latency-svc-xkb5z [382.629268ms] + Jul 27 01:33:51.807: INFO: Created: latency-svc-v5qx4 + Jul 27 01:33:51.817: INFO: Got endpoints: latency-svc-v5qx4 [364.751102ms] + Jul 27 01:33:51.833: INFO: Created: latency-svc-jw9lx + Jul 27 01:33:51.845: INFO: Got endpoints: latency-svc-jw9lx [378.517632ms] + Jul 27 01:33:51.853: INFO: Created: latency-svc-7ln9m + Jul 27 01:33:51.870: INFO: Got endpoints: latency-svc-7ln9m [370.974081ms] + Jul 27 01:33:51.886: INFO: Created: latency-svc-s6hbq + Jul 27 01:33:51.896: INFO: Got endpoints: latency-svc-s6hbq [367.586029ms] + Jul 27 01:33:51.916: INFO: Created: latency-svc-7gk4r + Jul 27 01:33:51.924: INFO: Got endpoints: latency-svc-7gk4r [369.348043ms] + Jul 27 01:33:51.944: INFO: Created: latency-svc-sxgdk + Jul 27 01:33:51.952: INFO: Got endpoints: latency-svc-sxgdk [375.759144ms] + Jul 27 01:33:51.973: INFO: Created: latency-svc-bvj6z + Jul 27 01:33:51.984: INFO: Got endpoints: latency-svc-bvj6z [383.509314ms] + Jul 27 01:33:52.000: INFO: Created: latency-svc-hlc4k + Jul 27 01:33:52.012: INFO: Got endpoints: latency-svc-hlc4k [386.142168ms] + Jul 27 01:33:52.032: INFO: Created: latency-svc-wrhsf + Jul 27 01:33:52.039: INFO: Got endpoints: latency-svc-wrhsf [385.567113ms] + Jul 27 01:33:52.057: INFO: Created: latency-svc-6dtnh + Jul 27 01:33:52.070: INFO: Got endpoints: latency-svc-6dtnh [394.278337ms] + Jul 27 01:33:52.080: INFO: Created: latency-svc-nn8lz + Jul 27 01:33:52.088: INFO: Got endpoints: latency-svc-nn8lz [392.178546ms] + Jul 27 01:33:52.110: INFO: Created: latency-svc-9wrq9 + Jul 27 01:33:52.117: INFO: Got endpoints: latency-svc-9wrq9 [404.097549ms] + Jul 27 01:33:52.118: INFO: Latencies: [49.164624ms 51.536738ms 67.092307ms 71.814822ms 79.712722ms 86.94388ms 100.378018ms 101.1614ms 105.701757ms 118.04166ms 119.328598ms 125.358693ms 138.745581ms 139.784715ms 145.593209ms 158.302814ms 162.250689ms 164.321674ms 167.017329ms 180.750466ms 180.819909ms 189.187965ms 193.059735ms 203.871947ms 210.347816ms 213.971199ms 225.169832ms 232.449808ms 237.292803ms 245.999915ms 249.53422ms 258.590945ms 259.50474ms 277.525322ms 285.108539ms 290.874378ms 294.893535ms 296.182309ms 305.9929ms 308.37464ms 
309.540418ms 310.054525ms 313.732958ms 313.873863ms 313.931024ms 316.110587ms 316.127879ms 317.296532ms 319.599784ms 320.325415ms 320.645403ms 320.946784ms 322.629891ms 323.124005ms 323.4094ms 323.956104ms 324.265162ms 326.346663ms 326.901445ms 327.967362ms 328.114237ms 329.503144ms 330.383095ms 331.503218ms 331.866714ms 331.887866ms 333.647526ms 334.572435ms 334.943907ms 335.383989ms 335.472952ms 336.66827ms 337.470053ms 337.748775ms 339.260212ms 339.335038ms 340.683374ms 341.982658ms 342.975895ms 343.377976ms 343.66534ms 343.67094ms 343.965209ms 347.956239ms 348.344077ms 351.161162ms 351.739586ms 355.164915ms 355.615223ms 357.066289ms 357.175934ms 357.367273ms 362.816897ms 364.751102ms 366.650632ms 367.586029ms 369.348043ms 370.974081ms 372.396489ms 374.702642ms 375.759144ms 376.568661ms 376.702369ms 376.758129ms 378.258839ms 378.517632ms 378.745824ms 381.008348ms 381.328828ms 382.369621ms 382.629268ms 383.509314ms 383.628956ms 385.295057ms 385.567113ms 386.142168ms 390.445226ms 392.178546ms 394.278337ms 394.331335ms 394.518099ms 396.193317ms 396.658011ms 398.134198ms 398.959358ms 400.021305ms 403.295819ms 404.097549ms 407.939613ms 408.681171ms 410.807787ms 411.105522ms 412.258694ms 412.628174ms 413.557531ms 415.316213ms 416.711876ms 420.485819ms 422.112786ms 423.233987ms 428.93591ms 429.920123ms 432.385022ms 432.869175ms 433.289265ms 434.035996ms 435.440812ms 437.950229ms 439.875962ms 439.890935ms 449.592741ms 454.010295ms 454.04183ms 456.404278ms 460.466805ms 469.71705ms 471.670534ms 484.631135ms 488.573493ms 490.278805ms 493.230234ms 493.614579ms 495.332796ms 496.641036ms 502.076776ms 502.928694ms 508.908391ms 512.074796ms 518.263909ms 518.531069ms 521.667641ms 522.15281ms 522.319045ms 523.125484ms 525.215054ms 529.200821ms 535.925298ms 540.226671ms 541.452754ms 549.686333ms 551.473498ms 554.340049ms 555.734936ms 564.309777ms 565.282079ms 571.862857ms 575.693064ms 577.777672ms 580.508586ms 581.978742ms 585.496959ms 593.738585ms 613.181257ms 614.751935ms 616.362935ms 616.725651ms 625.793406ms 635.58311ms 650.323601ms 651.819429ms] + Jul 27 01:33:52.118: INFO: 50 %ile: 375.759144ms + Jul 27 01:33:52.118: INFO: 90 %ile: 551.473498ms + Jul 27 01:33:52.118: INFO: 99 %ile: 650.323601ms + Jul 27 01:33:52.118: INFO: Total sample count: 200 + [AfterEach] [sig-network] Service endpoints latency test/e2e/framework/node/init/init.go:32 - Jun 12 20:44:59.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] RuntimeClass + Jul 27 01:33:52.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Service endpoints latency test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] RuntimeClass + [DeferCleanup (Each)] [sig-network] Service endpoints latency dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] RuntimeClass + [DeferCleanup (Each)] [sig-network] Service endpoints latency tear down framework | framework.go:193 - STEP: Destroying namespace "runtimeclass-2546" for this suite. 06/12/23 20:44:59.316 + STEP: Destroying namespace "svc-latency-61" for this suite. 
07/27/23 01:33:52.134 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-auth] Certificates API [Privileged:ClusterAdmin] - should support CSR API operations [Conformance] - test/e2e/auth/certificates.go:200 -[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] +[sig-auth] ServiceAccounts + should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 +[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:44:59.349 -Jun 12 20:44:59.349: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename certificates 06/12/23 20:44:59.353 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:59.414 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:59.427 -[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:33:52.171 +Jul 27 01:33:52.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename svcaccounts 07/27/23 01:33:52.172 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:52.232 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:52.244 +[BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 -[It] should support CSR API operations [Conformance] - test/e2e/auth/certificates.go:200 -STEP: getting /apis 06/12/23 20:45:01.136 -STEP: getting /apis/certificates.k8s.io 06/12/23 20:45:01.176 -STEP: getting /apis/certificates.k8s.io/v1 06/12/23 20:45:01.184 -STEP: creating 06/12/23 20:45:01.189 -STEP: getting 06/12/23 20:45:01.226 -STEP: listing 06/12/23 20:45:01.238 -STEP: watching 06/12/23 20:45:01.246 -Jun 12 20:45:01.246: INFO: starting watch -STEP: patching 06/12/23 20:45:01.25 -STEP: updating 06/12/23 20:45:01.262 -Jun 12 20:45:01.273: INFO: waiting for watch events with expected annotations -Jun 12 20:45:01.273: INFO: saw patched and updated annotations -STEP: getting /approval 06/12/23 20:45:01.273 -STEP: patching /approval 06/12/23 20:45:01.28 -STEP: updating /approval 06/12/23 20:45:01.293 -STEP: getting /status 06/12/23 20:45:01.307 -STEP: patching /status 06/12/23 20:45:01.315 -STEP: updating /status 06/12/23 20:45:01.351 -STEP: deleting 06/12/23 20:45:01.385 -STEP: deleting a collection 06/12/23 20:45:01.413 -[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] +[It] should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 +STEP: Creating ServiceAccount "e2e-sa-t5c92" 07/27/23 01:33:52.253 +Jul 27 01:33:52.283: INFO: AutomountServiceAccountToken: false +STEP: Updating ServiceAccount "e2e-sa-t5c92" 07/27/23 01:33:52.283 +Jul 27 01:33:52.336: INFO: AutomountServiceAccountToken: true +[AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 -Jun 12 20:45:01.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] +Jul 27 01:33:52.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-auth] ServiceAccounts dump 
namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 -STEP: Destroying namespace "certificates-7198" for this suite. 06/12/23 20:45:01.459 +STEP: Destroying namespace "svcaccounts-3108" for this suite. 07/27/23 01:33:52.349 ------------------------------ -• [2.135 seconds] -[sig-auth] Certificates API [Privileged:ClusterAdmin] +• [0.217 seconds] +[sig-auth] ServiceAccounts test/e2e/auth/framework.go:23 - should support CSR API operations [Conformance] - test/e2e/auth/certificates.go:200 + should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + [BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:44:59.349 - Jun 12 20:44:59.349: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename certificates 06/12/23 20:44:59.353 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:44:59.414 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:44:59.427 - [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:33:52.171 + Jul 27 01:33:52.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename svcaccounts 07/27/23 01:33:52.172 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:52.232 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:52.244 + [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 - [It] should support CSR API operations [Conformance] - test/e2e/auth/certificates.go:200 - STEP: getting /apis 06/12/23 20:45:01.136 - STEP: getting /apis/certificates.k8s.io 06/12/23 20:45:01.176 - STEP: getting /apis/certificates.k8s.io/v1 06/12/23 20:45:01.184 - STEP: creating 06/12/23 20:45:01.189 - STEP: getting 06/12/23 20:45:01.226 - STEP: listing 06/12/23 20:45:01.238 - STEP: watching 06/12/23 20:45:01.246 - Jun 12 20:45:01.246: INFO: starting watch - STEP: patching 06/12/23 20:45:01.25 - STEP: updating 06/12/23 20:45:01.262 - Jun 12 20:45:01.273: INFO: waiting for watch events with expected annotations - Jun 12 20:45:01.273: INFO: saw patched and updated annotations - STEP: getting /approval 06/12/23 20:45:01.273 - STEP: patching /approval 06/12/23 20:45:01.28 - STEP: updating /approval 06/12/23 20:45:01.293 - STEP: getting /status 06/12/23 20:45:01.307 - STEP: patching /status 06/12/23 20:45:01.315 - STEP: updating /status 06/12/23 20:45:01.351 - STEP: deleting 06/12/23 20:45:01.385 - STEP: deleting a collection 06/12/23 20:45:01.413 - [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + [It] should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 + STEP: Creating ServiceAccount "e2e-sa-t5c92" 07/27/23 01:33:52.253 + Jul 27 01:33:52.283: INFO: AutomountServiceAccountToken: false + STEP: Updating ServiceAccount "e2e-sa-t5c92" 07/27/23 01:33:52.283 + Jul 27 01:33:52.336: INFO: AutomountServiceAccountToken: true + [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 - Jun 12 20:45:01.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] 
[sig-auth] Certificates API [Privileged:ClusterAdmin] + Jul 27 01:33:52.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 - STEP: Destroying namespace "certificates-7198" for this suite. 06/12/23 20:45:01.459 + STEP: Destroying namespace "svcaccounts-3108" for this suite. 07/27/23 01:33:52.349 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment - should run the lifecycle of a Deployment [Conformance] - test/e2e/apps/deployment.go:185 + deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 [BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:45:01.485 -Jun 12 20:45:01.485: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename deployment 06/12/23 20:45:01.489 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:01.544 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:01.553 +STEP: Creating a kubernetes client 07/27/23 01:33:52.39 +Jul 27 01:33:52.390: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename deployment 07/27/23 01:33:52.39 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:52.499 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:52.533 [BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Deployment test/e2e/apps/deployment.go:91 -[It] should run the lifecycle of a Deployment [Conformance] - test/e2e/apps/deployment.go:185 -STEP: creating a Deployment 06/12/23 20:45:01.584 -STEP: waiting for Deployment to be created 06/12/23 20:45:01.599 -STEP: waiting for all Replicas to be Ready 06/12/23 20:45:01.605 -Jun 12 20:45:01.612: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] -Jun 12 20:45:01.612: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] -Jun 12 20:45:01.643: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] -Jun 12 20:45:01.643: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] -Jun 12 20:45:01.686: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] -Jun 12 20:45:01.686: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] -Jun 12 20:45:01.715: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] -Jun 12 20:45:01.715: INFO: observed Deployment test-deployment in namespace 
deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] -Jun 12 20:45:04.325: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment-static:true] -Jun 12 20:45:04.326: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment-static:true] -Jun 12 20:45:04.806: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 and labels map[test-deployment-static:true] -STEP: patching the Deployment 06/12/23 20:45:04.807 -W0612 20:45:04.820946 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" -Jun 12 20:45:04.858: INFO: observed event type ADDED -STEP: waiting for Replicas to scale 06/12/23 20:45:04.858 -Jun 12 20:45:04.877: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 -Jun 12 20:45:04.880: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 -Jun 12 20:45:04.880: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 -Jun 12 20:45:04.880: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 -Jun 12 20:45:04.881: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 -Jun 12 20:45:04.881: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 -Jun 12 20:45:04.881: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 -Jun 12 20:45:04.881: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 -Jun 12 20:45:04.882: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:04.882: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:04.882: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:04.884: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:04.884: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:04.884: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:04.884: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:04.884: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:04.923: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:04.923: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:04.923: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:04.923: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:04.958: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:04.958: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:07.290: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:07.290: INFO: observed Deployment test-deployment in namespace deployment-4199 with 
ReadyReplicas 2 -Jun 12 20:45:07.383: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -STEP: listing Deployments 06/12/23 20:45:07.383 -Jun 12 20:45:07.421: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] -STEP: updating the Deployment 06/12/23 20:45:07.421 -Jun 12 20:45:07.450: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -STEP: fetching the DeploymentStatus 06/12/23 20:45:07.45 -Jun 12 20:45:07.467: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] -Jun 12 20:45:07.470: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] -Jun 12 20:45:07.504: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] -Jun 12 20:45:07.524: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] -Jun 12 20:45:07.542: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] -Jun 12 20:45:11.973: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] -Jun 12 20:45:12.003: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] -Jun 12 20:45:12.018: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] -Jun 12 20:45:12.057: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] -Jun 12 20:45:12.070: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] -Jun 12 20:45:14.454: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] -STEP: patching the DeploymentStatus 06/12/23 20:45:14.486 -STEP: fetching the DeploymentStatus 06/12/23 20:45:14.515 -Jun 12 20:45:14.529: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:14.530: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:14.530: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:14.531: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:14.531: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 -Jun 12 20:45:14.532: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:14.532: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:14.532: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 
3 -Jun 12 20:45:14.532: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:14.532: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 -Jun 12 20:45:14.533: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 3 -STEP: deleting the Deployment 06/12/23 20:45:14.533 -Jun 12 20:45:14.561: INFO: observed event type MODIFIED -Jun 12 20:45:14.561: INFO: observed event type MODIFIED -Jun 12 20:45:14.561: INFO: observed event type MODIFIED -Jun 12 20:45:14.562: INFO: observed event type MODIFIED -Jun 12 20:45:14.562: INFO: observed event type MODIFIED -Jun 12 20:45:14.562: INFO: observed event type MODIFIED -Jun 12 20:45:14.562: INFO: observed event type MODIFIED -Jun 12 20:45:14.562: INFO: observed event type MODIFIED -Jun 12 20:45:14.562: INFO: observed event type MODIFIED -Jun 12 20:45:14.563: INFO: observed event type MODIFIED -Jun 12 20:45:14.563: INFO: observed event type MODIFIED -Jun 12 20:45:14.563: INFO: observed event type MODIFIED +[It] deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 +Jul 27 01:33:52.567: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Jul 27 01:33:57.576: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 07/27/23 01:33:57.576 +Jul 27 01:33:57.576: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Jul 27 01:33:59.589: INFO: Creating deployment "test-rollover-deployment" +Jul 27 01:33:59.617: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Jul 27 01:34:01.641: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Jul 27 01:34:01.659: INFO: Ensure that both replica sets have 1 created replica +Jul 27 01:34:01.687: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Jul 27 01:34:01.710: INFO: Updating deployment test-rollover-deployment +Jul 27 01:34:01.710: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Jul 27 01:34:03.739: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Jul 27 01:34:03.756: INFO: Make sure deployment "test-rollover-deployment" is complete +Jul 27 01:34:03.774: INFO: all replica sets need to contain the pod-template-hash label +Jul 27 01:34:03.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:34:05.792: INFO: all replica sets need to contain the pod-template-hash label +Jul 27 01:34:05.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:34:07.804: INFO: all replica sets need to contain the pod-template-hash label +Jul 27 01:34:07.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:34:09.797: INFO: all replica sets need to contain the pod-template-hash label +Jul 27 01:34:09.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:34:11.800: INFO: all replica sets need to contain the pod-template-hash label +Jul 27 01:34:11.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:34:13.792: INFO: +Jul 27 01:34:13.792: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 13, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:34:15.797: INFO: +Jul 27 01:34:15.797: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment test/e2e/apps/deployment.go:84 -Jun 12 20:45:14.576: INFO: Log out all the ReplicaSets if there is no deployment created +Jul 27 01:34:15.821: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-2376 13a55010-312c-45b1-a99b-35036dc81fe2 66029 2 2023-07-27 01:33:59 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-07-27 01:34:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:34:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004caec68 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-07-27 01:33:59 +0000 UTC,LastTransitionTime:2023-07-27 01:33:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6c6df9974f" has successfully progressed.,LastUpdateTime:2023-07-27 01:34:13 +0000 UTC,LastTransitionTime:2023-07-27 01:33:59 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Jul 27 01:34:15.829: INFO: New ReplicaSet "test-rollover-deployment-6c6df9974f" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-6c6df9974f deployment-2376 3e91529e-f419-45df-b3a7-36da96b560fe 66019 2 2023-07-27 01:34:01 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 13a55010-312c-45b1-a99b-35036dc81fe2 0xc004bcedf7 0xc004bcedf8}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:34:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13a55010-312c-45b1-a99b-35036dc81fe2\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:34:13 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6c6df9974f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bceea8 ClusterFirst map[] 
false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jul 27 01:34:15.829: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Jul 27 01:34:15.829: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2376 f363d553-e9a2-46cb-abdc-e1604e191a96 66028 2 2023-07-27 01:33:52 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 13a55010-312c-45b1-a99b-35036dc81fe2 0xc004bcecc7 0xc004bcecc8}] [] [{e2e.test Update apps/v1 2023-07-27 01:33:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:34:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13a55010-312c-45b1-a99b-35036dc81fe2\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:34:13 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004bced88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jul 27 01:34:15.829: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-768dcbc65b deployment-2376 a50378b3-6fb6-4222-b770-f691365714a3 65397 2 2023-07-27 01:33:59 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 13a55010-312c-45b1-a99b-35036dc81fe2 0xc004bcef17 0xc004bcef18}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:34:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13a55010-312c-45b1-a99b-35036dc81fe2\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:34:01 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 768dcbc65b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bcefc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jul 27 01:34:15.838: INFO: Pod "test-rollover-deployment-6c6df9974f-rsldm" is available: +&Pod{ObjectMeta:{test-rollover-deployment-6c6df9974f-rsldm test-rollover-deployment-6c6df9974f- deployment-2376 531e45ca-ecdd-4dd6-96ef-5ebb3284bc6a 65566 0 2023-07-27 01:34:01 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[cni.projectcalico.org/containerID:63384dfc7f315dd47c1cd7dca4f34b82fec65c7773c3dbc3b3f64d37c82cca67 cni.projectcalico.org/podIP:172.17.230.183/32 cni.projectcalico.org/podIPs:172.17.230.183/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.230.183" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-rollover-deployment-6c6df9974f 3e91529e-f419-45df-b3a7-36da96b560fe 0xc004bcf527 0xc004bcf528}] [] [{kube-controller-manager Update v1 2023-07-27 01:34:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e91529e-f419-45df-b3a7-36da96b560fe\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 01:34:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 01:34:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 01:34:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.230.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6ngpn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6ngpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termina
tion-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c33,c22,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-8dx9f,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:34:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:34:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:34:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:34:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.18,PodIP:172.17.230.183,StartTime:2023-07-27 01:34:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 01:34:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://77e1ee2b2690f4b1212b75d376399387319dc8c83e7663e24a8f021bb3c55f59,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.230.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 -Jun 12 20:45:14.586: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready +Jul 27 01:34:15.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 -STEP: Destroying namespace "deployment-4199" for this suite. 06/12/23 20:45:14.609 +STEP: Destroying namespace "deployment-2376" for this suite. 07/27/23 01:34:15.85 ------------------------------ -• [SLOW TEST] [13.171 seconds] +• [SLOW TEST] [23.482 seconds] [sig-apps] Deployment test/e2e/apps/framework.go:23 - should run the lifecycle of a Deployment [Conformance] - test/e2e/apps/deployment.go:185 + deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:45:01.485 - Jun 12 20:45:01.485: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename deployment 06/12/23 20:45:01.489 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:01.544 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:01.553 + STEP: Creating a kubernetes client 07/27/23 01:33:52.39 + Jul 27 01:33:52.390: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename deployment 07/27/23 01:33:52.39 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:33:52.499 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:33:52.533 [BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Deployment test/e2e/apps/deployment.go:91 - [It] should run the lifecycle of a Deployment [Conformance] - test/e2e/apps/deployment.go:185 - STEP: creating a Deployment 06/12/23 20:45:01.584 - STEP: waiting for Deployment to be created 06/12/23 20:45:01.599 - STEP: waiting for all Replicas to be Ready 06/12/23 20:45:01.605 - Jun 12 20:45:01.612: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] - Jun 12 20:45:01.612: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] - Jun 12 20:45:01.643: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] - Jun 12 20:45:01.643: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] - Jun 12 20:45:01.686: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] - Jun 12 20:45:01.686: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] - Jun 12 20:45:01.715: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] - Jun 12 20:45:01.715: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 and labels map[test-deployment-static:true] - Jun 12 20:45:04.325: INFO: observed Deployment test-deployment in namespace deployment-4199 
with ReadyReplicas 1 and labels map[test-deployment-static:true] - Jun 12 20:45:04.326: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment-static:true] - Jun 12 20:45:04.806: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 and labels map[test-deployment-static:true] - STEP: patching the Deployment 06/12/23 20:45:04.807 - W0612 20:45:04.820946 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" - Jun 12 20:45:04.858: INFO: observed event type ADDED - STEP: waiting for Replicas to scale 06/12/23 20:45:04.858 - Jun 12 20:45:04.877: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 - Jun 12 20:45:04.880: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 - Jun 12 20:45:04.880: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 - Jun 12 20:45:04.880: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 - Jun 12 20:45:04.881: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 - Jun 12 20:45:04.881: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 - Jun 12 20:45:04.881: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 - Jun 12 20:45:04.881: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 0 - Jun 12 20:45:04.882: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:04.882: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:04.882: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:04.884: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:04.884: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:04.884: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:04.884: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:04.884: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:04.923: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:04.923: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:04.923: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:04.923: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:04.958: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:04.958: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:07.290: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:07.290: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:07.383: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - STEP: listing 
Deployments 06/12/23 20:45:07.383 - Jun 12 20:45:07.421: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] - STEP: updating the Deployment 06/12/23 20:45:07.421 - Jun 12 20:45:07.450: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - STEP: fetching the DeploymentStatus 06/12/23 20:45:07.45 - Jun 12 20:45:07.467: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] - Jun 12 20:45:07.470: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] - Jun 12 20:45:07.504: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] - Jun 12 20:45:07.524: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] - Jun 12 20:45:07.542: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] - Jun 12 20:45:11.973: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] - Jun 12 20:45:12.003: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] - Jun 12 20:45:12.018: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] - Jun 12 20:45:12.057: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] - Jun 12 20:45:12.070: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] - Jun 12 20:45:14.454: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] - STEP: patching the DeploymentStatus 06/12/23 20:45:14.486 - STEP: fetching the DeploymentStatus 06/12/23 20:45:14.515 - Jun 12 20:45:14.529: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:14.530: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:14.530: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:14.531: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:14.531: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 1 - Jun 12 20:45:14.532: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:14.532: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:14.532: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 3 - Jun 12 20:45:14.532: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - 
Jun 12 20:45:14.532: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 2 - Jun 12 20:45:14.533: INFO: observed Deployment test-deployment in namespace deployment-4199 with ReadyReplicas 3 - STEP: deleting the Deployment 06/12/23 20:45:14.533 - Jun 12 20:45:14.561: INFO: observed event type MODIFIED - Jun 12 20:45:14.561: INFO: observed event type MODIFIED - Jun 12 20:45:14.561: INFO: observed event type MODIFIED - Jun 12 20:45:14.562: INFO: observed event type MODIFIED - Jun 12 20:45:14.562: INFO: observed event type MODIFIED - Jun 12 20:45:14.562: INFO: observed event type MODIFIED - Jun 12 20:45:14.562: INFO: observed event type MODIFIED - Jun 12 20:45:14.562: INFO: observed event type MODIFIED - Jun 12 20:45:14.562: INFO: observed event type MODIFIED - Jun 12 20:45:14.563: INFO: observed event type MODIFIED - Jun 12 20:45:14.563: INFO: observed event type MODIFIED - Jun 12 20:45:14.563: INFO: observed event type MODIFIED + [It] deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 + Jul 27 01:33:52.567: INFO: Pod name rollover-pod: Found 0 pods out of 1 + Jul 27 01:33:57.576: INFO: Pod name rollover-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 07/27/23 01:33:57.576 + Jul 27 01:33:57.576: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready + Jul 27 01:33:59.589: INFO: Creating deployment "test-rollover-deployment" + Jul 27 01:33:59.617: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations + Jul 27 01:34:01.641: INFO: Check revision of new replica set for deployment "test-rollover-deployment" + Jul 27 01:34:01.659: INFO: Ensure that both replica sets have 1 created replica + Jul 27 01:34:01.687: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update + Jul 27 01:34:01.710: INFO: Updating deployment test-rollover-deployment + Jul 27 01:34:01.710: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller + Jul 27 01:34:03.739: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 + Jul 27 01:34:03.756: INFO: Make sure deployment "test-rollover-deployment" is complete + Jul 27 01:34:03.774: INFO: all replica sets need to contain the pod-template-hash label + Jul 27 01:34:03.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:34:05.792: INFO: all replica sets need to contain the pod-template-hash label + Jul 27 01:34:05.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:34:07.804: INFO: all replica sets need to contain the pod-template-hash label + Jul 27 01:34:07.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:34:09.797: INFO: all replica sets need to contain the pod-template-hash label + Jul 27 01:34:09.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:34:11.800: INFO: all replica sets need to contain the pod-template-hash label + Jul 27 01:34:11.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:34:13.792: INFO: + Jul 27 01:34:13.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, 
UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 34, 13, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 33, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:34:15.797: INFO: + Jul 27 01:34:15.797: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment test/e2e/apps/deployment.go:84 - Jun 12 20:45:14.576: INFO: Log out all the ReplicaSets if there is no deployment created + Jul 27 01:34:15.821: INFO: Deployment "test-rollover-deployment": + &Deployment{ObjectMeta:{test-rollover-deployment deployment-2376 13a55010-312c-45b1-a99b-35036dc81fe2 66029 2 2023-07-27 01:33:59 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-07-27 01:34:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:34:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004caec68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler 
[] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-07-27 01:33:59 +0000 UTC,LastTransitionTime:2023-07-27 01:33:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6c6df9974f" has successfully progressed.,LastUpdateTime:2023-07-27 01:34:13 +0000 UTC,LastTransitionTime:2023-07-27 01:33:59 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Jul 27 01:34:15.829: INFO: New ReplicaSet "test-rollover-deployment-6c6df9974f" of Deployment "test-rollover-deployment": + &ReplicaSet{ObjectMeta:{test-rollover-deployment-6c6df9974f deployment-2376 3e91529e-f419-45df-b3a7-36da96b560fe 66019 2 2023-07-27 01:34:01 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 13a55010-312c-45b1-a99b-35036dc81fe2 0xc004bcedf7 0xc004bcedf8}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:34:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13a55010-312c-45b1-a99b-35036dc81fe2\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:34:13 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6c6df9974f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bceea8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Jul 27 01:34:15.829: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": + Jul 27 01:34:15.829: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2376 f363d553-e9a2-46cb-abdc-e1604e191a96 66028 2 2023-07-27 01:33:52 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 13a55010-312c-45b1-a99b-35036dc81fe2 0xc004bcecc7 0xc004bcecc8}] [] [{e2e.test Update apps/v1 2023-07-27 01:33:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:34:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13a55010-312c-45b1-a99b-35036dc81fe2\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:34:13 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004bced88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jul 27 01:34:15.829: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-768dcbc65b deployment-2376 a50378b3-6fb6-4222-b770-f691365714a3 65397 2 2023-07-27 01:33:59 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 13a55010-312c-45b1-a99b-35036dc81fe2 0xc004bcef17 0xc004bcef18}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:34:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13a55010-312c-45b1-a99b-35036dc81fe2\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:34:01 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 768dcbc65b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bcefc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jul 27 01:34:15.838: INFO: Pod "test-rollover-deployment-6c6df9974f-rsldm" is available: + &Pod{ObjectMeta:{test-rollover-deployment-6c6df9974f-rsldm test-rollover-deployment-6c6df9974f- deployment-2376 531e45ca-ecdd-4dd6-96ef-5ebb3284bc6a 65566 0 2023-07-27 01:34:01 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[cni.projectcalico.org/containerID:63384dfc7f315dd47c1cd7dca4f34b82fec65c7773c3dbc3b3f64d37c82cca67 cni.projectcalico.org/podIP:172.17.230.183/32 cni.projectcalico.org/podIPs:172.17.230.183/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.230.183" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-rollover-deployment-6c6df9974f 3e91529e-f419-45df-b3a7-36da96b560fe 0xc004bcf527 0xc004bcf528}] [] [{kube-controller-manager Update v1 2023-07-27 01:34:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e91529e-f419-45df-b3a7-36da96b560fe\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 01:34:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 01:34:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 01:34:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.230.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6ngpn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6ngpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termina
tion-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c33,c22,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-8dx9f,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:34:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:34:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:34:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:34:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.18,PodIP:172.17.230.183,StartTime:2023-07-27 01:34:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 01:34:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://77e1ee2b2690f4b1212b75d376399387319dc8c83e7663e24a8f021bb3c55f59,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.230.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 - Jun 12 20:45:14.586: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready + Jul 27 01:34:15.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 - STEP: Destroying namespace "deployment-4199" for this suite. 06/12/23 20:45:14.609 + STEP: Destroying namespace "deployment-2376" for this suite. 07/27/23 01:34:15.85 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS ------------------------------- -[sig-network] IngressClass API - should support creating IngressClass API operations [Conformance] - test/e2e/network/ingressclass.go:223 -[BeforeEach] [sig-network] IngressClass API +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:45:14.677 -Jun 12 20:45:14.677: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename ingressclass 06/12/23 20:45:14.682 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:14.746 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:14.788 -[BeforeEach] [sig-network] IngressClass API +STEP: Creating a kubernetes client 07/27/23 01:34:15.871 +Jul 27 01:34:15.871: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 01:34:15.872 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:34:15.914 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:34:15.935 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] IngressClass API - test/e2e/network/ingressclass.go:211 -[It] should support creating IngressClass API operations [Conformance] - test/e2e/network/ingressclass.go:223 -STEP: getting /apis 06/12/23 20:45:14.83 -STEP: getting /apis/networking.k8s.io 06/12/23 20:45:14.869 -STEP: getting /apis/networking.k8s.iov1 06/12/23 20:45:14.875 -STEP: creating 06/12/23 20:45:14.916 -STEP: getting 06/12/23 20:45:14.95 -STEP: listing 06/12/23 20:45:14.994 -STEP: watching 06/12/23 20:45:15.016 -Jun 12 20:45:15.016: INFO: starting watch -STEP: patching 06/12/23 20:45:15.02 -STEP: updating 06/12/23 20:45:15.03 -Jun 12 20:45:15.039: INFO: waiting for watch events with expected annotations -Jun 12 20:45:15.039: INFO: saw patched and updated annotations -STEP: deleting 06/12/23 20:45:15.039 -STEP: deleting a collection 06/12/23 20:45:15.061 -[AfterEach] [sig-network] IngressClass API +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 +STEP: validating cluster-info 07/27/23 01:34:15.948 +Jul 27 01:34:15.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7579 cluster-info' +Jul 27 01:34:16.171: INFO: stderr: "" +Jul 27 01:34:16.171: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.21.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" 
+[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 20:45:15.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] IngressClass API +Jul 27 01:34:16.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] IngressClass API +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] IngressClass API +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "ingressclass-8184" for this suite. 06/12/23 20:45:15.112 +STEP: Destroying namespace "kubectl-7579" for this suite. 07/27/23 01:34:16.183 ------------------------------ -• [0.473 seconds] -[sig-network] IngressClass API -test/e2e/network/common/framework.go:23 - should support creating IngressClass API operations [Conformance] - test/e2e/network/ingressclass.go:223 +• [0.335 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl cluster-info + test/e2e/kubectl/kubectl.go:1244 + should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] IngressClass API + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:45:14.677 - Jun 12 20:45:14.677: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename ingressclass 06/12/23 20:45:14.682 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:14.746 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:14.788 - [BeforeEach] [sig-network] IngressClass API + STEP: Creating a kubernetes client 07/27/23 01:34:15.871 + Jul 27 01:34:15.871: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 01:34:15.872 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:34:15.914 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:34:15.935 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] IngressClass API - test/e2e/network/ingressclass.go:211 - [It] should support creating IngressClass API operations [Conformance] - test/e2e/network/ingressclass.go:223 - STEP: getting /apis 06/12/23 20:45:14.83 - STEP: getting /apis/networking.k8s.io 06/12/23 20:45:14.869 - STEP: getting /apis/networking.k8s.iov1 06/12/23 20:45:14.875 - STEP: creating 06/12/23 20:45:14.916 - STEP: getting 06/12/23 20:45:14.95 - STEP: listing 06/12/23 20:45:14.994 - STEP: watching 06/12/23 20:45:15.016 - Jun 12 20:45:15.016: INFO: starting watch - STEP: patching 06/12/23 20:45:15.02 - STEP: updating 06/12/23 20:45:15.03 - Jun 12 20:45:15.039: INFO: waiting for watch events with expected annotations - Jun 12 20:45:15.039: INFO: saw patched and updated annotations - STEP: deleting 06/12/23 20:45:15.039 - STEP: deleting a collection 06/12/23 20:45:15.061 - [AfterEach] [sig-network] IngressClass API + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 + 
STEP: validating cluster-info 07/27/23 01:34:15.948 + Jul 27 01:34:15.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7579 cluster-info' + Jul 27 01:34:16.171: INFO: stderr: "" + Jul 27 01:34:16.171: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.21.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 20:45:15.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] IngressClass API + Jul 27 01:34:16.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] IngressClass API + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] IngressClass API + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "ingressclass-8184" for this suite. 06/12/23 20:45:15.112 + STEP: Destroying namespace "kubectl-7579" for this suite. 07/27/23 01:34:16.183 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSS ------------------------------ -[sig-storage] Projected secret - should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:46 -[BeforeEach] [sig-storage] Projected secret +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 +[BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:45:15.153 -Jun 12 20:45:15.153: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 20:45:15.156 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:15.238 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:15.249 -[BeforeEach] [sig-storage] Projected secret +STEP: Creating a kubernetes client 07/27/23 01:34:16.207 +Jul 27 01:34:16.207: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 01:34:16.208 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:34:16.248 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:34:16.259 +[BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:46 -STEP: Creating projection with secret that has name projected-secret-test-21cd559b-300f-4255-8ba5-edf087f0c6f2 06/12/23 20:45:15.284 -STEP: Creating a pod to test consume secrets 06/12/23 20:45:15.303 -Jun 12 20:45:15.400: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432" in namespace "projected-1553" to be "Succeeded or Failed" -Jun 12 20:45:15.412: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.267723ms -Jun 12 20:45:17.461: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060592147s -Jun 12 20:45:19.439: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039082168s -Jun 12 20:45:21.422: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021801163s -Jun 12 20:45:23.429: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.028588336s -STEP: Saw pod success 06/12/23 20:45:23.429 -Jun 12 20:45:23.429: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432" satisfied condition "Succeeded or Failed" -Jun 12 20:45:23.436: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432 container projected-secret-volume-test: -STEP: delete the pod 06/12/23 20:45:23.451 -Jun 12 20:45:23.471: INFO: Waiting for pod pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432 to disappear -Jun 12 20:45:23.477: INFO: Pod pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432 no longer exists -[AfterEach] [sig-storage] Projected secret +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 +STEP: Creating secret with name secret-test-8de5b7d9-6652-4541-94b9-ca75d8d55c02 07/27/23 01:34:16.269 +STEP: Creating a pod to test consume secrets 07/27/23 01:34:16.283 +Jul 27 01:34:16.312: INFO: Waiting up to 5m0s for pod "pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856" in namespace "secrets-319" to be "Succeeded or Failed" +Jul 27 01:34:16.322: INFO: Pod "pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856": Phase="Pending", Reason="", readiness=false. Elapsed: 9.454354ms +Jul 27 01:34:18.332: INFO: Pod "pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019089399s +Jul 27 01:34:20.341: INFO: Pod "pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028577047s +STEP: Saw pod success 07/27/23 01:34:20.341 +Jul 27 01:34:20.341: INFO: Pod "pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856" satisfied condition "Succeeded or Failed" +Jul 27 01:34:20.382: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856 container secret-volume-test: +STEP: delete the pod 07/27/23 01:34:20.406 +Jul 27 01:34:20.435: INFO: Waiting for pod pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856 to disappear +Jul 27 01:34:20.463: INFO: Pod pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856 no longer exists +[AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 20:45:23.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected secret +Jul 27 01:34:20.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "projected-1553" for this suite. 06/12/23 20:45:23.489 +STEP: Destroying namespace "secrets-319" for this suite. 07/27/23 01:34:20.499 ------------------------------ -• [SLOW TEST] [8.356 seconds] -[sig-storage] Projected secret +• [4.325 seconds] +[sig-storage] Secrets test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:46 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected secret + [BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:45:15.153 - Jun 12 20:45:15.153: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 20:45:15.156 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:15.238 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:15.249 - [BeforeEach] [sig-storage] Projected secret + STEP: Creating a kubernetes client 07/27/23 01:34:16.207 + Jul 27 01:34:16.207: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 01:34:16.208 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:34:16.248 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:34:16.259 + [BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:46 - STEP: Creating projection with secret that has name projected-secret-test-21cd559b-300f-4255-8ba5-edf087f0c6f2 06/12/23 20:45:15.284 - STEP: Creating a pod to test consume secrets 06/12/23 20:45:15.303 - Jun 12 20:45:15.400: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432" in namespace "projected-1553" to be "Succeeded or Failed" - Jun 12 20:45:15.412: INFO: Pod 
"pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432": Phase="Pending", Reason="", readiness=false. Elapsed: 12.267723ms - Jun 12 20:45:17.461: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060592147s - Jun 12 20:45:19.439: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039082168s - Jun 12 20:45:21.422: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021801163s - Jun 12 20:45:23.429: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.028588336s - STEP: Saw pod success 06/12/23 20:45:23.429 - Jun 12 20:45:23.429: INFO: Pod "pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432" satisfied condition "Succeeded or Failed" - Jun 12 20:45:23.436: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432 container projected-secret-volume-test: - STEP: delete the pod 06/12/23 20:45:23.451 - Jun 12 20:45:23.471: INFO: Waiting for pod pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432 to disappear - Jun 12 20:45:23.477: INFO: Pod pod-projected-secrets-443b9283-da48-48ca-b92e-83b5e21d3432 no longer exists - [AfterEach] [sig-storage] Projected secret + [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 + STEP: Creating secret with name secret-test-8de5b7d9-6652-4541-94b9-ca75d8d55c02 07/27/23 01:34:16.269 + STEP: Creating a pod to test consume secrets 07/27/23 01:34:16.283 + Jul 27 01:34:16.312: INFO: Waiting up to 5m0s for pod "pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856" in namespace "secrets-319" to be "Succeeded or Failed" + Jul 27 01:34:16.322: INFO: Pod "pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856": Phase="Pending", Reason="", readiness=false. Elapsed: 9.454354ms + Jul 27 01:34:18.332: INFO: Pod "pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019089399s + Jul 27 01:34:20.341: INFO: Pod "pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028577047s + STEP: Saw pod success 07/27/23 01:34:20.341 + Jul 27 01:34:20.341: INFO: Pod "pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856" satisfied condition "Succeeded or Failed" + Jul 27 01:34:20.382: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856 container secret-volume-test: + STEP: delete the pod 07/27/23 01:34:20.406 + Jul 27 01:34:20.435: INFO: Waiting for pod pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856 to disappear + Jul 27 01:34:20.463: INFO: Pod pod-secrets-03ef88bb-10a4-4ff8-b039-7af2164dd856 no longer exists + [AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 20:45:23.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected secret + Jul 27 01:34:20.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "projected-1553" for this suite. 06/12/23 20:45:23.489 + STEP: Destroying namespace "secrets-319" for this suite. 07/27/23 01:34:20.499 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSSSSSSS ------------------------------ -[sig-storage] ConfigMap - should be consumable from pods in volume as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:74 -[BeforeEach] [sig-storage] ConfigMap +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:45:23.511 -Jun 12 20:45:23.511: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 20:45:23.515 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:23.572 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:23.581 -[BeforeEach] [sig-storage] ConfigMap +STEP: Creating a kubernetes client 07/27/23 01:34:20.532 +Jul 27 01:34:20.532: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 01:34:20.543 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:34:20.628 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:34:20.736 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:74 -STEP: Creating configMap with name configmap-test-volume-a3f03d63-cd1b-440a-bd39-e022c6d636fe 06/12/23 20:45:23.591 -STEP: Creating a pod to test consume configMaps 06/12/23 20:45:23.604 -Jun 12 20:45:23.625: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0" in namespace "configmap-5342" to be "Succeeded or Failed" -Jun 12 20:45:23.633: INFO: Pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.473918ms -Jun 12 20:45:25.641: INFO: Pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015844356s -Jun 12 20:45:27.642: INFO: Pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017057939s -Jun 12 20:45:29.650: INFO: Pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025146811s -STEP: Saw pod success 06/12/23 20:45:29.65 -Jun 12 20:45:29.650: INFO: Pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0" satisfied condition "Succeeded or Failed" -Jun 12 20:45:29.661: INFO: Trying to get logs from node 10.138.75.112 pod pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0 container agnhost-container: -STEP: delete the pod 06/12/23 20:45:29.722 -Jun 12 20:45:29.760: INFO: Waiting for pod pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0 to disappear -Jun 12 20:45:29.766: INFO: Pod pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0 no longer exists -[AfterEach] [sig-storage] ConfigMap +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 +STEP: Creating a pod to test emptydir 0777 on node default medium 07/27/23 01:34:20.753 +Jul 27 01:34:20.798: INFO: Waiting up to 5m0s for pod "pod-8ad1d437-765f-49d3-9184-30b2de0bae09" in namespace "emptydir-1083" to be "Succeeded or Failed" +Jul 27 01:34:20.827: INFO: Pod "pod-8ad1d437-765f-49d3-9184-30b2de0bae09": Phase="Pending", Reason="", readiness=false. Elapsed: 28.977054ms +Jul 27 01:34:22.851: INFO: Pod "pod-8ad1d437-765f-49d3-9184-30b2de0bae09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052897049s +Jul 27 01:34:24.851: INFO: Pod "pod-8ad1d437-765f-49d3-9184-30b2de0bae09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053471913s +STEP: Saw pod success 07/27/23 01:34:24.851 +Jul 27 01:34:24.852: INFO: Pod "pod-8ad1d437-765f-49d3-9184-30b2de0bae09" satisfied condition "Succeeded or Failed" +Jul 27 01:34:24.861: INFO: Trying to get logs from node 10.245.128.19 pod pod-8ad1d437-765f-49d3-9184-30b2de0bae09 container test-container: +STEP: delete the pod 07/27/23 01:34:24.906 +Jul 27 01:34:24.926: INFO: Waiting for pod pod-8ad1d437-765f-49d3-9184-30b2de0bae09 to disappear +Jul 27 01:34:24.934: INFO: Pod pod-8ad1d437-765f-49d3-9184-30b2de0bae09 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 20:45:29.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] ConfigMap +Jul 27 01:34:24.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-5342" for this suite. 06/12/23 20:45:29.779 +STEP: Destroying namespace "emptydir-1083" for this suite. 
07/27/23 01:34:24.947 ------------------------------ -• [SLOW TEST] [6.290 seconds] -[sig-storage] ConfigMap +• [4.437 seconds] +[sig-storage] EmptyDir volumes test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:74 + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] ConfigMap + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:45:23.511 - Jun 12 20:45:23.511: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 20:45:23.515 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:23.572 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:23.581 - [BeforeEach] [sig-storage] ConfigMap + STEP: Creating a kubernetes client 07/27/23 01:34:20.532 + Jul 27 01:34:20.532: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 01:34:20.543 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:34:20.628 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:34:20.736 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:74 - STEP: Creating configMap with name configmap-test-volume-a3f03d63-cd1b-440a-bd39-e022c6d636fe 06/12/23 20:45:23.591 - STEP: Creating a pod to test consume configMaps 06/12/23 20:45:23.604 - Jun 12 20:45:23.625: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0" in namespace "configmap-5342" to be "Succeeded or Failed" - Jun 12 20:45:23.633: INFO: Pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.473918ms - Jun 12 20:45:25.641: INFO: Pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015844356s - Jun 12 20:45:27.642: INFO: Pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017057939s - Jun 12 20:45:29.650: INFO: Pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025146811s - STEP: Saw pod success 06/12/23 20:45:29.65 - Jun 12 20:45:29.650: INFO: Pod "pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0" satisfied condition "Succeeded or Failed" - Jun 12 20:45:29.661: INFO: Trying to get logs from node 10.138.75.112 pod pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0 container agnhost-container: - STEP: delete the pod 06/12/23 20:45:29.722 - Jun 12 20:45:29.760: INFO: Waiting for pod pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0 to disappear - Jun 12 20:45:29.766: INFO: Pod pod-configmaps-5e976fd8-70cb-4b14-af3d-1738639ba3e0 no longer exists - [AfterEach] [sig-storage] ConfigMap + [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 + STEP: Creating a pod to test emptydir 0777 on node default medium 07/27/23 01:34:20.753 + Jul 27 01:34:20.798: INFO: Waiting up to 5m0s for pod "pod-8ad1d437-765f-49d3-9184-30b2de0bae09" in namespace "emptydir-1083" to be "Succeeded or Failed" + Jul 27 01:34:20.827: INFO: Pod "pod-8ad1d437-765f-49d3-9184-30b2de0bae09": Phase="Pending", Reason="", readiness=false. Elapsed: 28.977054ms + Jul 27 01:34:22.851: INFO: Pod "pod-8ad1d437-765f-49d3-9184-30b2de0bae09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052897049s + Jul 27 01:34:24.851: INFO: Pod "pod-8ad1d437-765f-49d3-9184-30b2de0bae09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053471913s + STEP: Saw pod success 07/27/23 01:34:24.851 + Jul 27 01:34:24.852: INFO: Pod "pod-8ad1d437-765f-49d3-9184-30b2de0bae09" satisfied condition "Succeeded or Failed" + Jul 27 01:34:24.861: INFO: Trying to get logs from node 10.245.128.19 pod pod-8ad1d437-765f-49d3-9184-30b2de0bae09 container test-container: + STEP: delete the pod 07/27/23 01:34:24.906 + Jul 27 01:34:24.926: INFO: Waiting for pod pod-8ad1d437-765f-49d3-9184-30b2de0bae09 to disappear + Jul 27 01:34:24.934: INFO: Pod pod-8ad1d437-765f-49d3-9184-30b2de0bae09 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 20:45:29.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] ConfigMap + Jul 27 01:34:24.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-5342" for this suite. 06/12/23 20:45:29.779 + STEP: Destroying namespace "emptydir-1083" for this suite. 
07/27/23 01:34:24.947 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-storage] Projected secret - should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:119 -[BeforeEach] [sig-storage] Projected secret +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 +[BeforeEach] [sig-network] Networking set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:45:29.811 -Jun 12 20:45:29.813: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 20:45:29.816 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:29.877 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:29.888 -[BeforeEach] [sig-storage] Projected secret +STEP: Creating a kubernetes client 07/27/23 01:34:24.969 +Jul 27 01:34:24.969: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pod-network-test 07/27/23 01:34:24.97 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:34:25.011 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:34:25.019 +[BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:119 -STEP: Creating secret with name projected-secret-test-9277e0ba-c14c-4353-89d1-672078124fdb 06/12/23 20:45:29.904 -STEP: Creating a pod to test consume secrets 06/12/23 20:45:29.918 -Jun 12 20:45:29.939: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454" in namespace "projected-5044" to be "Succeeded or Failed" -Jun 12 20:45:29.956: INFO: Pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454": Phase="Pending", Reason="", readiness=false. Elapsed: 16.685574ms -Jun 12 20:45:31.963: INFO: Pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023599904s -Jun 12 20:45:33.964: INFO: Pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024563767s -Jun 12 20:45:35.966: INFO: Pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.026540523s -STEP: Saw pod success 06/12/23 20:45:35.966 -Jun 12 20:45:35.967: INFO: Pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454" satisfied condition "Succeeded or Failed" -Jun 12 20:45:35.976: INFO: Trying to get logs from node 10.138.75.112 pod pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454 container secret-volume-test: -STEP: delete the pod 06/12/23 20:45:36.024 -Jun 12 20:45:36.062: INFO: Waiting for pod pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454 to disappear -Jun 12 20:45:36.068: INFO: Pod pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454 no longer exists -[AfterEach] [sig-storage] Projected secret +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 +STEP: Performing setup for networking test in namespace pod-network-test-1266 07/27/23 01:34:25.028 +STEP: creating a selector 07/27/23 01:34:25.029 +STEP: Creating the service pods in kubernetes 07/27/23 01:34:25.029 +Jul 27 01:34:25.029: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Jul 27 01:34:25.116: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-1266" to be "running and ready" +Jul 27 01:34:25.131: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.344962ms +Jul 27 01:34:25.131: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:34:27.143: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.026624817s +Jul 27 01:34:27.143: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:34:29.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.025104536s +Jul 27 01:34:29.141: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:34:31.140: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.024358739s +Jul 27 01:34:31.140: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:34:33.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.02527627s +Jul 27 01:34:33.141: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:34:35.140: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.024212501s +Jul 27 01:34:35.140: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:34:37.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.02488052s +Jul 27 01:34:37.141: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:34:39.147: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.030964844s +Jul 27 01:34:39.147: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:34:41.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.025096101s +Jul 27 01:34:41.141: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:34:43.147: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.030752514s +Jul 27 01:34:43.147: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:34:45.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.024626816s +Jul 27 01:34:45.141: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:34:47.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.025251391s +Jul 27 01:34:47.141: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Jul 27 01:34:47.141: INFO: Pod "netserver-0" satisfied condition "running and ready" +Jul 27 01:34:47.152: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-1266" to be "running and ready" +Jul 27 01:34:47.160: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.612575ms +Jul 27 01:34:47.160: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Jul 27 01:34:47.160: INFO: Pod "netserver-1" satisfied condition "running and ready" +Jul 27 01:34:47.169: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-1266" to be "running and ready" +Jul 27 01:34:47.177: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 7.731222ms +Jul 27 01:34:47.177: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Jul 27 01:34:47.177: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 07/27/23 01:34:47.185 +Jul 27 01:34:47.220: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-1266" to be "running" +Jul 27 01:34:47.230: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 9.581827ms +Jul 27 01:34:49.239: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019003921s +Jul 27 01:34:51.240: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.019587561s +Jul 27 01:34:51.240: INFO: Pod "test-container-pod" satisfied condition "running" +Jul 27 01:34:51.247: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-1266" to be "running" +Jul 27 01:34:51.255: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 7.836577ms +Jul 27 01:34:51.255: INFO: Pod "host-test-container-pod" satisfied condition "running" +Jul 27 01:34:51.262: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Jul 27 01:34:51.262: INFO: Going to poll 172.17.218.45 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Jul 27 01:34:51.269: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.17.218.45 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1266 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 01:34:51.269: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:34:51.270: INFO: ExecWithOptions: Clientset creation +Jul 27 01:34:51.270: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1266/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.17.218.45+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jul 27 01:34:52.460: INFO: Found all 1 expected endpoints: [netserver-0] +Jul 27 01:34:52.460: INFO: Going to poll 172.17.230.158 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Jul 27 01:34:52.488: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.17.230.158 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1266 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 01:34:52.488: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:34:52.488: INFO: ExecWithOptions: Clientset creation +Jul 27 01:34:52.488: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1266/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.17.230.158+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jul 27 01:34:53.758: INFO: Found all 1 expected endpoints: [netserver-1] +Jul 27 01:34:53.758: INFO: Going to poll 172.17.225.48 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Jul 27 01:34:53.768: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.17.225.48 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1266 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 01:34:53.768: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:34:53.768: INFO: ExecWithOptions: Clientset creation +Jul 27 01:34:53.769: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1266/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.17.225.48+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jul 27 01:34:54.985: INFO: Found all 1 expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 -Jun 12 20:45:36.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected secret +Jul 27 01:34:54.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] 
[sig-network] Networking test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 -STEP: Destroying namespace "projected-5044" for this suite. 06/12/23 20:45:36.081 +STEP: Destroying namespace "pod-network-test-1266" for this suite. 07/27/23 01:34:54.999 ------------------------------ -• [SLOW TEST] [6.291 seconds] -[sig-storage] Projected secret -test/e2e/common/storage/framework.go:23 - should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:119 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected secret - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:45:29.811 - Jun 12 20:45:29.813: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 20:45:29.816 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:29.877 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:29.888 - [BeforeEach] [sig-storage] Projected secret +• [SLOW TEST] [30.051 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 01:34:24.969 + Jul 27 01:34:24.969: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pod-network-test 07/27/23 01:34:24.97 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:34:25.011 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:34:25.019 + [BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:119 - STEP: Creating secret with name projected-secret-test-9277e0ba-c14c-4353-89d1-672078124fdb 06/12/23 20:45:29.904 - STEP: Creating a pod to test consume secrets 06/12/23 20:45:29.918 - Jun 12 20:45:29.939: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454" in namespace "projected-5044" to be "Succeeded or Failed" - Jun 12 20:45:29.956: INFO: Pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454": Phase="Pending", Reason="", readiness=false. Elapsed: 16.685574ms - Jun 12 20:45:31.963: INFO: Pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023599904s - Jun 12 20:45:33.964: INFO: Pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024563767s - Jun 12 20:45:35.966: INFO: Pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.026540523s - STEP: Saw pod success 06/12/23 20:45:35.966 - Jun 12 20:45:35.967: INFO: Pod "pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454" satisfied condition "Succeeded or Failed" - Jun 12 20:45:35.976: INFO: Trying to get logs from node 10.138.75.112 pod pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454 container secret-volume-test: - STEP: delete the pod 06/12/23 20:45:36.024 - Jun 12 20:45:36.062: INFO: Waiting for pod pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454 to disappear - Jun 12 20:45:36.068: INFO: Pod pod-projected-secrets-bf0b05e3-81bc-401f-a19a-927f0b35c454 no longer exists - [AfterEach] [sig-storage] Projected secret + [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 + STEP: Performing setup for networking test in namespace pod-network-test-1266 07/27/23 01:34:25.028 + STEP: creating a selector 07/27/23 01:34:25.029 + STEP: Creating the service pods in kubernetes 07/27/23 01:34:25.029 + Jul 27 01:34:25.029: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Jul 27 01:34:25.116: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-1266" to be "running and ready" + Jul 27 01:34:25.131: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.344962ms + Jul 27 01:34:25.131: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:34:27.143: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.026624817s + Jul 27 01:34:27.143: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:34:29.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.025104536s + Jul 27 01:34:29.141: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:34:31.140: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.024358739s + Jul 27 01:34:31.140: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:34:33.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.02527627s + Jul 27 01:34:33.141: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:34:35.140: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.024212501s + Jul 27 01:34:35.140: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:34:37.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.02488052s + Jul 27 01:34:37.141: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:34:39.147: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.030964844s + Jul 27 01:34:39.147: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:34:41.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.025096101s + Jul 27 01:34:41.141: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:34:43.147: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.030752514s + Jul 27 01:34:43.147: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:34:45.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.024626816s + Jul 27 01:34:45.141: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:34:47.141: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.025251391s + Jul 27 01:34:47.141: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Jul 27 01:34:47.141: INFO: Pod "netserver-0" satisfied condition "running and ready" + Jul 27 01:34:47.152: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-1266" to be "running and ready" + Jul 27 01:34:47.160: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.612575ms + Jul 27 01:34:47.160: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Jul 27 01:34:47.160: INFO: Pod "netserver-1" satisfied condition "running and ready" + Jul 27 01:34:47.169: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-1266" to be "running and ready" + Jul 27 01:34:47.177: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 7.731222ms + Jul 27 01:34:47.177: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Jul 27 01:34:47.177: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 07/27/23 01:34:47.185 + Jul 27 01:34:47.220: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-1266" to be "running" + Jul 27 01:34:47.230: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 9.581827ms + Jul 27 01:34:49.239: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019003921s + Jul 27 01:34:51.240: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.019587561s + Jul 27 01:34:51.240: INFO: Pod "test-container-pod" satisfied condition "running" + Jul 27 01:34:51.247: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-1266" to be "running" + Jul 27 01:34:51.255: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 7.836577ms + Jul 27 01:34:51.255: INFO: Pod "host-test-container-pod" satisfied condition "running" + Jul 27 01:34:51.262: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Jul 27 01:34:51.262: INFO: Going to poll 172.17.218.45 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Jul 27 01:34:51.269: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.17.218.45 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1266 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 01:34:51.269: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:34:51.270: INFO: ExecWithOptions: Clientset creation + Jul 27 01:34:51.270: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1266/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.17.218.45+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jul 27 01:34:52.460: INFO: Found all 1 expected endpoints: [netserver-0] + Jul 27 01:34:52.460: INFO: Going to poll 172.17.230.158 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Jul 27 01:34:52.488: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.17.230.158 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1266 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 01:34:52.488: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:34:52.488: INFO: ExecWithOptions: Clientset creation + Jul 27 01:34:52.488: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1266/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.17.230.158+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jul 27 01:34:53.758: INFO: Found all 1 expected endpoints: [netserver-1] + Jul 27 01:34:53.758: INFO: Going to poll 172.17.225.48 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Jul 27 01:34:53.768: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.17.225.48 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1266 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 01:34:53.768: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:34:53.768: INFO: ExecWithOptions: Clientset creation + Jul 27 01:34:53.769: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1266/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.17.225.48+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jul 27 01:34:54.985: INFO: Found all 1 expected endpoints: [netserver-2] + [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 - Jun 12 20:45:36.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected secret + Jul 27 01:34:54.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + 
[DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 - STEP: Destroying namespace "projected-5044" for this suite. 06/12/23 20:45:36.081 + STEP: Destroying namespace "pod-network-test-1266" for this suite. 07/27/23 01:34:54.999 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSS ------------------------------ -[sig-storage] EmptyDir volumes - volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:87 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-api-machinery] ResourceQuota + should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:45:36.106 -Jun 12 20:45:36.107: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 20:45:36.111 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:36.159 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:36.17 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 01:34:55.021 +Jul 27 01:34:55.021: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 01:34:55.022 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:34:55.072 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:34:55.08 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:87 -STEP: Creating a pod to test emptydir volume type on tmpfs 06/12/23 20:45:36.181 -W0612 20:45:36.202992 23 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") -Jun 12 20:45:36.203: INFO: Waiting up to 5m0s for pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616" in namespace "emptydir-1784" to be "Succeeded or Failed" -Jun 12 20:45:36.210: INFO: Pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616": Phase="Pending", Reason="", readiness=false. Elapsed: 7.186236ms -Jun 12 20:45:38.218: INFO: Pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015276206s -Jun 12 20:45:40.217: INFO: Pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014132414s -Jun 12 20:45:42.219: INFO: Pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015887724s -STEP: Saw pod success 06/12/23 20:45:42.219 -Jun 12 20:45:42.220: INFO: Pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616" satisfied condition "Succeeded or Failed" -Jun 12 20:45:42.226: INFO: Trying to get logs from node 10.138.75.112 pod pod-b984e357-2f05-4e5b-a878-3ed7cda0c616 container test-container: -STEP: delete the pod 06/12/23 20:45:42.247 -Jun 12 20:45:42.264: INFO: Waiting for pod pod-b984e357-2f05-4e5b-a878-3ed7cda0c616 to disappear -Jun 12 20:45:42.270: INFO: Pod pod-b984e357-2f05-4e5b-a878-3ed7cda0c616 no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[It] should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 +STEP: Creating resourceQuota "e2e-rq-status-tbslq" 07/27/23 01:34:55.096 +Jul 27 01:34:55.218: INFO: Resource quota "e2e-rq-status-tbslq" reports spec: hard cpu limit of 500m +Jul 27 01:34:55.218: INFO: Resource quota "e2e-rq-status-tbslq" reports spec: hard memory limit of 500Mi +STEP: Updating resourceQuota "e2e-rq-status-tbslq" /status 07/27/23 01:34:55.218 +STEP: Confirm /status for "e2e-rq-status-tbslq" resourceQuota via watch 07/27/23 01:34:55.237 +Jul 27 01:34:55.242: INFO: observed resourceQuota "e2e-rq-status-tbslq" in namespace "resourcequota-1088" with hard status: v1.ResourceList(nil) +Jul 27 01:34:55.242: INFO: Found resourceQuota "e2e-rq-status-tbslq" in namespace "resourcequota-1088" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} +Jul 27 01:34:55.242: INFO: ResourceQuota "e2e-rq-status-tbslq" /status was updated +STEP: Patching hard spec values for cpu & memory 07/27/23 01:34:55.249 +Jul 27 01:34:55.264: INFO: Resource quota "e2e-rq-status-tbslq" reports spec: hard cpu limit of 1 +Jul 27 01:34:55.264: INFO: Resource quota "e2e-rq-status-tbslq" reports spec: hard memory limit of 1Gi +STEP: Patching "e2e-rq-status-tbslq" /status 07/27/23 01:34:55.264 +STEP: Confirm /status for "e2e-rq-status-tbslq" resourceQuota via watch 07/27/23 01:34:55.279 +Jul 27 01:34:55.286: INFO: observed resourceQuota "e2e-rq-status-tbslq" in namespace "resourcequota-1088" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} +Jul 27 01:34:55.286: INFO: Found resourceQuota "e2e-rq-status-tbslq" in namespace "resourcequota-1088" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}} +Jul 27 01:34:55.286: INFO: ResourceQuota "e2e-rq-status-tbslq" /status was patched +STEP: Get "e2e-rq-status-tbslq" /status 07/27/23 01:34:55.286 +Jul 27 01:34:55.295: INFO: Resourcequota "e2e-rq-status-tbslq" reports status: hard cpu of 1 +Jul 27 01:34:55.295: INFO: Resourcequota "e2e-rq-status-tbslq" reports status: hard memory of 1Gi +STEP: Repatching "e2e-rq-status-tbslq" /status before 
checking Spec is unchanged 07/27/23 01:34:55.303 +Jul 27 01:34:55.315: INFO: Resourcequota "e2e-rq-status-tbslq" reports status: hard cpu of 2 +Jul 27 01:34:55.315: INFO: Resourcequota "e2e-rq-status-tbslq" reports status: hard memory of 2Gi +Jul 27 01:34:55.319: INFO: Found resourceQuota "e2e-rq-status-tbslq" in namespace "resourcequota-1088" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2147483648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2Gi", Format:"BinarySI"}} +Jul 27 01:35:45.340: INFO: ResourceQuota "e2e-rq-status-tbslq" Spec was unchanged and /status reset +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 20:45:42.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 01:35:45.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-1784" for this suite. 06/12/23 20:45:42.284 +STEP: Destroying namespace "resourcequota-1088" for this suite. 07/27/23 01:35:45.354 ------------------------------ -• [SLOW TEST] [6.198 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:87 +• [SLOW TEST] [50.430 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:45:36.106 - Jun 12 20:45:36.107: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 20:45:36.111 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:36.159 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:36.17 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 01:34:55.021 + Jul 27 01:34:55.021: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 01:34:55.022 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:34:55.072 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:34:55.08 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:87 - STEP: Creating a pod to test emptydir volume type on tmpfs 06/12/23 20:45:36.181 - W0612 20:45:36.202992 23 warnings.go:70] would violate PodSecurity "restricted:latest": 
allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") - Jun 12 20:45:36.203: INFO: Waiting up to 5m0s for pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616" in namespace "emptydir-1784" to be "Succeeded or Failed" - Jun 12 20:45:36.210: INFO: Pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616": Phase="Pending", Reason="", readiness=false. Elapsed: 7.186236ms - Jun 12 20:45:38.218: INFO: Pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015276206s - Jun 12 20:45:40.217: INFO: Pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014132414s - Jun 12 20:45:42.219: INFO: Pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015887724s - STEP: Saw pod success 06/12/23 20:45:42.219 - Jun 12 20:45:42.220: INFO: Pod "pod-b984e357-2f05-4e5b-a878-3ed7cda0c616" satisfied condition "Succeeded or Failed" - Jun 12 20:45:42.226: INFO: Trying to get logs from node 10.138.75.112 pod pod-b984e357-2f05-4e5b-a878-3ed7cda0c616 container test-container: - STEP: delete the pod 06/12/23 20:45:42.247 - Jun 12 20:45:42.264: INFO: Waiting for pod pod-b984e357-2f05-4e5b-a878-3ed7cda0c616 to disappear - Jun 12 20:45:42.270: INFO: Pod pod-b984e357-2f05-4e5b-a878-3ed7cda0c616 no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [It] should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 + STEP: Creating resourceQuota "e2e-rq-status-tbslq" 07/27/23 01:34:55.096 + Jul 27 01:34:55.218: INFO: Resource quota "e2e-rq-status-tbslq" reports spec: hard cpu limit of 500m + Jul 27 01:34:55.218: INFO: Resource quota "e2e-rq-status-tbslq" reports spec: hard memory limit of 500Mi + STEP: Updating resourceQuota "e2e-rq-status-tbslq" /status 07/27/23 01:34:55.218 + STEP: Confirm /status for "e2e-rq-status-tbslq" resourceQuota via watch 07/27/23 01:34:55.237 + Jul 27 01:34:55.242: INFO: observed resourceQuota "e2e-rq-status-tbslq" in namespace "resourcequota-1088" with hard status: v1.ResourceList(nil) + Jul 27 01:34:55.242: INFO: Found resourceQuota "e2e-rq-status-tbslq" in namespace "resourcequota-1088" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} + Jul 27 01:34:55.242: INFO: ResourceQuota "e2e-rq-status-tbslq" /status was updated + STEP: Patching hard spec values for cpu & memory 07/27/23 01:34:55.249 + Jul 27 01:34:55.264: INFO: Resource quota "e2e-rq-status-tbslq" reports spec: hard cpu limit of 1 + Jul 27 01:34:55.264: INFO: Resource quota "e2e-rq-status-tbslq" reports spec: hard memory limit of 1Gi + STEP: Patching "e2e-rq-status-tbslq" /status 07/27/23 01:34:55.264 + STEP: Confirm /status for "e2e-rq-status-tbslq" resourceQuota via watch 07/27/23 01:34:55.279 + Jul 27 01:34:55.286: INFO: observed resourceQuota "e2e-rq-status-tbslq" in 
namespace "resourcequota-1088" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} + Jul 27 01:34:55.286: INFO: Found resourceQuota "e2e-rq-status-tbslq" in namespace "resourcequota-1088" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}} + Jul 27 01:34:55.286: INFO: ResourceQuota "e2e-rq-status-tbslq" /status was patched + STEP: Get "e2e-rq-status-tbslq" /status 07/27/23 01:34:55.286 + Jul 27 01:34:55.295: INFO: Resourcequota "e2e-rq-status-tbslq" reports status: hard cpu of 1 + Jul 27 01:34:55.295: INFO: Resourcequota "e2e-rq-status-tbslq" reports status: hard memory of 1Gi + STEP: Repatching "e2e-rq-status-tbslq" /status before checking Spec is unchanged 07/27/23 01:34:55.303 + Jul 27 01:34:55.315: INFO: Resourcequota "e2e-rq-status-tbslq" reports status: hard cpu of 2 + Jul 27 01:34:55.315: INFO: Resourcequota "e2e-rq-status-tbslq" reports status: hard memory of 2Gi + Jul 27 01:34:55.319: INFO: Found resourceQuota "e2e-rq-status-tbslq" in namespace "resourcequota-1088" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2147483648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2Gi", Format:"BinarySI"}} + Jul 27 01:35:45.340: INFO: ResourceQuota "e2e-rq-status-tbslq" Spec was unchanged and /status reset + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 20:45:42.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 01:35:45.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-1784" for this suite. 06/12/23 20:45:42.284 + STEP: Destroying namespace "resourcequota-1088" for this suite. 
07/27/23 01:35:45.354 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] ConfigMap - should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:423 -[BeforeEach] [sig-storage] ConfigMap +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:45:42.314 -Jun 12 20:45:42.314: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 20:45:42.316 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:42.374 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:42.386 -[BeforeEach] [sig-storage] ConfigMap +STEP: Creating a kubernetes client 07/27/23 01:35:45.452 +Jul 27 01:35:45.452: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 01:35:45.453 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:35:45.517 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:35:45.527 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:423 -STEP: Creating configMap with name configmap-test-volume-7d10844a-7470-4a2b-85ab-d7c50aa606b9 06/12/23 20:45:42.428 -STEP: Creating a pod to test consume configMaps 06/12/23 20:45:42.474 -Jun 12 20:45:42.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde" in namespace "configmap-8171" to be "Succeeded or Failed" -Jun 12 20:45:42.570: INFO: Pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde": Phase="Pending", Reason="", readiness=false. Elapsed: 15.444411ms -Jun 12 20:45:44.578: INFO: Pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023205709s -Jun 12 20:45:46.577: INFO: Pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022901611s -Jun 12 20:45:48.580: INFO: Pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025677937s -STEP: Saw pod success 06/12/23 20:45:48.58 -Jun 12 20:45:48.580: INFO: Pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde" satisfied condition "Succeeded or Failed" -Jun 12 20:45:48.596: INFO: Trying to get logs from node 10.138.75.112 pod pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde container configmap-volume-test: -STEP: delete the pod 06/12/23 20:45:48.615 -Jun 12 20:45:48.642: INFO: Waiting for pod pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde to disappear -Jun 12 20:45:48.659: INFO: Pod pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde no longer exists -[AfterEach] [sig-storage] ConfigMap +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-4332 07/27/23 01:35:45.537 +STEP: changing the ExternalName service to type=NodePort 07/27/23 01:35:45.559 +STEP: creating replication controller externalname-service in namespace services-4332 07/27/23 01:35:45.642 +I0727 01:35:45.661543 20 runners.go:193] Created replication controller with name: externalname-service, namespace: services-4332, replica count: 2 +I0727 01:35:48.713274 20 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jul 27 01:35:48.713: INFO: Creating new exec pod +Jul 27 01:35:48.736: INFO: Waiting up to 5m0s for pod "execpodh59s4" in namespace "services-4332" to be "running" +Jul 27 01:35:48.743: INFO: Pod "execpodh59s4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.509937ms +Jul 27 01:35:50.753: INFO: Pod "execpodh59s4": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.016921459s +Jul 27 01:35:50.753: INFO: Pod "execpodh59s4" satisfied condition "running" +Jul 27 01:35:51.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-4332 exec execpodh59s4 -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' +Jul 27 01:35:52.089: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Jul 27 01:35:52.089: INFO: stdout: "" +Jul 27 01:35:52.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-4332 exec execpodh59s4 -- /bin/sh -x -c nc -v -z -w 2 172.21.25.96 80' +Jul 27 01:35:52.282: INFO: stderr: "+ nc -v -z -w 2 172.21.25.96 80\nConnection to 172.21.25.96 80 port [tcp/http] succeeded!\n" +Jul 27 01:35:52.282: INFO: stdout: "" +Jul 27 01:35:52.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-4332 exec execpodh59s4 -- /bin/sh -x -c nc -v -z -w 2 10.245.128.17 31428' +Jul 27 01:35:52.493: INFO: stderr: "+ nc -v -z -w 2 10.245.128.17 31428\nConnection to 10.245.128.17 31428 port [tcp/*] succeeded!\n" +Jul 27 01:35:52.493: INFO: stdout: "" +Jul 27 01:35:52.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-4332 exec execpodh59s4 -- /bin/sh -x -c nc -v -z -w 2 10.245.128.19 31428' +Jul 27 01:35:52.763: INFO: stderr: "+ nc -v -z -w 2 10.245.128.19 31428\nConnection to 10.245.128.19 31428 port [tcp/*] succeeded!\n" +Jul 27 01:35:52.763: INFO: stdout: "" +Jul 27 01:35:52.763: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 20:45:48.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] ConfigMap +Jul 27 01:35:52.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-8171" for this suite. 06/12/23 20:45:48.673 +STEP: Destroying namespace "services-4332" for this suite. 
07/27/23 01:35:52.858 ------------------------------ -• [SLOW TEST] [6.379 seconds] -[sig-storage] ConfigMap -test/e2e/common/storage/framework.go:23 - should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:423 +• [SLOW TEST] [7.429 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] ConfigMap + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:45:42.314 - Jun 12 20:45:42.314: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 20:45:42.316 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:42.374 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:42.386 - [BeforeEach] [sig-storage] ConfigMap + STEP: Creating a kubernetes client 07/27/23 01:35:45.452 + Jul 27 01:35:45.452: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 01:35:45.453 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:35:45.517 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:35:45.527 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:423 - STEP: Creating configMap with name configmap-test-volume-7d10844a-7470-4a2b-85ab-d7c50aa606b9 06/12/23 20:45:42.428 - STEP: Creating a pod to test consume configMaps 06/12/23 20:45:42.474 - Jun 12 20:45:42.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde" in namespace "configmap-8171" to be "Succeeded or Failed" - Jun 12 20:45:42.570: INFO: Pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde": Phase="Pending", Reason="", readiness=false. Elapsed: 15.444411ms - Jun 12 20:45:44.578: INFO: Pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023205709s - Jun 12 20:45:46.577: INFO: Pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022901611s - Jun 12 20:45:48.580: INFO: Pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025677937s - STEP: Saw pod success 06/12/23 20:45:48.58 - Jun 12 20:45:48.580: INFO: Pod "pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde" satisfied condition "Succeeded or Failed" - Jun 12 20:45:48.596: INFO: Trying to get logs from node 10.138.75.112 pod pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde container configmap-volume-test: - STEP: delete the pod 06/12/23 20:45:48.615 - Jun 12 20:45:48.642: INFO: Waiting for pod pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde to disappear - Jun 12 20:45:48.659: INFO: Pod pod-configmaps-5a093a8b-e38c-42d0-bf71-666d6b9b4cde no longer exists - [AfterEach] [sig-storage] ConfigMap + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 + STEP: creating a service externalname-service with the type=ExternalName in namespace services-4332 07/27/23 01:35:45.537 + STEP: changing the ExternalName service to type=NodePort 07/27/23 01:35:45.559 + STEP: creating replication controller externalname-service in namespace services-4332 07/27/23 01:35:45.642 + I0727 01:35:45.661543 20 runners.go:193] Created replication controller with name: externalname-service, namespace: services-4332, replica count: 2 + I0727 01:35:48.713274 20 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jul 27 01:35:48.713: INFO: Creating new exec pod + Jul 27 01:35:48.736: INFO: Waiting up to 5m0s for pod "execpodh59s4" in namespace "services-4332" to be "running" + Jul 27 01:35:48.743: INFO: Pod "execpodh59s4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.509937ms + Jul 27 01:35:50.753: INFO: Pod "execpodh59s4": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.016921459s + Jul 27 01:35:50.753: INFO: Pod "execpodh59s4" satisfied condition "running" + Jul 27 01:35:51.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-4332 exec execpodh59s4 -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' + Jul 27 01:35:52.089: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Jul 27 01:35:52.089: INFO: stdout: "" + Jul 27 01:35:52.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-4332 exec execpodh59s4 -- /bin/sh -x -c nc -v -z -w 2 172.21.25.96 80' + Jul 27 01:35:52.282: INFO: stderr: "+ nc -v -z -w 2 172.21.25.96 80\nConnection to 172.21.25.96 80 port [tcp/http] succeeded!\n" + Jul 27 01:35:52.282: INFO: stdout: "" + Jul 27 01:35:52.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-4332 exec execpodh59s4 -- /bin/sh -x -c nc -v -z -w 2 10.245.128.17 31428' + Jul 27 01:35:52.493: INFO: stderr: "+ nc -v -z -w 2 10.245.128.17 31428\nConnection to 10.245.128.17 31428 port [tcp/*] succeeded!\n" + Jul 27 01:35:52.493: INFO: stdout: "" + Jul 27 01:35:52.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-4332 exec execpodh59s4 -- /bin/sh -x -c nc -v -z -w 2 10.245.128.19 31428' + Jul 27 01:35:52.763: INFO: stderr: "+ nc -v -z -w 2 10.245.128.19 31428\nConnection to 10.245.128.19 31428 port [tcp/*] succeeded!\n" + Jul 27 01:35:52.763: INFO: stdout: "" + Jul 27 01:35:52.763: INFO: Cleaning up the ExternalName to NodePort test service + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 20:45:48.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] ConfigMap + Jul 27 01:35:52.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-8171" for this suite. 06/12/23 20:45:48.673 + STEP: Destroying namespace "services-4332" for this suite. 
07/27/23 01:35:52.858 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSS +SSSSSSSSSS ------------------------------ -[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] - Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] - test/e2e/apps/statefulset.go:587 -[BeforeEach] [sig-apps] StatefulSet +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:45:48.697 -Jun 12 20:45:48.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename statefulset 06/12/23 20:45:48.699 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:48.756 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:48.774 -[BeforeEach] [sig-apps] StatefulSet +STEP: Creating a kubernetes client 07/27/23 01:35:52.882 +Jul 27 01:35:52.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 01:35:52.883 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:35:52.925 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:35:52.934 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 -[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 -STEP: Creating service test in namespace statefulset-3156 06/12/23 20:45:48.784 -[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] - test/e2e/apps/statefulset.go:587 -STEP: Initializing watcher for selector baz=blah,foo=bar 06/12/23 20:45:48.801 -STEP: Creating stateful set ss in namespace statefulset-3156 06/12/23 20:45:48.809 -STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3156 06/12/23 20:45:48.854 -Jun 12 20:45:48.863: INFO: Found 0 stateful pods, waiting for 1 -Jun 12 20:45:58.871: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true -STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 06/12/23 20:45:58.872 -Jun 12 20:45:58.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' -Jun 12 20:45:59.500: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" -Jun 12 20:45:59.500: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" -Jun 12 20:45:59.500: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - -Jun 12 20:45:59.507: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true -Jun 12 20:46:09.515: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false -Jun 12 20:46:09.515: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 20:46:09.559: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999995465s 
-Jun 12 20:46:10.568: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990282403s -Jun 12 20:46:11.575: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982439566s -Jun 12 20:46:12.584: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.973453003s -Jun 12 20:46:13.592: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.965545631s -Jun 12 20:46:14.601: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.956638525s -Jun 12 20:46:15.609: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.948506745s -Jun 12 20:46:16.616: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.941046416s -Jun 12 20:46:17.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.933250737s -Jun 12 20:46:18.635: INFO: Verifying statefulset ss doesn't scale past 1 for another 923.423822ms -STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3156 06/12/23 20:46:19.635 -Jun 12 20:46:19.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' -Jun 12 20:46:20.086: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" -Jun 12 20:46:20.086: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" -Jun 12 20:46:20.086: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - -Jun 12 20:46:20.095: INFO: Found 1 stateful pods, waiting for 3 -Jun 12 20:46:30.109: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 20:46:30.109: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 20:46:30.109: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true -STEP: Verifying that stateful set ss was scaled up in order 06/12/23 20:46:30.109 -STEP: Scale down will halt with unhealthy stateful pod 06/12/23 20:46:30.109 -Jun 12 20:46:30.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' -Jun 12 20:46:30.667: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" -Jun 12 20:46:30.667: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" -Jun 12 20:46:30.667: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - -Jun 12 20:46:30.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' -Jun 12 20:46:31.162: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" -Jun 12 20:46:31.162: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" -Jun 12 20:46:31.162: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - -Jun 12 20:46:31.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' -Jun 12 20:46:31.627: INFO: 
stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" -Jun 12 20:46:31.627: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" -Jun 12 20:46:31.628: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - -Jun 12 20:46:31.628: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 20:46:31.637: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 -Jun 12 20:46:41.662: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false -Jun 12 20:46:41.662: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false -Jun 12 20:46:41.662: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false -Jun 12 20:46:41.692: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999997815s -Jun 12 20:46:42.702: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992879187s -Jun 12 20:46:43.713: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982403086s -Jun 12 20:46:44.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97193703s -Jun 12 20:46:45.732: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.962953081s -Jun 12 20:46:46.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952443873s -Jun 12 20:46:47.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.942644727s -Jun 12 20:46:48.762: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.933243146s -Jun 12 20:46:49.783: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.923147817s -Jun 12 20:46:50.792: INFO: Verifying statefulset ss doesn't scale past 3 for another 902.236313ms -STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3156 06/12/23 20:46:51.793 -Jun 12 20:46:51.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' -Jun 12 20:46:52.238: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" -Jun 12 20:46:52.238: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" -Jun 12 20:46:52.238: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - -Jun 12 20:46:52.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' -Jun 12 20:46:52.781: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" -Jun 12 20:46:52.781: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" -Jun 12 20:46:52.781: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - -Jun 12 20:46:52.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' -Jun 12 20:46:53.256: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" -Jun 12 20:46:53.256: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" -Jun 12 20:46:53.256: INFO: 
stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - -Jun 12 20:46:53.256: INFO: Scaling statefulset ss to 0 -STEP: Verifying that stateful set ss was scaled down in reverse order 06/12/23 20:47:03.299 -[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 -Jun 12 20:47:03.300: INFO: Deleting all statefulset in ns statefulset-3156 -Jun 12 20:47:03.310: INFO: Scaling statefulset ss to 0 -Jun 12 20:47:03.337: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 20:47:03.348: INFO: Deleting statefulset ss -[AfterEach] [sig-apps] StatefulSet +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 01:35:53.045 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 01:35:53.667 +STEP: Deploying the webhook pod 07/27/23 01:35:53.708 +STEP: Wait for the deployment to be ready 07/27/23 01:35:53.747 +Jul 27 01:35:53.776: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 01:35:55.805 +STEP: Verifying the service has paired with the endpoint 07/27/23 01:35:55.841 +Jul 27 01:35:56.841: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 +Jul 27 01:35:56.853: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-994-crds.webhook.example.com via the AdmissionRegistration API 07/27/23 01:35:57.389 +STEP: Creating a custom resource that should be mutated by the webhook 07/27/23 01:35:57.468 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 20:47:03.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] StatefulSet +Jul 27 01:36:00.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "statefulset-3156" for this suite. 06/12/23 20:47:03.402 +STEP: Destroying namespace "webhook-2102" for this suite. 07/27/23 01:36:00.39 +STEP: Destroying namespace "webhook-2102-markers" for this suite. 
07/27/23 01:36:00.474 ------------------------------ -• [SLOW TEST] [74.726 seconds] -[sig-apps] StatefulSet -test/e2e/apps/framework.go:23 - Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:103 - Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] - test/e2e/apps/statefulset.go:587 +• [SLOW TEST] [7.623 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] StatefulSet + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:45:48.697 - Jun 12 20:45:48.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename statefulset 06/12/23 20:45:48.699 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:45:48.756 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:45:48.774 - [BeforeEach] [sig-apps] StatefulSet + STEP: Creating a kubernetes client 07/27/23 01:35:52.882 + Jul 27 01:35:52.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 01:35:52.883 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:35:52.925 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:35:52.934 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 - [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 - STEP: Creating service test in namespace statefulset-3156 06/12/23 20:45:48.784 - [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] - test/e2e/apps/statefulset.go:587 - STEP: Initializing watcher for selector baz=blah,foo=bar 06/12/23 20:45:48.801 - STEP: Creating stateful set ss in namespace statefulset-3156 06/12/23 20:45:48.809 - STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3156 06/12/23 20:45:48.854 - Jun 12 20:45:48.863: INFO: Found 0 stateful pods, waiting for 1 - Jun 12 20:45:58.871: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true - STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 06/12/23 20:45:58.872 - Jun 12 20:45:58.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' - Jun 12 20:45:59.500: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" - Jun 12 20:45:59.500: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" - Jun 12 20:45:59.500: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - - Jun 12 20:45:59.507: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true - Jun 12 20:46:09.515: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false - Jun 12 20:46:09.515: INFO: Waiting 
for statefulset status.replicas updated to 0 - Jun 12 20:46:09.559: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999995465s - Jun 12 20:46:10.568: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990282403s - Jun 12 20:46:11.575: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982439566s - Jun 12 20:46:12.584: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.973453003s - Jun 12 20:46:13.592: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.965545631s - Jun 12 20:46:14.601: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.956638525s - Jun 12 20:46:15.609: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.948506745s - Jun 12 20:46:16.616: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.941046416s - Jun 12 20:46:17.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.933250737s - Jun 12 20:46:18.635: INFO: Verifying statefulset ss doesn't scale past 1 for another 923.423822ms - STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3156 06/12/23 20:46:19.635 - Jun 12 20:46:19.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' - Jun 12 20:46:20.086: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" - Jun 12 20:46:20.086: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" - Jun 12 20:46:20.086: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - - Jun 12 20:46:20.095: INFO: Found 1 stateful pods, waiting for 3 - Jun 12 20:46:30.109: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 20:46:30.109: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 20:46:30.109: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true - STEP: Verifying that stateful set ss was scaled up in order 06/12/23 20:46:30.109 - STEP: Scale down will halt with unhealthy stateful pod 06/12/23 20:46:30.109 - Jun 12 20:46:30.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' - Jun 12 20:46:30.667: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" - Jun 12 20:46:30.667: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" - Jun 12 20:46:30.667: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - - Jun 12 20:46:30.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' - Jun 12 20:46:31.162: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" - Jun 12 20:46:31.162: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" - Jun 12 20:46:31.162: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - - Jun 12 20:46:31.163: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' - Jun 12 20:46:31.627: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" - Jun 12 20:46:31.627: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" - Jun 12 20:46:31.628: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - - Jun 12 20:46:31.628: INFO: Waiting for statefulset status.replicas updated to 0 - Jun 12 20:46:31.637: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 - Jun 12 20:46:41.662: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false - Jun 12 20:46:41.662: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false - Jun 12 20:46:41.662: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false - Jun 12 20:46:41.692: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999997815s - Jun 12 20:46:42.702: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992879187s - Jun 12 20:46:43.713: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982403086s - Jun 12 20:46:44.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97193703s - Jun 12 20:46:45.732: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.962953081s - Jun 12 20:46:46.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952443873s - Jun 12 20:46:47.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.942644727s - Jun 12 20:46:48.762: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.933243146s - Jun 12 20:46:49.783: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.923147817s - Jun 12 20:46:50.792: INFO: Verifying statefulset ss doesn't scale past 3 for another 902.236313ms - STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3156 06/12/23 20:46:51.793 - Jun 12 20:46:51.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' - Jun 12 20:46:52.238: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" - Jun 12 20:46:52.238: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" - Jun 12 20:46:52.238: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - - Jun 12 20:46:52.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' - Jun 12 20:46:52.781: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" - Jun 12 20:46:52.781: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" - Jun 12 20:46:52.781: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - - Jun 12 20:46:52.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-3156 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' - Jun 12 
20:46:53.256: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" - Jun 12 20:46:53.256: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" - Jun 12 20:46:53.256: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - - Jun 12 20:46:53.256: INFO: Scaling statefulset ss to 0 - STEP: Verifying that stateful set ss was scaled down in reverse order 06/12/23 20:47:03.299 - [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 - Jun 12 20:47:03.300: INFO: Deleting all statefulset in ns statefulset-3156 - Jun 12 20:47:03.310: INFO: Scaling statefulset ss to 0 - Jun 12 20:47:03.337: INFO: Waiting for statefulset status.replicas updated to 0 - Jun 12 20:47:03.348: INFO: Deleting statefulset ss - [AfterEach] [sig-apps] StatefulSet + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 01:35:53.045 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 01:35:53.667 + STEP: Deploying the webhook pod 07/27/23 01:35:53.708 + STEP: Wait for the deployment to be ready 07/27/23 01:35:53.747 + Jul 27 01:35:53.776: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 01:35:55.805 + STEP: Verifying the service has paired with the endpoint 07/27/23 01:35:55.841 + Jul 27 01:35:56.841: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 + Jul 27 01:35:56.853: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-994-crds.webhook.example.com via the AdmissionRegistration API 07/27/23 01:35:57.389 + STEP: Creating a custom resource that should be mutated by the webhook 07/27/23 01:35:57.468 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 20:47:03.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] StatefulSet + Jul 27 01:36:00.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "statefulset-3156" for this suite. 06/12/23 20:47:03.402 + STEP: Destroying namespace "webhook-2102" for this suite. 07/27/23 01:36:00.39 + STEP: Destroying namespace "webhook-2102-markers" for this suite. 
07/27/23 01:36:00.474 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] EmptyDir volumes - should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:217 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/apimachinery/resource_quota.go:160 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:47:03.424 -Jun 12 20:47:03.424: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 20:47:03.43 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:03.484 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:47:03.529 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 01:36:00.506 +Jul 27 01:36:00.506: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 01:36:00.507 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:00.593 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:00.602 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:217 -STEP: Creating a pod to test emptydir 0777 on node default medium 06/12/23 20:47:03.557 -Jun 12 20:47:03.580: INFO: Waiting up to 5m0s for pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f" in namespace "emptydir-4505" to be "Succeeded or Failed" -Jun 12 20:47:03.590: INFO: Pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.385995ms -Jun 12 20:47:05.597: INFO: Pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016572822s -Jun 12 20:47:07.599: INFO: Pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018172345s -Jun 12 20:47:09.600: INFO: Pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019351615s -STEP: Saw pod success 06/12/23 20:47:09.6 -Jun 12 20:47:09.600: INFO: Pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f" satisfied condition "Succeeded or Failed" -Jun 12 20:47:09.617: INFO: Trying to get logs from node 10.138.75.70 pod pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f container test-container: -STEP: delete the pod 06/12/23 20:47:09.704 -Jun 12 20:47:09.726: INFO: Waiting for pod pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f to disappear -Jun 12 20:47:09.733: INFO: Pod pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:160 +STEP: Discovering how many secrets are in namespace by default 07/27/23 01:36:00.61 +STEP: Counting existing ResourceQuota 07/27/23 01:36:06.629 +STEP: Creating a ResourceQuota 07/27/23 01:36:11.638 +STEP: Ensuring resource quota status is calculated 07/27/23 01:36:11.651 +STEP: Creating a Secret 07/27/23 01:36:13.661 +STEP: Ensuring resource quota status captures secret creation 07/27/23 01:36:13.685 +STEP: Deleting a secret 07/27/23 01:36:15.695 +STEP: Ensuring resource quota status released usage 07/27/23 01:36:15.711 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 20:47:09.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 01:36:17.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-4505" for this suite. 06/12/23 20:47:09.749 +STEP: Destroying namespace "resourcequota-6958" for this suite. 07/27/23 01:36:17.735 ------------------------------ -• [SLOW TEST] [6.347 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:217 +• [SLOW TEST] [17.279 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a secret. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:160 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:47:03.424 - Jun 12 20:47:03.424: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 20:47:03.43 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:03.484 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:47:03.529 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 01:36:00.506 + Jul 27 01:36:00.506: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 01:36:00.507 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:00.593 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:00.602 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:217 - STEP: Creating a pod to test emptydir 0777 on node default medium 06/12/23 20:47:03.557 - Jun 12 20:47:03.580: INFO: Waiting up to 5m0s for pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f" in namespace "emptydir-4505" to be "Succeeded or Failed" - Jun 12 20:47:03.590: INFO: Pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.385995ms - Jun 12 20:47:05.597: INFO: Pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016572822s - Jun 12 20:47:07.599: INFO: Pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018172345s - Jun 12 20:47:09.600: INFO: Pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019351615s - STEP: Saw pod success 06/12/23 20:47:09.6 - Jun 12 20:47:09.600: INFO: Pod "pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f" satisfied condition "Succeeded or Failed" - Jun 12 20:47:09.617: INFO: Trying to get logs from node 10.138.75.70 pod pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f container test-container: - STEP: delete the pod 06/12/23 20:47:09.704 - Jun 12 20:47:09.726: INFO: Waiting for pod pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f to disappear - Jun 12 20:47:09.733: INFO: Pod pod-1395403c-fd78-4c39-8403-b7ee5c6c4f7f no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:160 + STEP: Discovering how many secrets are in namespace by default 07/27/23 01:36:00.61 + STEP: Counting existing ResourceQuota 07/27/23 01:36:06.629 + STEP: Creating a ResourceQuota 07/27/23 01:36:11.638 + STEP: Ensuring resource quota status is calculated 07/27/23 01:36:11.651 + STEP: Creating a Secret 07/27/23 01:36:13.661 + STEP: Ensuring resource quota status captures secret creation 07/27/23 01:36:13.685 + STEP: Deleting a secret 07/27/23 01:36:15.695 + STEP: Ensuring resource quota status released usage 07/27/23 01:36:15.711 + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 20:47:09.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 01:36:17.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-4505" for this suite. 06/12/23 20:47:09.749 + STEP: Destroying namespace "resourcequota-6958" for this suite. 07/27/23 01:36:17.735 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Namespaces [Serial] - should ensure that all pods are removed when a namespace is deleted [Conformance] - test/e2e/apimachinery/namespace.go:243 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +[sig-node] RuntimeClass + should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 +[BeforeEach] [sig-node] RuntimeClass set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:47:09.771 -Jun 12 20:47:09.772: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename namespaces 06/12/23 20:47:09.774 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:09.848 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:47:09.863 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +STEP: Creating a kubernetes client 07/27/23 01:36:17.786 +Jul 27 01:36:17.786: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename runtimeclass 07/27/23 01:36:17.786 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:17.829 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:17.838 +[BeforeEach] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:31 -[It] should ensure that all pods are removed when a namespace is deleted [Conformance] - test/e2e/apimachinery/namespace.go:243 -STEP: Creating a test namespace 06/12/23 20:47:09.873 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:09.943 -STEP: Creating a pod in the namespace 06/12/23 20:47:09.956 -STEP: Waiting for the pod to have running status 06/12/23 20:47:09.984 -Jun 12 20:47:09.984: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-9403" to be "running" -Jun 12 20:47:09.994: INFO: Pod "test-pod": 
Phase="Pending", Reason="", readiness=false. Elapsed: 10.309494ms -Jun 12 20:47:12.002: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017918895s -Jun 12 20:47:14.002: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.017948114s -Jun 12 20:47:14.002: INFO: Pod "test-pod" satisfied condition "running" -STEP: Deleting the namespace 06/12/23 20:47:14.002 -STEP: Waiting for the namespace to be removed. 06/12/23 20:47:14.027 -STEP: Recreating the namespace 06/12/23 20:47:27.038 -STEP: Verifying there are no pods in the namespace 06/12/23 20:47:27.179 -[AfterEach] [sig-api-machinery] Namespaces [Serial] +[It] should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 +STEP: Deleting RuntimeClass runtimeclass-2787-delete-me 07/27/23 01:36:17.864 +STEP: Waiting for the RuntimeClass to disappear 07/27/23 01:36:17.912 +[AfterEach] [sig-node] RuntimeClass test/e2e/framework/node/init/init.go:32 -Jun 12 20:47:27.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +Jul 27 01:36:17.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-node] RuntimeClass dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-node] RuntimeClass tear down framework | framework.go:193 -STEP: Destroying namespace "namespaces-5549" for this suite. 06/12/23 20:47:27.208 -STEP: Destroying namespace "nsdeletetest-9403" for this suite. 06/12/23 20:47:27.231 -Jun 12 20:47:27.247: INFO: Namespace nsdeletetest-9403 was already deleted -STEP: Destroying namespace "nsdeletetest-6459" for this suite. 06/12/23 20:47:27.247 +STEP: Destroying namespace "runtimeclass-2787" for this suite. 
07/27/23 01:36:17.958 ------------------------------ -• [SLOW TEST] [17.499 seconds] -[sig-api-machinery] Namespaces [Serial] -test/e2e/apimachinery/framework.go:23 - should ensure that all pods are removed when a namespace is deleted [Conformance] - test/e2e/apimachinery/namespace.go:243 +• [0.194 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + [BeforeEach] [sig-node] RuntimeClass set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:47:09.771 - Jun 12 20:47:09.772: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename namespaces 06/12/23 20:47:09.774 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:09.848 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:47:09.863 - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + STEP: Creating a kubernetes client 07/27/23 01:36:17.786 + Jul 27 01:36:17.786: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename runtimeclass 07/27/23 01:36:17.786 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:17.829 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:17.838 + [BeforeEach] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:31 - [It] should ensure that all pods are removed when a namespace is deleted [Conformance] - test/e2e/apimachinery/namespace.go:243 - STEP: Creating a test namespace 06/12/23 20:47:09.873 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:09.943 - STEP: Creating a pod in the namespace 06/12/23 20:47:09.956 - STEP: Waiting for the pod to have running status 06/12/23 20:47:09.984 - Jun 12 20:47:09.984: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-9403" to be "running" - Jun 12 20:47:09.994: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.309494ms - Jun 12 20:47:12.002: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017918895s - Jun 12 20:47:14.002: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.017948114s - Jun 12 20:47:14.002: INFO: Pod "test-pod" satisfied condition "running" - STEP: Deleting the namespace 06/12/23 20:47:14.002 - STEP: Waiting for the namespace to be removed. 
06/12/23 20:47:14.027 - STEP: Recreating the namespace 06/12/23 20:47:27.038 - STEP: Verifying there are no pods in the namespace 06/12/23 20:47:27.179 - [AfterEach] [sig-api-machinery] Namespaces [Serial] + [It] should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 + STEP: Deleting RuntimeClass runtimeclass-2787-delete-me 07/27/23 01:36:17.864 + STEP: Waiting for the RuntimeClass to disappear 07/27/23 01:36:17.912 + [AfterEach] [sig-node] RuntimeClass test/e2e/framework/node/init/init.go:32 - Jun 12 20:47:27.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + Jul 27 01:36:17.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-node] RuntimeClass dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-node] RuntimeClass tear down framework | framework.go:193 - STEP: Destroying namespace "namespaces-5549" for this suite. 06/12/23 20:47:27.208 - STEP: Destroying namespace "nsdeletetest-9403" for this suite. 06/12/23 20:47:27.231 - Jun 12 20:47:27.247: INFO: Namespace nsdeletetest-9403 was already deleted - STEP: Destroying namespace "nsdeletetest-6459" for this suite. 06/12/23 20:47:27.247 + STEP: Destroying namespace "runtimeclass-2787" for this suite. 07/27/23 01:36:17.958 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] ResourceQuota - should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] - test/e2e/apimachinery/resource_quota.go:75 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 +[BeforeEach] [sig-api-machinery] Watchers set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:47:27.277 -Jun 12 20:47:27.277: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 20:47:27.279 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:27.339 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:47:27.347 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 01:36:17.982 +Jul 27 01:36:17.982: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename watch 07/27/23 01:36:17.983 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:18.023 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:18.031 +[BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:31 -[It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:75 -STEP: Counting existing ResourceQuota 06/12/23 20:47:27.362 -STEP: Creating a ResourceQuota 06/12/23 20:47:32.442 -STEP: Ensuring resource quota status is calculated 06/12/23 20:47:32.554 -[AfterEach] [sig-api-machinery] ResourceQuota +[It] should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 +STEP: getting a starting resourceVersion 07/27/23 01:36:18.04 +STEP: starting a background goroutine to produce watch events 07/27/23 01:36:18.057 +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 07/27/23 01:36:18.057 +[AfterEach] [sig-api-machinery] Watchers test/e2e/framework/node/init/init.go:32 -Jun 12 20:47:34.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 01:36:20.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-api-machinery] Watchers dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-api-machinery] Watchers tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-2649" for this suite. 06/12/23 20:47:34.747 +STEP: Destroying namespace "watch-7721" for this suite. 07/27/23 01:36:20.854 ------------------------------ -• [SLOW TEST] [7.501 seconds] -[sig-api-machinery] ResourceQuota +• [2.925 seconds] +[sig-api-machinery] Watchers test/e2e/apimachinery/framework.go:23 - should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] - test/e2e/apimachinery/resource_quota.go:75 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:47:27.277 - Jun 12 20:47:27.277: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 20:47:27.279 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:27.339 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:47:27.347 - [BeforeEach] [sig-api-machinery] ResourceQuota - test/e2e/framework/metrics/init/init.go:31 - [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] - test/e2e/apimachinery/resource_quota.go:75 - STEP: Counting existing ResourceQuota 06/12/23 20:47:27.362 - STEP: Creating a ResourceQuota 06/12/23 20:47:32.442 - STEP: Ensuring resource quota status is calculated 06/12/23 20:47:32.554 - [AfterEach] [sig-api-machinery] ResourceQuota - test/e2e/framework/node/init/init.go:32 - Jun 12 20:47:34.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota - tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-2649" for this suite. 
06/12/23 20:47:34.747 - << End Captured GinkgoWriter Output ------------------------------- -SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-node] Probing container - with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:72 -[BeforeEach] [sig-node] Probing container - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:47:34.79 -Jun 12 20:47:34.790: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-probe 06/12/23 20:47:34.793 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:34.846 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:47:34.906 -[BeforeEach] [sig-node] Probing container - test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 -[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:72 -Jun 12 20:47:35.021: INFO: Waiting up to 5m0s for pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c" in namespace "container-probe-5627" to be "running and ready" -Jun 12 20:47:35.032: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.003086ms -Jun 12 20:47:35.033: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:47:37.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02023542s -Jun 12 20:47:37.042: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:47:39.041: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 4.019447943s -Jun 12 20:47:39.041: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) -Jun 12 20:47:41.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 6.020748963s -Jun 12 20:47:41.042: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) -Jun 12 20:47:43.044: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 8.022223597s -Jun 12 20:47:43.044: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) -Jun 12 20:47:45.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 10.020101213s -Jun 12 20:47:45.042: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) -Jun 12 20:47:47.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 12.020496808s -Jun 12 20:47:47.042: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) -Jun 12 20:47:49.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.020581388s -Jun 12 20:47:49.042: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) -Jun 12 20:47:51.044: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 16.022501911s -Jun 12 20:47:51.044: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) -Jun 12 20:47:53.043: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 18.020976512s -Jun 12 20:47:53.043: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) -Jun 12 20:47:55.041: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 20.019340856s -Jun 12 20:47:55.041: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) -Jun 12 20:47:57.041: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=true. Elapsed: 22.019875628s -Jun 12 20:47:57.041: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = true) -Jun 12 20:47:57.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c" satisfied condition "running and ready" -Jun 12 20:47:57.048: INFO: Container started at 2023-06-12 20:47:36 +0000 UTC, pod became ready at 2023-06-12 20:47:55 +0000 UTC -[AfterEach] [sig-node] Probing container - test/e2e/framework/node/init/init.go:32 -Jun 12 20:47:57.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Probing container - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Probing container - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Probing container - tear down framework | framework.go:193 -STEP: Destroying namespace "container-probe-5627" for this suite. 
06/12/23 20:47:57.061 ------------------------------- -• [SLOW TEST] [22.296 seconds] -[sig-node] Probing container -test/e2e/common/node/framework.go:23 - with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:72 + should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Probing container + [BeforeEach] [sig-api-machinery] Watchers set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:47:34.79 - Jun 12 20:47:34.790: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-probe 06/12/23 20:47:34.793 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:34.846 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:47:34.906 - [BeforeEach] [sig-node] Probing container + STEP: Creating a kubernetes client 07/27/23 01:36:17.982 + Jul 27 01:36:17.982: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename watch 07/27/23 01:36:17.983 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:18.023 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:18.031 + [BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 - [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:72 - Jun 12 20:47:35.021: INFO: Waiting up to 5m0s for pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c" in namespace "container-probe-5627" to be "running and ready" - Jun 12 20:47:35.032: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.003086ms - Jun 12 20:47:35.033: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:47:37.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02023542s - Jun 12 20:47:37.042: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:47:39.041: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 4.019447943s - Jun 12 20:47:39.041: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) - Jun 12 20:47:41.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 6.020748963s - Jun 12 20:47:41.042: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) - Jun 12 20:47:43.044: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 8.022223597s - Jun 12 20:47:43.044: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) - Jun 12 20:47:45.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.020101213s - Jun 12 20:47:45.042: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) - Jun 12 20:47:47.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 12.020496808s - Jun 12 20:47:47.042: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) - Jun 12 20:47:49.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 14.020581388s - Jun 12 20:47:49.042: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) - Jun 12 20:47:51.044: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 16.022501911s - Jun 12 20:47:51.044: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) - Jun 12 20:47:53.043: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 18.020976512s - Jun 12 20:47:53.043: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) - Jun 12 20:47:55.041: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=false. Elapsed: 20.019340856s - Jun 12 20:47:55.041: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = false) - Jun 12 20:47:57.041: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c": Phase="Running", Reason="", readiness=true. Elapsed: 22.019875628s - Jun 12 20:47:57.041: INFO: The phase of Pod test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c is Running (Ready = true) - Jun 12 20:47:57.042: INFO: Pod "test-webserver-ad774bb9-f082-4738-bb8b-9270cd844c9c" satisfied condition "running and ready" - Jun 12 20:47:57.048: INFO: Container started at 2023-06-12 20:47:36 +0000 UTC, pod became ready at 2023-06-12 20:47:55 +0000 UTC - [AfterEach] [sig-node] Probing container + [It] should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 + STEP: getting a starting resourceVersion 07/27/23 01:36:18.04 + STEP: starting a background goroutine to produce watch events 07/27/23 01:36:18.057 + STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 07/27/23 01:36:18.057 + [AfterEach] [sig-api-machinery] Watchers test/e2e/framework/node/init/init.go:32 - Jun 12 20:47:57.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Probing container + Jul 27 01:36:20.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-api-machinery] Watchers dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-api-machinery] Watchers tear down framework | framework.go:193 - STEP: Destroying namespace "container-probe-5627" for this suite. 06/12/23 20:47:57.061 + STEP: Destroying namespace "watch-7721" for this suite. 
07/27/23 01:36:20.854 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] - should run and stop simple daemon [Conformance] - test/e2e/apps/daemon_set.go:166 + should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:47:57.09 -Jun 12 20:47:57.090: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename daemonsets 06/12/23 20:47:57.092 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:57.143 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:47:57.167 +STEP: Creating a kubernetes client 07/27/23 01:36:20.907 +Jul 27 01:36:20.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename daemonsets 07/27/23 01:36:20.908 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:20.952 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:20.962 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146 -[It] should run and stop simple daemon [Conformance] - test/e2e/apps/daemon_set.go:166 -STEP: Creating simple DaemonSet "daemon-set" 06/12/23 20:47:57.232 -STEP: Check that daemon pods launch on every node of the cluster. 06/12/23 20:47:57.253 -Jun 12 20:47:57.280: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 20:47:57.280: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 20:47:58.306: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 20:47:58.306: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 20:47:59.301: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 20:47:59.301: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 20:48:00.303: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 20:48:00.303: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 20:48:01.308: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 -Jun 12 20:48:01.308: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set -STEP: Stop a daemon pod, check that the daemon pod is revived. 
06/12/23 20:48:01.317 -Jun 12 20:48:01.354: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 20:48:01.354: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 20:48:02.379: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 20:48:02.380: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 20:48:03.375: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 20:48:03.375: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 20:48:04.468: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 20:48:04.468: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 20:48:05.481: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 20:48:05.482: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 20:48:06.443: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 20:48:06.549: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 20:48:07.468: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 -Jun 12 20:48:07.468: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +[It] should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 +STEP: Creating a simple DaemonSet "daemon-set" 07/27/23 01:36:21.061 +STEP: Check that daemon pods launch on every node of the cluster. 07/27/23 01:36:21.075 +Jul 27 01:36:21.095: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:36:21.095: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 01:36:22.125: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:36:22.125: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 01:36:23.150: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jul 27 01:36:23.150: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 07/27/23 01:36:23.158 +Jul 27 01:36:23.209: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:36:23.209: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:36:24.233: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:36:24.233: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:36:25.233: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:36:25.233: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:36:26.232: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jul 27 01:36:26.232: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Wait for the failed daemon pod to be completely deleted. 
07/27/23 01:36:26.232 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 -STEP: Deleting DaemonSet "daemon-set" 06/12/23 20:48:07.482 -STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7934, will wait for the garbage collector to delete the pods 06/12/23 20:48:07.574 -Jun 12 20:48:07.748: INFO: Deleting DaemonSet.extensions daemon-set took: 26.912777ms -Jun 12 20:48:07.872: INFO: Terminating DaemonSet.extensions daemon-set pods took: 123.615705ms -Jun 12 20:48:11.579: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 20:48:11.579: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set -Jun 12 20:48:11.589: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"79113"},"items":null} +STEP: Deleting DaemonSet "daemon-set" 07/27/23 01:36:26.248 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2026, will wait for the garbage collector to delete the pods 07/27/23 01:36:26.248 +Jul 27 01:36:26.321: INFO: Deleting DaemonSet.extensions daemon-set took: 14.287109ms +Jul 27 01:36:26.422: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.3801ms +Jul 27 01:36:28.434: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:36:28.434: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jul 27 01:36:28.442: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"67842"},"items":null} -Jun 12 20:48:11.595: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"79113"},"items":null} +Jul 27 01:36:28.451: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"67842"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 20:48:11.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 01:36:28.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "daemonsets-7934" for this suite. 06/12/23 20:48:11.638 +STEP: Destroying namespace "daemonsets-2026" for this suite. 
07/27/23 01:36:28.509 ------------------------------ -• [SLOW TEST] [14.568 seconds] +• [SLOW TEST] [7.625 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 - should run and stop simple daemon [Conformance] - test/e2e/apps/daemon_set.go:166 + should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:47:57.09 - Jun 12 20:47:57.090: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename daemonsets 06/12/23 20:47:57.092 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:47:57.143 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:47:57.167 + STEP: Creating a kubernetes client 07/27/23 01:36:20.907 + Jul 27 01:36:20.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename daemonsets 07/27/23 01:36:20.908 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:20.952 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:20.962 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146 - [It] should run and stop simple daemon [Conformance] - test/e2e/apps/daemon_set.go:166 - STEP: Creating simple DaemonSet "daemon-set" 06/12/23 20:47:57.232 - STEP: Check that daemon pods launch on every node of the cluster. 06/12/23 20:47:57.253 - Jun 12 20:47:57.280: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 20:47:57.280: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 20:47:58.306: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 20:47:58.306: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 20:47:59.301: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 20:47:59.301: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 20:48:00.303: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 20:48:00.303: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 20:48:01.308: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 - Jun 12 20:48:01.308: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set - STEP: Stop a daemon pod, check that the daemon pod is revived. 
06/12/23 20:48:01.317 - Jun 12 20:48:01.354: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 20:48:01.354: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 20:48:02.379: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 20:48:02.380: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 20:48:03.375: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 20:48:03.375: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 20:48:04.468: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 20:48:04.468: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 20:48:05.481: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 20:48:05.482: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 20:48:06.443: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 20:48:06.549: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 20:48:07.468: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 - Jun 12 20:48:07.468: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + [It] should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 + STEP: Creating a simple DaemonSet "daemon-set" 07/27/23 01:36:21.061 + STEP: Check that daemon pods launch on every node of the cluster. 07/27/23 01:36:21.075 + Jul 27 01:36:21.095: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:36:21.095: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 01:36:22.125: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:36:22.125: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 01:36:23.150: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jul 27 01:36:23.150: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 07/27/23 01:36:23.158 + Jul 27 01:36:23.209: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:36:23.209: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:36:24.233: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:36:24.233: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:36:25.233: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:36:25.233: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:36:26.232: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jul 27 01:36:26.232: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Wait for the failed daemon pod to be completely deleted. 
07/27/23 01:36:26.232 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 - STEP: Deleting DaemonSet "daemon-set" 06/12/23 20:48:07.482 - STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7934, will wait for the garbage collector to delete the pods 06/12/23 20:48:07.574 - Jun 12 20:48:07.748: INFO: Deleting DaemonSet.extensions daemon-set took: 26.912777ms - Jun 12 20:48:07.872: INFO: Terminating DaemonSet.extensions daemon-set pods took: 123.615705ms - Jun 12 20:48:11.579: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 20:48:11.579: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set - Jun 12 20:48:11.589: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"79113"},"items":null} + STEP: Deleting DaemonSet "daemon-set" 07/27/23 01:36:26.248 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2026, will wait for the garbage collector to delete the pods 07/27/23 01:36:26.248 + Jul 27 01:36:26.321: INFO: Deleting DaemonSet.extensions daemon-set took: 14.287109ms + Jul 27 01:36:26.422: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.3801ms + Jul 27 01:36:28.434: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:36:28.434: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jul 27 01:36:28.442: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"67842"},"items":null} - Jun 12 20:48:11.595: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"79113"},"items":null} + Jul 27 01:36:28.451: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"67842"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 20:48:11.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 01:36:28.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "daemonsets-7934" for this suite. 06/12/23 20:48:11.638 + STEP: Destroying namespace "daemonsets-2026" for this suite. 
07/27/23 01:36:28.509 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] - works for CRD preserving unknown fields in an embedded object [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:236 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 +[BeforeEach] [sig-network] Networking set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:48:11.659 -Jun 12 20:48:11.659: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 20:48:11.662 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:48:11.717 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:48:11.728 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:36:28.534 +Jul 27 01:36:28.534: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pod-network-test 07/27/23 01:36:28.534 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:28.581 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:28.593 +[BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 -[It] works for CRD preserving unknown fields in an embedded object [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:236 -Jun 12 20:48:11.743: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 06/12/23 20:48:19.936 -Jun 12 20:48:19.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-8752 --namespace=crd-publish-openapi-8752 create -f -' -Jun 12 20:48:23.810: INFO: stderr: "" -Jun 12 20:48:23.810: INFO: stdout: "e2e-test-crd-publish-openapi-4659-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" -Jun 12 20:48:23.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-8752 --namespace=crd-publish-openapi-8752 delete e2e-test-crd-publish-openapi-4659-crds test-cr' -Jun 12 20:48:24.205: INFO: stderr: "" -Jun 12 20:48:24.206: INFO: stdout: "e2e-test-crd-publish-openapi-4659-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" -Jun 12 20:48:24.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-8752 --namespace=crd-publish-openapi-8752 apply -f -' -Jun 12 20:48:26.344: INFO: stderr: "" -Jun 12 20:48:26.344: INFO: stdout: "e2e-test-crd-publish-openapi-4659-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" -Jun 12 20:48:26.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-8752 --namespace=crd-publish-openapi-8752 delete e2e-test-crd-publish-openapi-4659-crds test-cr' -Jun 12 20:48:26.658: INFO: stderr: "" -Jun 12 20:48:26.658: INFO: stdout: 
"e2e-test-crd-publish-openapi-4659-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" -STEP: kubectl explain works to explain CR 06/12/23 20:48:26.658 -Jun 12 20:48:26.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-8752 explain e2e-test-crd-publish-openapi-4659-crds' -Jun 12 20:48:27.744: INFO: stderr: "" -Jun 12 20:48:27.744: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4659-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" -[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 +STEP: Performing setup for networking test in namespace pod-network-test-3706 07/27/23 01:36:28.606 +STEP: creating a selector 07/27/23 01:36:28.606 +STEP: Creating the service pods in kubernetes 07/27/23 01:36:28.606 +Jul 27 01:36:28.606: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Jul 27 01:36:28.699: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-3706" to be "running and ready" +Jul 27 01:36:28.710: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.212646ms +Jul 27 01:36:28.711: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:36:30.721: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021589317s +Jul 27 01:36:30.721: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:36:32.719: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.020151729s +Jul 27 01:36:32.719: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:36:34.722: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.022519826s +Jul 27 01:36:34.722: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:36:36.721: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.021297785s +Jul 27 01:36:36.721: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:36:38.721: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.021519291s +Jul 27 01:36:38.721: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:36:40.720: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.020833876s +Jul 27 01:36:40.720: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:36:42.719: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.019871912s +Jul 27 01:36:42.719: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:36:44.720: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.020334684s +Jul 27 01:36:44.720: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:36:46.720: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.020362134s +Jul 27 01:36:46.720: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:36:48.722: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.022345329s +Jul 27 01:36:48.722: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:36:50.721: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.022183693s +Jul 27 01:36:50.721: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Jul 27 01:36:50.721: INFO: Pod "netserver-0" satisfied condition "running and ready" +Jul 27 01:36:50.730: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-3706" to be "running and ready" +Jul 27 01:36:50.738: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.109554ms +Jul 27 01:36:50.738: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Jul 27 01:36:50.738: INFO: Pod "netserver-1" satisfied condition "running and ready" +Jul 27 01:36:50.746: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-3706" to be "running and ready" +Jul 27 01:36:50.754: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 7.904056ms +Jul 27 01:36:50.754: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Jul 27 01:36:50.754: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 07/27/23 01:36:50.762 +Jul 27 01:36:50.797: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-3706" to be "running" +Jul 27 01:36:50.805: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 7.464937ms +Jul 27 01:36:52.814: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.016614183s +Jul 27 01:36:52.814: INFO: Pod "test-container-pod" satisfied condition "running" +Jul 27 01:36:52.823: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-3706" to be "running" +Jul 27 01:36:52.832: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.785211ms +Jul 27 01:36:52.832: INFO: Pod "host-test-container-pod" satisfied condition "running" +Jul 27 01:36:52.839: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Jul 27 01:36:52.839: INFO: Going to poll 172.17.218.47 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Jul 27 01:36:52.860: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.17.218.47:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3706 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 01:36:52.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:36:52.861: INFO: ExecWithOptions: Clientset creation +Jul 27 01:36:52.861: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-3706/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.17.218.47%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jul 27 01:36:53.087: INFO: Found all 1 expected endpoints: [netserver-0] +Jul 27 01:36:53.087: INFO: Going to poll 172.17.230.190 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Jul 27 01:36:53.096: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.17.230.190:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3706 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 01:36:53.096: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:36:53.097: INFO: ExecWithOptions: Clientset creation +Jul 27 01:36:53.097: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-3706/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.17.230.190%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jul 27 01:36:53.273: INFO: Found all 1 expected endpoints: [netserver-1] +Jul 27 01:36:53.273: INFO: Going to poll 172.17.225.52 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Jul 27 01:36:53.283: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.17.225.52:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3706 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 01:36:53.283: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:36:53.283: INFO: ExecWithOptions: Clientset creation +Jul 27 01:36:53.283: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-3706/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.17.225.52%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jul 27 01:36:53.433: INFO: Found all 1 expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 
-Jun 12 20:48:35.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +Jul 27 01:36:53.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 -STEP: Destroying namespace "crd-publish-openapi-8752" for this suite. 06/12/23 20:48:35.315 +STEP: Destroying namespace "pod-network-test-3706" for this suite. 07/27/23 01:36:53.448 ------------------------------ -• [SLOW TEST] [23.671 seconds] -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - works for CRD preserving unknown fields in an embedded object [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:236 +• [SLOW TEST] [24.944 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [BeforeEach] [sig-network] Networking set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:48:11.659 - Jun 12 20:48:11.659: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 20:48:11.662 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:48:11.717 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:48:11.728 - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:36:28.534 + Jul 27 01:36:28.534: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pod-network-test 07/27/23 01:36:28.534 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:28.581 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:28.593 + [BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 - [It] works for CRD preserving unknown fields in an embedded object [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:236 - Jun 12 20:48:11.743: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 06/12/23 20:48:19.936 - Jun 12 20:48:19.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-8752 --namespace=crd-publish-openapi-8752 create -f -' - Jun 12 20:48:23.810: INFO: stderr: "" - Jun 12 20:48:23.810: INFO: stdout: "e2e-test-crd-publish-openapi-4659-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" - Jun 12 20:48:23.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-8752 
--namespace=crd-publish-openapi-8752 delete e2e-test-crd-publish-openapi-4659-crds test-cr' - Jun 12 20:48:24.205: INFO: stderr: "" - Jun 12 20:48:24.206: INFO: stdout: "e2e-test-crd-publish-openapi-4659-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" - Jun 12 20:48:24.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-8752 --namespace=crd-publish-openapi-8752 apply -f -' - Jun 12 20:48:26.344: INFO: stderr: "" - Jun 12 20:48:26.344: INFO: stdout: "e2e-test-crd-publish-openapi-4659-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" - Jun 12 20:48:26.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-8752 --namespace=crd-publish-openapi-8752 delete e2e-test-crd-publish-openapi-4659-crds test-cr' - Jun 12 20:48:26.658: INFO: stderr: "" - Jun 12 20:48:26.658: INFO: stdout: "e2e-test-crd-publish-openapi-4659-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" - STEP: kubectl explain works to explain CR 06/12/23 20:48:26.658 - Jun 12 20:48:26.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-8752 explain e2e-test-crd-publish-openapi-4659-crds' - Jun 12 20:48:27.744: INFO: stderr: "" - Jun 12 20:48:27.744: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4659-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" - [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 + STEP: Performing setup for networking test in namespace pod-network-test-3706 07/27/23 01:36:28.606 + STEP: creating a selector 07/27/23 01:36:28.606 + STEP: Creating the service pods in kubernetes 07/27/23 01:36:28.606 + Jul 27 01:36:28.606: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Jul 27 01:36:28.699: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-3706" to be "running and ready" + Jul 27 01:36:28.710: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.212646ms + Jul 27 01:36:28.711: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:36:30.721: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021589317s + Jul 27 01:36:30.721: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:36:32.719: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.020151729s + Jul 27 01:36:32.719: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:36:34.722: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.022519826s + Jul 27 01:36:34.722: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:36:36.721: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.021297785s + Jul 27 01:36:36.721: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:36:38.721: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.021519291s + Jul 27 01:36:38.721: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:36:40.720: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.020833876s + Jul 27 01:36:40.720: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:36:42.719: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.019871912s + Jul 27 01:36:42.719: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:36:44.720: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.020334684s + Jul 27 01:36:44.720: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:36:46.720: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.020362134s + Jul 27 01:36:46.720: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:36:48.722: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.022345329s + Jul 27 01:36:48.722: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:36:50.721: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.022183693s + Jul 27 01:36:50.721: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Jul 27 01:36:50.721: INFO: Pod "netserver-0" satisfied condition "running and ready" + Jul 27 01:36:50.730: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-3706" to be "running and ready" + Jul 27 01:36:50.738: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.109554ms + Jul 27 01:36:50.738: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Jul 27 01:36:50.738: INFO: Pod "netserver-1" satisfied condition "running and ready" + Jul 27 01:36:50.746: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-3706" to be "running and ready" + Jul 27 01:36:50.754: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 7.904056ms + Jul 27 01:36:50.754: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Jul 27 01:36:50.754: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 07/27/23 01:36:50.762 + Jul 27 01:36:50.797: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-3706" to be "running" + Jul 27 01:36:50.805: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 7.464937ms + Jul 27 01:36:52.814: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.016614183s + Jul 27 01:36:52.814: INFO: Pod "test-container-pod" satisfied condition "running" + Jul 27 01:36:52.823: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-3706" to be "running" + Jul 27 01:36:52.832: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 8.785211ms + Jul 27 01:36:52.832: INFO: Pod "host-test-container-pod" satisfied condition "running" + Jul 27 01:36:52.839: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Jul 27 01:36:52.839: INFO: Going to poll 172.17.218.47 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Jul 27 01:36:52.860: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.17.218.47:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3706 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 01:36:52.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:36:52.861: INFO: ExecWithOptions: Clientset creation + Jul 27 01:36:52.861: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-3706/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.17.218.47%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jul 27 01:36:53.087: INFO: Found all 1 expected endpoints: [netserver-0] + Jul 27 01:36:53.087: INFO: Going to poll 172.17.230.190 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Jul 27 01:36:53.096: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.17.230.190:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3706 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 01:36:53.096: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:36:53.097: INFO: ExecWithOptions: Clientset creation + Jul 27 01:36:53.097: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-3706/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.17.230.190%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jul 27 01:36:53.273: INFO: Found all 1 expected endpoints: [netserver-1] + Jul 27 01:36:53.273: INFO: Going to poll 172.17.225.52 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Jul 27 01:36:53.283: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.17.225.52:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3706 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 01:36:53.283: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:36:53.283: INFO: ExecWithOptions: Clientset creation + Jul 27 01:36:53.283: INFO: ExecWithOptions: execute(POST 
https://172.21.0.1:443/api/v1/namespaces/pod-network-test-3706/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.17.225.52%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jul 27 01:36:53.433: INFO: Found all 1 expected endpoints: [netserver-2] + [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 - Jun 12 20:48:35.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + Jul 27 01:36:53.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 - STEP: Destroying namespace "crd-publish-openapi-8752" for this suite. 06/12/23 20:48:35.315 + STEP: Destroying namespace "pod-network-test-3706" for this suite. 07/27/23 01:36:53.448 << End Captured GinkgoWriter Output ------------------------------ -[sig-storage] Downward API volume - should update annotations on modification [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:162 -[BeforeEach] [sig-storage] Downward API volume +SSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 +[BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:48:35.331 -Jun 12 20:48:35.331: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 20:48:35.334 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:48:35.404 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:48:35.413 -[BeforeEach] [sig-storage] Downward API volume +STEP: Creating a kubernetes client 07/27/23 01:36:53.479 +Jul 27 01:36:53.479: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 01:36:53.48 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:53.523 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:53.532 +[BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 -[It] should update annotations on modification [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:162 -STEP: Creating the pod 06/12/23 20:48:35.426 -Jun 12 20:48:35.457: INFO: Waiting up to 5m0s for pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7" in namespace "downward-api-855" to be "running and ready" -Jun 12 20:48:35.479: INFO: Pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.683585ms -Jun 12 20:48:35.479: INFO: The phase of Pod annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:48:37.540: INFO: Pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08304097s -Jun 12 20:48:37.540: INFO: The phase of Pod annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:48:39.549: INFO: Pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7": Phase="Running", Reason="", readiness=true. Elapsed: 4.091720095s -Jun 12 20:48:39.549: INFO: The phase of Pod annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7 is Running (Ready = true) -Jun 12 20:48:39.549: INFO: Pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7" satisfied condition "running and ready" -Jun 12 20:48:40.464: INFO: Successfully updated pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7" -[AfterEach] [sig-storage] Downward API volume +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 +STEP: Creating configMap with name configmap-test-volume-map-763c6f37-4aff-427d-81d6-285007d4e662 07/27/23 01:36:53.542 +STEP: Creating a pod to test consume configMaps 07/27/23 01:36:53.6 +Jul 27 01:36:53.629: INFO: Waiting up to 5m0s for pod "pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45" in namespace "configmap-6451" to be "Succeeded or Failed" +Jul 27 01:36:53.638: INFO: Pod "pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45": Phase="Pending", Reason="", readiness=false. Elapsed: 8.903386ms +Jul 27 01:36:55.648: INFO: Pod "pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019485078s +Jul 27 01:36:57.647: INFO: Pod "pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018313251s +STEP: Saw pod success 07/27/23 01:36:57.647 +Jul 27 01:36:57.647: INFO: Pod "pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45" satisfied condition "Succeeded or Failed" +Jul 27 01:36:57.656: INFO: Trying to get logs from node 10.245.128.18 pod pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45 container agnhost-container: +STEP: delete the pod 07/27/23 01:36:57.7 +Jul 27 01:36:57.728: INFO: Waiting for pod pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45 to disappear +Jul 27 01:36:57.737: INFO: Pod pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45 no longer exists +[AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 20:48:42.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Downward API volume +Jul 27 01:36:57.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-855" for this suite. 06/12/23 20:48:42.567 +STEP: Destroying namespace "configmap-6451" for this suite. 
07/27/23 01:36:57.751 ------------------------------ -• [SLOW TEST] [7.253 seconds] -[sig-storage] Downward API volume +• [4.302 seconds] +[sig-storage] ConfigMap test/e2e/common/storage/framework.go:23 - should update annotations on modification [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:162 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Downward API volume + [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:48:35.331 - Jun 12 20:48:35.331: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 20:48:35.334 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:48:35.404 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:48:35.413 - [BeforeEach] [sig-storage] Downward API volume + STEP: Creating a kubernetes client 07/27/23 01:36:53.479 + Jul 27 01:36:53.479: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 01:36:53.48 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:53.523 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:53.532 + [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 - [It] should update annotations on modification [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:162 - STEP: Creating the pod 06/12/23 20:48:35.426 - Jun 12 20:48:35.457: INFO: Waiting up to 5m0s for pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7" in namespace "downward-api-855" to be "running and ready" - Jun 12 20:48:35.479: INFO: Pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.683585ms - Jun 12 20:48:35.479: INFO: The phase of Pod annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:48:37.540: INFO: Pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08304097s - Jun 12 20:48:37.540: INFO: The phase of Pod annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:48:39.549: INFO: Pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.091720095s - Jun 12 20:48:39.549: INFO: The phase of Pod annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7 is Running (Ready = true) - Jun 12 20:48:39.549: INFO: Pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7" satisfied condition "running and ready" - Jun 12 20:48:40.464: INFO: Successfully updated pod "annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7" - [AfterEach] [sig-storage] Downward API volume + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 + STEP: Creating configMap with name configmap-test-volume-map-763c6f37-4aff-427d-81d6-285007d4e662 07/27/23 01:36:53.542 + STEP: Creating a pod to test consume configMaps 07/27/23 01:36:53.6 + Jul 27 01:36:53.629: INFO: Waiting up to 5m0s for pod "pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45" in namespace "configmap-6451" to be "Succeeded or Failed" + Jul 27 01:36:53.638: INFO: Pod "pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45": Phase="Pending", Reason="", readiness=false. Elapsed: 8.903386ms + Jul 27 01:36:55.648: INFO: Pod "pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019485078s + Jul 27 01:36:57.647: INFO: Pod "pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018313251s + STEP: Saw pod success 07/27/23 01:36:57.647 + Jul 27 01:36:57.647: INFO: Pod "pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45" satisfied condition "Succeeded or Failed" + Jul 27 01:36:57.656: INFO: Trying to get logs from node 10.245.128.18 pod pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45 container agnhost-container: + STEP: delete the pod 07/27/23 01:36:57.7 + Jul 27 01:36:57.728: INFO: Waiting for pod pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45 to disappear + Jul 27 01:36:57.737: INFO: Pod pod-configmaps-85fb4acb-7dd0-4401-a281-00a953589b45 no longer exists + [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 20:48:42.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Downward API volume + Jul 27 01:36:57.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-855" for this suite. 06/12/23 20:48:42.567 + STEP: Destroying namespace "configmap-6451" for this suite. 
07/27/23 01:36:57.751 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - listing validating webhooks should work [Conformance] - test/e2e/apimachinery/webhook.go:582 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 +[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:48:42.586 -Jun 12 20:48:42.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 20:48:42.592 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:48:42.709 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:48:42.725 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:36:57.783 +Jul 27 01:36:57.783: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename cronjob 07/27/23 01:36:57.784 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:57.851 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:57.86 +[BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 20:48:42.785 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 20:48:43.785 -STEP: Deploying the webhook pod 06/12/23 20:48:43.843 -STEP: Wait for the deployment to be ready 06/12/23 20:48:43.938 -Jun 12 20:48:43.987: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 20:48:46.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 48, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 48, 44, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 48, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 48, 43, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 20:48:48.024 -STEP: Verifying the service has paired with the endpoint 06/12/23 20:48:48.064 -Jun 12 20:48:49.065: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] listing validating webhooks should work [Conformance] - test/e2e/apimachinery/webhook.go:582 -STEP: Listing all of the created validation webhooks 06/12/23 20:48:49.318 -STEP: Creating a configMap that does not comply to the validation webhook rules 06/12/23 20:48:49.424 -STEP: Deleting the collection of validation webhooks 06/12/23 20:48:49.508 -STEP: Creating a configMap that 
does not comply to the validation webhook rules 06/12/23 20:48:49.72 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[It] should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 +STEP: Creating a suspended cronjob 07/27/23 01:36:57.869 +W0727 01:36:57.891425 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Ensuring no jobs are scheduled 07/27/23 01:36:57.891 +STEP: Ensuring no job exists by listing jobs explicitly 07/27/23 01:41:57.926 +STEP: Removing cronjob 07/27/23 01:41:57.962 +[AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 -Jun 12 20:48:49.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 01:41:57.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-3492" for this suite. 06/12/23 20:48:49.949 -STEP: Destroying namespace "webhook-3492-markers" for this suite. 06/12/23 20:48:49.966 +STEP: Destroying namespace "cronjob-215" for this suite. 
07/27/23 01:41:58.035 ------------------------------ -• [SLOW TEST] [7.395 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - listing validating webhooks should work [Conformance] - test/e2e/apimachinery/webhook.go:582 +• [SLOW TEST] [300.303 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:48:42.586 - Jun 12 20:48:42.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 20:48:42.592 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:48:42.709 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:48:42.725 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:36:57.783 + Jul 27 01:36:57.783: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename cronjob 07/27/23 01:36:57.784 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:36:57.851 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:36:57.86 + [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 20:48:42.785 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 20:48:43.785 - STEP: Deploying the webhook pod 06/12/23 20:48:43.843 - STEP: Wait for the deployment to be ready 06/12/23 20:48:43.938 - Jun 12 20:48:43.987: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 20:48:46.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 48, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 48, 44, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 48, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 48, 43, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 20:48:48.024 - STEP: Verifying the service has paired with the endpoint 06/12/23 20:48:48.064 - Jun 12 20:48:49.065: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] listing validating webhooks should work [Conformance] - test/e2e/apimachinery/webhook.go:582 - STEP: Listing all of the created validation webhooks 06/12/23 20:48:49.318 - STEP: Creating a configMap that does not comply to the validation webhook rules 06/12/23 20:48:49.424 - STEP: Deleting the collection of validation webhooks 06/12/23 
20:48:49.508 - STEP: Creating a configMap that does not comply to the validation webhook rules 06/12/23 20:48:49.72 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [It] should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 + STEP: Creating a suspended cronjob 07/27/23 01:36:57.869 + W0727 01:36:57.891425 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Ensuring no jobs are scheduled 07/27/23 01:36:57.891 + STEP: Ensuring no job exists by listing jobs explicitly 07/27/23 01:41:57.926 + STEP: Removing cronjob 07/27/23 01:41:57.962 + [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 - Jun 12 20:48:49.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 01:41:57.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-3492" for this suite. 06/12/23 20:48:49.949 - STEP: Destroying namespace "webhook-3492-markers" for this suite. 06/12/23 20:48:49.966 + STEP: Destroying namespace "cronjob-215" for this suite. 07/27/23 01:41:58.035 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SSSSSS ------------------------------ -[sig-scheduling] SchedulerPredicates [Serial] - validates resource limits of pods that are allowed to run [Conformance] - test/e2e/scheduling/predicates.go:331 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:803 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:48:49.987 -Jun 12 20:48:49.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sched-pred 06/12/23 20:48:49.989 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:48:50.036 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:48:50.059 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] +STEP: Creating a kubernetes client 07/27/23 01:41:58.086 +Jul 27 01:41:58.086: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 01:41:58.087 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:41:58.144 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:41:58.197 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:97 -Jun 12 20:48:50.105: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready -Jun 12 20:48:50.164: INFO: Waiting for terminating namespaces to be deleted... -Jun 12 20:48:50.192: INFO: -Logging pods the apiserver thinks is on node 10.138.75.112 before test -Jun 12 20:48:50.251: INFO: calico-node-b9sdb from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.251: INFO: Container calico-node ready: true, restart count 0 -Jun 12 20:48:50.252: INFO: calico-typha-74d94b74f5-dc6td from calico-system started at 2023-06-12 17:53:09 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.252: INFO: Container calico-typha ready: true, restart count 0 -Jun 12 20:48:50.252: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.252: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 -Jun 12 20:48:50.252: INFO: ibm-keepalived-watcher-5hc6v from kube-system started at 2023-06-12 17:40:13 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.252: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 20:48:50.252: INFO: ibm-master-proxy-static-10.138.75.112 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.252: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 20:48:50.252: INFO: Container pause ready: true, restart count 0 -Jun 12 20:48:50.252: INFO: ibmcloud-block-storage-driver-5zqmj from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.253: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 20:48:50.253: INFO: tuned-phslc from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.253: INFO: Container tuned ready: true, restart count 0 -Jun 12 20:48:50.253: INFO: csi-snapshot-controller-7f8879b9ff-p456r from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.253: INFO: Container snapshot-controller ready: true, restart count 0 -Jun 12 20:48:50.253: INFO: csi-snapshot-webhook-7bd9594b6d-bp5dr 
from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.253: INFO: Container webhook ready: true, restart count 0 -Jun 12 20:48:50.253: INFO: console-5bf97c7949-w5sn5 from openshift-console started at 2023-06-12 18:01:02 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.253: INFO: Container console ready: true, restart count 0 -Jun 12 20:48:50.253: INFO: downloads-8b57f44bb-55ss5 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.253: INFO: Container download-server ready: true, restart count 0 -Jun 12 20:48:50.253: INFO: dns-default-hpnqj from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.254: INFO: Container dns ready: true, restart count 0 -Jun 12 20:48:50.254: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.254: INFO: node-resolver-5st6j from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.254: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 20:48:50.254: INFO: image-registry-6c79bcf5c4-p7ss4 from openshift-image-registry started at 2023-06-12 18:00:30 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.254: INFO: Container registry ready: true, restart count 0 -Jun 12 20:48:50.254: INFO: node-ca-qm7sb from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.254: INFO: Container node-ca ready: true, restart count 0 -Jun 12 20:48:50.254: INFO: ingress-canary-5qpcw from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.254: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 20:48:50.254: INFO: router-default-7d454f944c-62qgz from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.254: INFO: Container router ready: true, restart count 0 -Jun 12 20:48:50.255: INFO: openshift-kube-proxy-b9xs9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.255: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 20:48:50.255: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.255: INFO: migrator-cfb6c8f7c-vx2tr from openshift-kube-storage-version-migrator started at 2023-06-12 17:55:28 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.255: INFO: Container migrator ready: true, restart count 0 -Jun 12 20:48:50.255: INFO: community-operators-fm8cx from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.255: INFO: Container registry-server ready: true, restart count 0 -Jun 12 20:48:50.255: INFO: redhat-operators-pr47d from openshift-marketplace started at 2023-06-12 19:05:36 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.255: INFO: Container registry-server ready: true, restart count 0 -Jun 12 20:48:50.255: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-06-12 18:01:06 +0000 UTC (6 container statuses recorded) -Jun 12 20:48:50.255: INFO: Container alertmanager ready: true, restart count 1 -Jun 12 20:48:50.255: INFO: Container alertmanager-proxy ready: true, restart count 0 -Jun 12 20:48:50.255: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 20:48:50.255: INFO: 
Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.256: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 -Jun 12 20:48:50.256: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 20:48:50.256: INFO: kube-state-metrics-6ccfb58dc4-rgnnh from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) -Jun 12 20:48:50.256: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 -Jun 12 20:48:50.256: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 -Jun 12 20:48:50.256: INFO: Container kube-state-metrics ready: true, restart count 0 -Jun 12 20:48:50.256: INFO: node-exporter-r799t from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.256: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.256: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 20:48:50.256: INFO: prometheus-adapter-7c58c77c58-xfd55 from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.257: INFO: Container prometheus-adapter ready: true, restart count 0 -Jun 12 20:48:50.257: INFO: prometheus-k8s-0 from openshift-monitoring started at 2023-06-12 18:01:32 +0000 UTC (6 container statuses recorded) -Jun 12 20:48:50.257: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 20:48:50.257: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.257: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 -Jun 12 20:48:50.257: INFO: Container prometheus ready: true, restart count 0 -Jun 12 20:48:50.257: INFO: Container prometheus-proxy ready: true, restart count 0 -Jun 12 20:48:50.257: INFO: Container thanos-sidecar ready: true, restart count 0 -Jun 12 20:48:50.258: INFO: prometheus-operator-admission-webhook-5d679565bb-66wnf from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.258: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 -Jun 12 20:48:50.258: INFO: thanos-querier-6497df7b9-djrsc from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) -Jun 12 20:48:50.258: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.258: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 -Jun 12 20:48:50.258: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 -Jun 12 20:48:50.258: INFO: Container oauth-proxy ready: true, restart count 0 -Jun 12 20:48:50.258: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 20:48:50.258: INFO: Container thanos-query ready: true, restart count 0 -Jun 12 20:48:50.258: INFO: multus-additional-cni-plugins-zpr6c from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.259: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 20:48:50.259: INFO: multus-q452d from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.259: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 20:48:50.259: INFO: network-metrics-daemon-vx56x from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.259: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.259: INFO: 
Container network-metrics-daemon ready: true, restart count 0 -Jun 12 20:48:50.259: INFO: network-check-target-lfvfw from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.259: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 20:48:50.259: INFO: network-operator-5498bf7dc6-xv8r2 from openshift-network-operator started at 2023-06-12 17:47:21 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.259: INFO: Container network-operator ready: true, restart count 1 -Jun 12 20:48:50.260: INFO: packageserver-7f8bd8c95b-fgfhz from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.260: INFO: Container packageserver ready: true, restart count 0 -Jun 12 20:48:50.260: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.260: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 20:48:50.260: INFO: Container systemd-logs ready: true, restart count 0 -Jun 12 20:48:50.260: INFO: -Logging pods the apiserver thinks is on node 10.138.75.116 before test -Jun 12 20:48:50.333: INFO: calico-kube-controllers-58944988fc-kv6pq from calico-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container calico-kube-controllers ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: calico-node-nhd4m from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container calico-node ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: ibm-file-plugin-5f8cc7b66-hc7b9 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container ibm-file-plugin-container ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: ibm-keepalived-watcher-zp24l from kube-system started at 2023-06-12 17:40:01 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: ibm-master-proxy-static-10.138.75.116 from kube-system started at 2023-06-12 17:39:58 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: Container pause ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: ibm-storage-watcher-f4db746b4-mlm76 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container ibm-storage-watcher-container ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: ibmcloud-block-storage-driver-4wh25 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: ibmcloud-block-storage-plugin-5f85bc9665-2ltn5 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container ibmcloud-block-storage-plugin-container ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: vpn-7bc564c55c-htxd6 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container vpn ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: 
cluster-node-tuning-operator-5f6cff5c99-z22gd from openshift-cluster-node-tuning-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: tuned-44pqh from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container tuned ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: cluster-samples-operator-597884bb5d-bv9cn from openshift-cluster-samples-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container cluster-samples-operator ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: cluster-storage-operator-75bb97486-7xrgf from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container cluster-storage-operator ready: true, restart count 1 -Jun 12 20:48:50.333: INFO: csi-snapshot-controller-operator-69df8b995f-flpdz from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: console-operator-747447cc44-5hk9p from openshift-console-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container console-operator ready: true, restart count 1 -Jun 12 20:48:50.333: INFO: Container conversion-webhook-server ready: true, restart count 2 -Jun 12 20:48:50.333: INFO: console-5bf97c7949-22prk from openshift-console started at 2023-06-12 18:01:30 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container console ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: dns-operator-65c495d75-cd4fc from openshift-dns-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container dns-operator ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: dns-default-cw4pt from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container dns ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: node-resolver-8mss5 from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: cluster-image-registry-operator-f9c46b94f-swtmm from openshift-image-registry started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container cluster-image-registry-operator ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: node-ca-5cs7d from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container node-ca ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: registry-pvc-permissions-j28ls from openshift-image-registry started at 2023-06-12 18:00:38 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container pvc-permissions ready: 
false, restart count 0 -Jun 12 20:48:50.333: INFO: ingress-canary-9xbwx from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: ingress-operator-57d9f78b9c-59cl8 from openshift-ingress-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container ingress-operator ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.333: INFO: insights-operator-7dfcfbc664-j8swm from openshift-insights started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.333: INFO: Container insights-operator ready: true, restart count 1 -Jun 12 20:48:50.333: INFO: openshift-kube-proxy-5hl4f from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: kube-storage-version-migrator-operator-689b97b878-cqw2l from openshift-kube-storage-version-migrator-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 -Jun 12 20:48:50.334: INFO: marketplace-operator-769ddf547d-mm52g from openshift-marketplace started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container marketplace-operator ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: cluster-monitoring-operator-7df766d4db-cnq44 from openshift-monitoring started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container cluster-monitoring-operator ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: node-exporter-s9sgk from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: multus-additional-cni-plugins-rsr27 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: multus-admission-controller-5894dd7875-bfbwp from openshift-multus started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: Container multus-admission-controller ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: multus-ln9rr from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: network-metrics-daemon-75s49 from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: network-check-source-7f6b75fdb6-8882l from 
openshift-network-diagnostics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container check-endpoints ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: network-check-target-kjfll from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: catalog-operator-874999f59-jggx9 from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container catalog-operator ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: collect-profiles-28110015-d4v2k from openshift-operator-lifecycle-manager started at 2023-06-12 20:15:00 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 20:48:50.334: INFO: collect-profiles-28110030-fzbkf from openshift-operator-lifecycle-manager started at 2023-06-12 20:30:00 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 20:48:50.334: INFO: collect-profiles-28110045-fcbk8 from openshift-operator-lifecycle-manager started at 2023-06-12 20:45:00 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 20:48:50.334: INFO: olm-operator-bdbf4b468-8vj6q from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container olm-operator ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: package-server-manager-5b897cb946-pz59r from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container package-server-manager ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: packageserver-7f8bd8c95b-2zntg from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container packageserver ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: metrics-78c5579cb7-nlfqq from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container metrics ready: true, restart count 3 -Jun 12 20:48:50.334: INFO: push-gateway-85f6799b47-cgtdt from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container push-gateway ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: service-ca-operator-86d6dcd567-8jc2t from openshift-service-ca-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container service-ca-operator ready: true, restart count 1 -Jun 12 20:48:50.334: INFO: service-ca-7c79786568-vhxsl from openshift-service-ca started at 2023-06-12 17:55:23 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container service-ca-controller ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: sonobuoy-e2e-job-9876719f3d1644bf from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container e2e ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 
20:48:50.334: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-nbw64 from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: Container systemd-logs ready: true, restart count 0 -Jun 12 20:48:50.334: INFO: tigera-operator-5b48cf996b-z7p6p from tigera-operator started at 2023-06-12 17:40:11 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.334: INFO: Container tigera-operator ready: true, restart count 7 -Jun 12 20:48:50.334: INFO: -Logging pods the apiserver thinks is on node 10.138.75.70 before test -Jun 12 20:48:50.392: INFO: calico-node-v822j from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container calico-node ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: calico-typha-74d94b74f5-db4zz from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container calico-typha ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7 from downward-api-855 started at 2023-06-12 20:48:35 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container client-container ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-9m2wx from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: ibm-keepalived-watcher-nl9l9 from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: ibm-master-proxy-static-10.138.75.70 from kube-system started at 2023-06-12 17:40:17 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: Container pause ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: ibmcloud-block-storage-driver-jl8fq from kube-system started at 2023-06-12 17:40:28 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: tuned-dmlsr from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container tuned ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: csi-snapshot-controller-7f8879b9ff-lhkmp from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container snapshot-controller ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: csi-snapshot-webhook-7bd9594b6d-9f476 from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container webhook ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: downloads-8b57f44bb-f7r76 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container download-server ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: dns-default-5d2sp from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 
container statuses recorded) -Jun 12 20:48:50.392: INFO: Container dns ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: node-resolver-lf2bx from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: node-ca-mwjbd from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container node-ca ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: ingress-canary-xwc5b from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: router-default-7d454f944c-s862z from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container router ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: openshift-kube-proxy-rckf9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: certified-operators-9jhxm from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container registry-server ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: redhat-marketplace-n9tcn from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container registry-server ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-06-12 18:01:41 +0000 UTC (6 container statuses recorded) -Jun 12 20:48:50.392: INFO: Container alertmanager ready: true, restart count 1 -Jun 12 20:48:50.392: INFO: Container alertmanager-proxy ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 20:48:50.392: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: node-exporter-5vgf6 from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: openshift-state-metrics-7d7f8b4cf8-6kdhb from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container openshift-state-metrics ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: prometheus-adapter-7c58c77c58-2j47k from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container prometheus-adapter ready: true, restart count 0 -Jun 12 
20:48:50.393: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-06-12 18:01:12 +0000 UTC (6 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container prometheus ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container prometheus-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container thanos-sidecar ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: prometheus-operator-5d978dbf9c-zvq6g from openshift-monitoring started at 2023-06-12 17:59:19 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container prometheus-operator ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: prometheus-operator-admission-webhook-5d679565bb-sj42p from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: telemeter-client-55c7b57d84-vh47h from openshift-monitoring started at 2023-06-12 17:59:37 +0000 UTC (3 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container reload ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container telemeter-client ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: thanos-querier-6497df7b9-pg2z9 from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container oauth-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container thanos-query ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: multus-26bfs from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: multus-additional-cni-plugins-9vls6 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: multus-admission-controller-5894dd7875-xldt9 from openshift-multus started at 2023-06-12 17:58:44 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container multus-admission-controller ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: network-metrics-daemon-g9zzs from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: network-check-target-l622r from 
openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: sonobuoy from sonobuoy started at 2023-06-12 20:38:54 +0000 UTC (1 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container kube-sonobuoy ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-4dn8s from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) -Jun 12 20:48:50.393: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 20:48:50.393: INFO: Container systemd-logs ready: true, restart count 0 -[It] validates resource limits of pods that are allowed to run [Conformance] - test/e2e/scheduling/predicates.go:331 -STEP: verifying the node has the label node 10.138.75.112 06/12/23 20:48:50.523 -STEP: verifying the node has the label node 10.138.75.116 06/12/23 20:48:50.59 -STEP: verifying the node has the label node 10.138.75.70 06/12/23 20:48:50.646 -Jun 12 20:48:50.748: INFO: Pod calico-kube-controllers-58944988fc-kv6pq requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.748: INFO: Pod calico-node-b9sdb requesting resource cpu=250m on Node 10.138.75.112 -Jun 12 20:48:50.748: INFO: Pod calico-node-nhd4m requesting resource cpu=250m on Node 10.138.75.116 -Jun 12 20:48:50.748: INFO: Pod calico-node-v822j requesting resource cpu=250m on Node 10.138.75.70 -Jun 12 20:48:50.748: INFO: Pod calico-typha-74d94b74f5-db4zz requesting resource cpu=250m on Node 10.138.75.70 -Jun 12 20:48:50.748: INFO: Pod calico-typha-74d94b74f5-dc6td requesting resource cpu=250m on Node 10.138.75.112 -Jun 12 20:48:50.748: INFO: Pod annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7 requesting resource cpu=0m on Node 10.138.75.70 -Jun 12 20:48:50.748: INFO: Pod ibm-cloud-provider-ip-168-1-198-197-75947fc545-9m2wx requesting resource cpu=5m on Node 10.138.75.70 -Jun 12 20:48:50.748: INFO: Pod ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 requesting resource cpu=5m on Node 10.138.75.112 -Jun 12 20:48:50.748: INFO: Pod ibm-file-plugin-5f8cc7b66-hc7b9 requesting resource cpu=50m on Node 10.138.75.116 -Jun 12 20:48:50.748: INFO: Pod ibm-keepalived-watcher-5hc6v requesting resource cpu=5m on Node 10.138.75.112 -Jun 12 20:48:50.748: INFO: Pod ibm-keepalived-watcher-nl9l9 requesting resource cpu=5m on Node 10.138.75.70 -Jun 12 20:48:50.748: INFO: Pod ibm-keepalived-watcher-zp24l requesting resource cpu=5m on Node 10.138.75.116 -Jun 12 20:48:50.748: INFO: Pod ibm-master-proxy-static-10.138.75.112 requesting resource cpu=26m on Node 10.138.75.112 -Jun 12 20:48:50.748: INFO: Pod ibm-master-proxy-static-10.138.75.116 requesting resource cpu=26m on Node 10.138.75.116 -Jun 12 20:48:50.748: INFO: Pod ibm-master-proxy-static-10.138.75.70 requesting resource cpu=26m on Node 10.138.75.70 -Jun 12 20:48:50.748: INFO: Pod ibm-storage-watcher-f4db746b4-mlm76 requesting resource cpu=50m on Node 10.138.75.116 -Jun 12 20:48:50.748: INFO: Pod ibmcloud-block-storage-driver-4wh25 requesting resource cpu=50m on Node 10.138.75.116 -Jun 12 20:48:50.748: INFO: Pod ibmcloud-block-storage-driver-5zqmj requesting resource cpu=50m on Node 10.138.75.112 -Jun 12 20:48:50.748: INFO: Pod ibmcloud-block-storage-driver-jl8fq requesting resource cpu=50m on Node 10.138.75.70 -Jun 12 20:48:50.748: INFO: Pod ibmcloud-block-storage-plugin-5f85bc9665-2ltn5 requesting resource cpu=50m on Node 10.138.75.116 -Jun 
12 20:48:50.748: INFO: Pod vpn-7bc564c55c-htxd6 requesting resource cpu=5m on Node 10.138.75.116 -Jun 12 20:48:50.748: INFO: Pod cluster-node-tuning-operator-5f6cff5c99-z22gd requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.748: INFO: Pod tuned-44pqh requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.748: INFO: Pod tuned-dmlsr requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.748: INFO: Pod tuned-phslc requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod cluster-samples-operator-597884bb5d-bv9cn requesting resource cpu=20m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod cluster-storage-operator-75bb97486-7xrgf requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod csi-snapshot-controller-7f8879b9ff-lhkmp requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod csi-snapshot-controller-7f8879b9ff-p456r requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod csi-snapshot-controller-operator-69df8b995f-flpdz requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod csi-snapshot-webhook-7bd9594b6d-9f476 requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod csi-snapshot-webhook-7bd9594b6d-bp5dr requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod console-operator-747447cc44-5hk9p requesting resource cpu=20m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod console-5bf97c7949-22prk requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod console-5bf97c7949-w5sn5 requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod downloads-8b57f44bb-55ss5 requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod downloads-8b57f44bb-f7r76 requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod dns-operator-65c495d75-cd4fc requesting resource cpu=20m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod dns-default-5d2sp requesting resource cpu=60m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod dns-default-cw4pt requesting resource cpu=60m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod dns-default-hpnqj requesting resource cpu=60m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod node-resolver-5st6j requesting resource cpu=5m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod node-resolver-8mss5 requesting resource cpu=5m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod node-resolver-lf2bx requesting resource cpu=5m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod cluster-image-registry-operator-f9c46b94f-swtmm requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod image-registry-6c79bcf5c4-p7ss4 requesting resource cpu=100m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod node-ca-5cs7d requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod node-ca-mwjbd requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod node-ca-qm7sb requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod ingress-canary-5qpcw requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod ingress-canary-9xbwx requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod ingress-canary-xwc5b requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod ingress-operator-57d9f78b9c-59cl8 requesting resource 
cpu=20m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod router-default-7d454f944c-62qgz requesting resource cpu=100m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod router-default-7d454f944c-s862z requesting resource cpu=100m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod insights-operator-7dfcfbc664-j8swm requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod openshift-kube-proxy-5hl4f requesting resource cpu=110m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod openshift-kube-proxy-b9xs9 requesting resource cpu=110m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod openshift-kube-proxy-rckf9 requesting resource cpu=110m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod kube-storage-version-migrator-operator-689b97b878-cqw2l requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod migrator-cfb6c8f7c-vx2tr requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod certified-operators-9jhxm requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod community-operators-fm8cx requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod marketplace-operator-769ddf547d-mm52g requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod redhat-marketplace-n9tcn requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod redhat-operators-pr47d requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod alertmanager-main-0 requesting resource cpu=9m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod alertmanager-main-1 requesting resource cpu=9m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod cluster-monitoring-operator-7df766d4db-cnq44 requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod kube-state-metrics-6ccfb58dc4-rgnnh requesting resource cpu=4m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod node-exporter-5vgf6 requesting resource cpu=9m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod node-exporter-r799t requesting resource cpu=9m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod node-exporter-s9sgk requesting resource cpu=9m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod openshift-state-metrics-7d7f8b4cf8-6kdhb requesting resource cpu=3m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod prometheus-adapter-7c58c77c58-2j47k requesting resource cpu=1m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod prometheus-adapter-7c58c77c58-xfd55 requesting resource cpu=1m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod prometheus-k8s-0 requesting resource cpu=75m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod prometheus-k8s-1 requesting resource cpu=75m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod prometheus-operator-5d978dbf9c-zvq6g requesting resource cpu=6m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod prometheus-operator-admission-webhook-5d679565bb-66wnf requesting resource cpu=5m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod prometheus-operator-admission-webhook-5d679565bb-sj42p requesting resource cpu=5m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod telemeter-client-55c7b57d84-vh47h requesting resource cpu=3m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod thanos-querier-6497df7b9-djrsc requesting resource cpu=15m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod thanos-querier-6497df7b9-pg2z9 requesting resource cpu=15m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod 
multus-26bfs requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod multus-additional-cni-plugins-9vls6 requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.749: INFO: Pod multus-additional-cni-plugins-rsr27 requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod multus-additional-cni-plugins-zpr6c requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.749: INFO: Pod multus-admission-controller-5894dd7875-bfbwp requesting resource cpu=20m on Node 10.138.75.116 -Jun 12 20:48:50.749: INFO: Pod multus-admission-controller-5894dd7875-xldt9 requesting resource cpu=20m on Node 10.138.75.70 -Jun 12 20:48:50.750: INFO: Pod multus-ln9rr requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod multus-q452d requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.750: INFO: Pod network-metrics-daemon-75s49 requesting resource cpu=20m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod network-metrics-daemon-g9zzs requesting resource cpu=20m on Node 10.138.75.70 -Jun 12 20:48:50.750: INFO: Pod network-metrics-daemon-vx56x requesting resource cpu=20m on Node 10.138.75.112 -Jun 12 20:48:50.750: INFO: Pod network-check-source-7f6b75fdb6-8882l requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod network-check-target-kjfll requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod network-check-target-l622r requesting resource cpu=10m on Node 10.138.75.70 -Jun 12 20:48:50.750: INFO: Pod network-check-target-lfvfw requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.750: INFO: Pod network-operator-5498bf7dc6-xv8r2 requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.750: INFO: Pod catalog-operator-874999f59-jggx9 requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod olm-operator-bdbf4b468-8vj6q requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod package-server-manager-5b897cb946-pz59r requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod packageserver-7f8bd8c95b-2zntg requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod packageserver-7f8bd8c95b-fgfhz requesting resource cpu=10m on Node 10.138.75.112 -Jun 12 20:48:50.750: INFO: Pod metrics-78c5579cb7-nlfqq requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod push-gateway-85f6799b47-cgtdt requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod service-ca-operator-86d6dcd567-8jc2t requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod service-ca-7c79786568-vhxsl requesting resource cpu=10m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod sonobuoy requesting resource cpu=0m on Node 10.138.75.70 -Jun 12 20:48:50.750: INFO: Pod sonobuoy-e2e-job-9876719f3d1644bf requesting resource cpu=0m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-4dn8s requesting resource cpu=0m on Node 10.138.75.70 -Jun 12 20:48:50.750: INFO: Pod sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-nbw64 requesting resource cpu=0m on Node 10.138.75.116 -Jun 12 20:48:50.750: INFO: Pod sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 requesting resource cpu=0m on Node 10.138.75.112 -Jun 12 20:48:50.750: INFO: Pod tigera-operator-5b48cf996b-z7p6p requesting resource cpu=100m on Node 10.138.75.116 -STEP: Starting Pods to consume most of the 
cluster CPU. 06/12/23 20:48:50.75 -Jun 12 20:48:50.750: INFO: Creating a pod which consumes cpu=1862m on Node 10.138.75.112 -Jun 12 20:48:50.772: INFO: Creating a pod which consumes cpu=1939m on Node 10.138.75.116 -Jun 12 20:48:50.793: INFO: Creating a pod which consumes cpu=1941m on Node 10.138.75.70 -Jun 12 20:48:50.821: INFO: Waiting up to 5m0s for pod "filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d" in namespace "sched-pred-9679" to be "running" -Jun 12 20:48:50.843: INFO: Pod "filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.641344ms -Jun 12 20:48:52.856: INFO: Pod "filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034825724s -Jun 12 20:48:54.857: INFO: Pod "filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d": Phase="Running", Reason="", readiness=true. Elapsed: 4.035298099s -Jun 12 20:48:54.857: INFO: Pod "filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d" satisfied condition "running" -Jun 12 20:48:54.857: INFO: Waiting up to 5m0s for pod "filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa" in namespace "sched-pred-9679" to be "running" -Jun 12 20:48:54.867: INFO: Pod "filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa": Phase="Running", Reason="", readiness=true. Elapsed: 9.514513ms -Jun 12 20:48:54.867: INFO: Pod "filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa" satisfied condition "running" -Jun 12 20:48:54.867: INFO: Waiting up to 5m0s for pod "filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917" in namespace "sched-pred-9679" to be "running" -Jun 12 20:48:54.894: INFO: Pod "filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917": Phase="Running", Reason="", readiness=true. Elapsed: 27.199081ms -Jun 12 20:48:54.894: INFO: Pod "filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917" satisfied condition "running" -STEP: Creating another pod that requires unavailable amount of CPU. 
06/12/23 20:48:54.894 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa.1768046df765a1a0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9679/filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa to 10.138.75.116] 06/12/23 20:48:54.907 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa.1768046e3ad52946], Reason = [AddedInterface], Message = [Add eth0 [172.30.185.115/32] from k8s-pod-network] 06/12/23 20:48:54.908 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa.1768046e4f3ef96a], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 06/12/23 20:48:54.908 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa.1768046e70743832], Reason = [Created], Message = [Created container filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa] 06/12/23 20:48:54.908 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa.1768046e7457e0bb], Reason = [Started], Message = [Started container filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa] 06/12/23 20:48:54.909 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917.1768046df8bac5c2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9679/filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917 to 10.138.75.70] 06/12/23 20:48:54.909 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917.1768046e3b22ce9c], Reason = [AddedInterface], Message = [Add eth0 [172.30.224.48/32] from k8s-pod-network] 06/12/23 20:48:54.909 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917.1768046e5855bc2d], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 06/12/23 20:48:54.91 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917.1768046e6bb431a6], Reason = [Created], Message = [Created container filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917] 06/12/23 20:48:54.91 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917.1768046e6f1d133b], Reason = [Started], Message = [Started container filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917] 06/12/23 20:48:54.91 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d.1768046df635161f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9679/filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d to 10.138.75.112] 06/12/23 20:48:54.911 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d.1768046e3d077cb0], Reason = [AddedInterface], Message = [Add eth0 [172.30.161.119/32] from k8s-pod-network] 06/12/23 20:48:54.911 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d.1768046e5751ada0], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 06/12/23 20:48:54.911 -STEP: Considering event: -Type = [Normal], Name = [filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d.1768046e6d51d85f], Reason = [Created], Message = [Created container filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d] 06/12/23 20:48:54.912 -STEP: Considering event: -Type = [Normal], Name = 
[filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d.1768046e70feba97], Reason = [Started], Message = [Started container filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d] 06/12/23 20:48:54.912 -STEP: Considering event: -Type = [Warning], Name = [additional-pod.1768046eedd7ad21], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient cpu. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..] 06/12/23 20:48:54.944 -STEP: removing the label node off the node 10.138.75.112 06/12/23 20:48:55.945 -STEP: verifying the node doesn't have the label node 06/12/23 20:48:55.985 -STEP: removing the label node off the node 10.138.75.116 06/12/23 20:48:56.038 -STEP: verifying the node doesn't have the label node 06/12/23 20:48:56.103 -STEP: removing the label node off the node 10.138.75.70 06/12/23 20:48:56.153 -STEP: verifying the node doesn't have the label node 06/12/23 20:48:56.207 -[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] +[It] should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/apimachinery/resource_quota.go:803 +STEP: Creating a ResourceQuota with best effort scope 07/27/23 01:41:58.213 +STEP: Ensuring ResourceQuota status is calculated 07/27/23 01:41:58.231 +STEP: Creating a ResourceQuota with not best effort scope 07/27/23 01:42:00.24 +STEP: Ensuring ResourceQuota status is calculated 07/27/23 01:42:00.254 +STEP: Creating a best-effort pod 07/27/23 01:42:02.287 +STEP: Ensuring resource quota with best effort scope captures the pod usage 07/27/23 01:42:02.355 +STEP: Ensuring resource quota with not best effort ignored the pod usage 07/27/23 01:42:04.368 +STEP: Deleting the pod 07/27/23 01:42:06.378 +STEP: Ensuring resource quota status released the pod usage 07/27/23 01:42:06.432 +STEP: Creating a not best-effort pod 07/27/23 01:42:08.441 +STEP: Ensuring resource quota with not best effort scope captures the pod usage 07/27/23 01:42:08.468 +STEP: Ensuring resource quota with best effort scope ignored the pod usage 07/27/23 01:42:10.48 +STEP: Deleting the pod 07/27/23 01:42:12.489 +STEP: Ensuring resource quota status released the pod usage 07/27/23 01:42:12.55 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 20:48:56.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:88 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +Jul 27 01:42:14.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "sched-pred-9679" for this suite. 06/12/23 20:48:56.251 +STEP: Destroying namespace "resourcequota-7643" for this suite. 
07/27/23 01:42:14.573 ------------------------------ -• [SLOW TEST] [6.287 seconds] -[sig-scheduling] SchedulerPredicates [Serial] -test/e2e/scheduling/framework.go:40 - validates resource limits of pods that are allowed to run [Conformance] - test/e2e/scheduling/predicates.go:331 +• [SLOW TEST] [16.527 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/apimachinery/resource_quota.go:803 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:48:49.987 - Jun 12 20:48:49.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sched-pred 06/12/23 20:48:49.989 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:48:50.036 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:48:50.059 - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + STEP: Creating a kubernetes client 07/27/23 01:41:58.086 + Jul 27 01:41:58.086: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 01:41:58.087 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:41:58.144 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:41:58.197 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:97 - Jun 12 20:48:50.105: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready - Jun 12 20:48:50.164: INFO: Waiting for terminating namespaces to be deleted... 
- Jun 12 20:48:50.192: INFO: - Logging pods the apiserver thinks is on node 10.138.75.112 before test - Jun 12 20:48:50.251: INFO: calico-node-b9sdb from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.251: INFO: Container calico-node ready: true, restart count 0 - Jun 12 20:48:50.252: INFO: calico-typha-74d94b74f5-dc6td from calico-system started at 2023-06-12 17:53:09 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.252: INFO: Container calico-typha ready: true, restart count 0 - Jun 12 20:48:50.252: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.252: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 - Jun 12 20:48:50.252: INFO: ibm-keepalived-watcher-5hc6v from kube-system started at 2023-06-12 17:40:13 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.252: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 20:48:50.252: INFO: ibm-master-proxy-static-10.138.75.112 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.252: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 20:48:50.252: INFO: Container pause ready: true, restart count 0 - Jun 12 20:48:50.252: INFO: ibmcloud-block-storage-driver-5zqmj from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.253: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 20:48:50.253: INFO: tuned-phslc from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.253: INFO: Container tuned ready: true, restart count 0 - Jun 12 20:48:50.253: INFO: csi-snapshot-controller-7f8879b9ff-p456r from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.253: INFO: Container snapshot-controller ready: true, restart count 0 - Jun 12 20:48:50.253: INFO: csi-snapshot-webhook-7bd9594b6d-bp5dr from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.253: INFO: Container webhook ready: true, restart count 0 - Jun 12 20:48:50.253: INFO: console-5bf97c7949-w5sn5 from openshift-console started at 2023-06-12 18:01:02 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.253: INFO: Container console ready: true, restart count 0 - Jun 12 20:48:50.253: INFO: downloads-8b57f44bb-55ss5 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.253: INFO: Container download-server ready: true, restart count 0 - Jun 12 20:48:50.253: INFO: dns-default-hpnqj from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.254: INFO: Container dns ready: true, restart count 0 - Jun 12 20:48:50.254: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.254: INFO: node-resolver-5st6j from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.254: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 20:48:50.254: INFO: image-registry-6c79bcf5c4-p7ss4 from openshift-image-registry started at 2023-06-12 18:00:30 +0000 UTC (1 container statuses recorded) 
- Jun 12 20:48:50.254: INFO: Container registry ready: true, restart count 0 - Jun 12 20:48:50.254: INFO: node-ca-qm7sb from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.254: INFO: Container node-ca ready: true, restart count 0 - Jun 12 20:48:50.254: INFO: ingress-canary-5qpcw from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.254: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 20:48:50.254: INFO: router-default-7d454f944c-62qgz from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.254: INFO: Container router ready: true, restart count 0 - Jun 12 20:48:50.255: INFO: openshift-kube-proxy-b9xs9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.255: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 20:48:50.255: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.255: INFO: migrator-cfb6c8f7c-vx2tr from openshift-kube-storage-version-migrator started at 2023-06-12 17:55:28 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.255: INFO: Container migrator ready: true, restart count 0 - Jun 12 20:48:50.255: INFO: community-operators-fm8cx from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.255: INFO: Container registry-server ready: true, restart count 0 - Jun 12 20:48:50.255: INFO: redhat-operators-pr47d from openshift-marketplace started at 2023-06-12 19:05:36 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.255: INFO: Container registry-server ready: true, restart count 0 - Jun 12 20:48:50.255: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-06-12 18:01:06 +0000 UTC (6 container statuses recorded) - Jun 12 20:48:50.255: INFO: Container alertmanager ready: true, restart count 1 - Jun 12 20:48:50.255: INFO: Container alertmanager-proxy ready: true, restart count 0 - Jun 12 20:48:50.255: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 20:48:50.255: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.256: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 - Jun 12 20:48:50.256: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 20:48:50.256: INFO: kube-state-metrics-6ccfb58dc4-rgnnh from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) - Jun 12 20:48:50.256: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 - Jun 12 20:48:50.256: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 - Jun 12 20:48:50.256: INFO: Container kube-state-metrics ready: true, restart count 0 - Jun 12 20:48:50.256: INFO: node-exporter-r799t from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.256: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.256: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 20:48:50.256: INFO: prometheus-adapter-7c58c77c58-xfd55 from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.257: INFO: Container prometheus-adapter ready: true, restart count 0 - Jun 12 20:48:50.257: INFO: prometheus-k8s-0 from openshift-monitoring started 
at 2023-06-12 18:01:32 +0000 UTC (6 container statuses recorded) - Jun 12 20:48:50.257: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 20:48:50.257: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.257: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 - Jun 12 20:48:50.257: INFO: Container prometheus ready: true, restart count 0 - Jun 12 20:48:50.257: INFO: Container prometheus-proxy ready: true, restart count 0 - Jun 12 20:48:50.257: INFO: Container thanos-sidecar ready: true, restart count 0 - Jun 12 20:48:50.258: INFO: prometheus-operator-admission-webhook-5d679565bb-66wnf from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.258: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 - Jun 12 20:48:50.258: INFO: thanos-querier-6497df7b9-djrsc from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) - Jun 12 20:48:50.258: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.258: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 - Jun 12 20:48:50.258: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 - Jun 12 20:48:50.258: INFO: Container oauth-proxy ready: true, restart count 0 - Jun 12 20:48:50.258: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 20:48:50.258: INFO: Container thanos-query ready: true, restart count 0 - Jun 12 20:48:50.258: INFO: multus-additional-cni-plugins-zpr6c from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.259: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 20:48:50.259: INFO: multus-q452d from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.259: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 20:48:50.259: INFO: network-metrics-daemon-vx56x from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.259: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.259: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 20:48:50.259: INFO: network-check-target-lfvfw from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.259: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 20:48:50.259: INFO: network-operator-5498bf7dc6-xv8r2 from openshift-network-operator started at 2023-06-12 17:47:21 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.259: INFO: Container network-operator ready: true, restart count 1 - Jun 12 20:48:50.260: INFO: packageserver-7f8bd8c95b-fgfhz from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.260: INFO: Container packageserver ready: true, restart count 0 - Jun 12 20:48:50.260: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.260: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 20:48:50.260: INFO: Container systemd-logs ready: true, restart count 0 - Jun 12 20:48:50.260: INFO: - Logging pods the apiserver thinks is on node 10.138.75.116 before test - 
Jun 12 20:48:50.333: INFO: calico-kube-controllers-58944988fc-kv6pq from calico-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container calico-kube-controllers ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: calico-node-nhd4m from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container calico-node ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: ibm-file-plugin-5f8cc7b66-hc7b9 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container ibm-file-plugin-container ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: ibm-keepalived-watcher-zp24l from kube-system started at 2023-06-12 17:40:01 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: ibm-master-proxy-static-10.138.75.116 from kube-system started at 2023-06-12 17:39:58 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: Container pause ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: ibm-storage-watcher-f4db746b4-mlm76 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container ibm-storage-watcher-container ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: ibmcloud-block-storage-driver-4wh25 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: ibmcloud-block-storage-plugin-5f85bc9665-2ltn5 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container ibmcloud-block-storage-plugin-container ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: vpn-7bc564c55c-htxd6 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container vpn ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: cluster-node-tuning-operator-5f6cff5c99-z22gd from openshift-cluster-node-tuning-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: tuned-44pqh from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container tuned ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: cluster-samples-operator-597884bb5d-bv9cn from openshift-cluster-samples-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container cluster-samples-operator ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: cluster-storage-operator-75bb97486-7xrgf from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container cluster-storage-operator ready: true, restart count 1 - Jun 12 20:48:50.333: INFO: csi-snapshot-controller-operator-69df8b995f-flpdz from 
openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: console-operator-747447cc44-5hk9p from openshift-console-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container console-operator ready: true, restart count 1 - Jun 12 20:48:50.333: INFO: Container conversion-webhook-server ready: true, restart count 2 - Jun 12 20:48:50.333: INFO: console-5bf97c7949-22prk from openshift-console started at 2023-06-12 18:01:30 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container console ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: dns-operator-65c495d75-cd4fc from openshift-dns-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container dns-operator ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: dns-default-cw4pt from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container dns ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: node-resolver-8mss5 from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: cluster-image-registry-operator-f9c46b94f-swtmm from openshift-image-registry started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container cluster-image-registry-operator ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: node-ca-5cs7d from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container node-ca ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: registry-pvc-permissions-j28ls from openshift-image-registry started at 2023-06-12 18:00:38 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container pvc-permissions ready: false, restart count 0 - Jun 12 20:48:50.333: INFO: ingress-canary-9xbwx from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: ingress-operator-57d9f78b9c-59cl8 from openshift-ingress-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container ingress-operator ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.333: INFO: insights-operator-7dfcfbc664-j8swm from openshift-insights started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.333: INFO: Container insights-operator ready: true, restart count 1 - Jun 12 20:48:50.333: INFO: openshift-kube-proxy-5hl4f from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: 
kube-storage-version-migrator-operator-689b97b878-cqw2l from openshift-kube-storage-version-migrator-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 - Jun 12 20:48:50.334: INFO: marketplace-operator-769ddf547d-mm52g from openshift-marketplace started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container marketplace-operator ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: cluster-monitoring-operator-7df766d4db-cnq44 from openshift-monitoring started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container cluster-monitoring-operator ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: node-exporter-s9sgk from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: multus-additional-cni-plugins-rsr27 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: multus-admission-controller-5894dd7875-bfbwp from openshift-multus started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: Container multus-admission-controller ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: multus-ln9rr from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: network-metrics-daemon-75s49 from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: network-check-source-7f6b75fdb6-8882l from openshift-network-diagnostics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container check-endpoints ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: network-check-target-kjfll from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: catalog-operator-874999f59-jggx9 from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container catalog-operator ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: collect-profiles-28110015-d4v2k from openshift-operator-lifecycle-manager started at 2023-06-12 20:15:00 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container collect-profiles ready: false, restart count 0 - Jun 12 20:48:50.334: INFO: collect-profiles-28110030-fzbkf from openshift-operator-lifecycle-manager started at 2023-06-12 20:30:00 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container 
collect-profiles ready: false, restart count 0 - Jun 12 20:48:50.334: INFO: collect-profiles-28110045-fcbk8 from openshift-operator-lifecycle-manager started at 2023-06-12 20:45:00 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container collect-profiles ready: false, restart count 0 - Jun 12 20:48:50.334: INFO: olm-operator-bdbf4b468-8vj6q from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container olm-operator ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: package-server-manager-5b897cb946-pz59r from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container package-server-manager ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: packageserver-7f8bd8c95b-2zntg from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container packageserver ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: metrics-78c5579cb7-nlfqq from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container metrics ready: true, restart count 3 - Jun 12 20:48:50.334: INFO: push-gateway-85f6799b47-cgtdt from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container push-gateway ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: service-ca-operator-86d6dcd567-8jc2t from openshift-service-ca-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container service-ca-operator ready: true, restart count 1 - Jun 12 20:48:50.334: INFO: service-ca-7c79786568-vhxsl from openshift-service-ca started at 2023-06-12 17:55:23 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container service-ca-controller ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: sonobuoy-e2e-job-9876719f3d1644bf from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container e2e ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-nbw64 from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: Container systemd-logs ready: true, restart count 0 - Jun 12 20:48:50.334: INFO: tigera-operator-5b48cf996b-z7p6p from tigera-operator started at 2023-06-12 17:40:11 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.334: INFO: Container tigera-operator ready: true, restart count 7 - Jun 12 20:48:50.334: INFO: - Logging pods the apiserver thinks is on node 10.138.75.70 before test - Jun 12 20:48:50.392: INFO: calico-node-v822j from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container calico-node ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: calico-typha-74d94b74f5-db4zz from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container calico-typha ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: 
annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7 from downward-api-855 started at 2023-06-12 20:48:35 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container client-container ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-9m2wx from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: ibm-keepalived-watcher-nl9l9 from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: ibm-master-proxy-static-10.138.75.70 from kube-system started at 2023-06-12 17:40:17 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: Container pause ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: ibmcloud-block-storage-driver-jl8fq from kube-system started at 2023-06-12 17:40:28 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: tuned-dmlsr from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container tuned ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: csi-snapshot-controller-7f8879b9ff-lhkmp from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container snapshot-controller ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: csi-snapshot-webhook-7bd9594b6d-9f476 from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container webhook ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: downloads-8b57f44bb-f7r76 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container download-server ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: dns-default-5d2sp from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container dns ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: node-resolver-lf2bx from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: node-ca-mwjbd from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container node-ca ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: ingress-canary-xwc5b from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: router-default-7d454f944c-s862z from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container router ready: true, restart count 0 - Jun 12 
20:48:50.392: INFO: openshift-kube-proxy-rckf9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: certified-operators-9jhxm from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container registry-server ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: redhat-marketplace-n9tcn from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container registry-server ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-06-12 18:01:41 +0000 UTC (6 container statuses recorded) - Jun 12 20:48:50.392: INFO: Container alertmanager ready: true, restart count 1 - Jun 12 20:48:50.392: INFO: Container alertmanager-proxy ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 20:48:50.392: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: node-exporter-5vgf6 from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: openshift-state-metrics-7d7f8b4cf8-6kdhb from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container openshift-state-metrics ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: prometheus-adapter-7c58c77c58-2j47k from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container prometheus-adapter ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-06-12 18:01:12 +0000 UTC (6 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container prometheus ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container prometheus-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container thanos-sidecar ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: prometheus-operator-5d978dbf9c-zvq6g from openshift-monitoring started at 2023-06-12 17:59:19 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container prometheus-operator ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: prometheus-operator-admission-webhook-5d679565bb-sj42p from openshift-monitoring 
started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: telemeter-client-55c7b57d84-vh47h from openshift-monitoring started at 2023-06-12 17:59:37 +0000 UTC (3 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container reload ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container telemeter-client ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: thanos-querier-6497df7b9-pg2z9 from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container oauth-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container thanos-query ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: multus-26bfs from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: multus-additional-cni-plugins-9vls6 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: multus-admission-controller-5894dd7875-xldt9 from openshift-multus started at 2023-06-12 17:58:44 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container multus-admission-controller ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: network-metrics-daemon-g9zzs from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: network-check-target-l622r from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: sonobuoy from sonobuoy started at 2023-06-12 20:38:54 +0000 UTC (1 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container kube-sonobuoy ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-4dn8s from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) - Jun 12 20:48:50.393: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 20:48:50.393: INFO: Container systemd-logs ready: true, restart count 0 - [It] validates resource limits of pods that are allowed to run [Conformance] - test/e2e/scheduling/predicates.go:331 - STEP: verifying the node has the label node 10.138.75.112 06/12/23 20:48:50.523 - STEP: verifying the node has the label node 10.138.75.116 06/12/23 20:48:50.59 - STEP: verifying the node has the label node 
10.138.75.70 06/12/23 20:48:50.646 - Jun 12 20:48:50.748: INFO: Pod calico-kube-controllers-58944988fc-kv6pq requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod calico-node-b9sdb requesting resource cpu=250m on Node 10.138.75.112 - Jun 12 20:48:50.748: INFO: Pod calico-node-nhd4m requesting resource cpu=250m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod calico-node-v822j requesting resource cpu=250m on Node 10.138.75.70 - Jun 12 20:48:50.748: INFO: Pod calico-typha-74d94b74f5-db4zz requesting resource cpu=250m on Node 10.138.75.70 - Jun 12 20:48:50.748: INFO: Pod calico-typha-74d94b74f5-dc6td requesting resource cpu=250m on Node 10.138.75.112 - Jun 12 20:48:50.748: INFO: Pod annotationupdatee37280f9-0343-40b6-be8a-9549e589c5f7 requesting resource cpu=0m on Node 10.138.75.70 - Jun 12 20:48:50.748: INFO: Pod ibm-cloud-provider-ip-168-1-198-197-75947fc545-9m2wx requesting resource cpu=5m on Node 10.138.75.70 - Jun 12 20:48:50.748: INFO: Pod ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 requesting resource cpu=5m on Node 10.138.75.112 - Jun 12 20:48:50.748: INFO: Pod ibm-file-plugin-5f8cc7b66-hc7b9 requesting resource cpu=50m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod ibm-keepalived-watcher-5hc6v requesting resource cpu=5m on Node 10.138.75.112 - Jun 12 20:48:50.748: INFO: Pod ibm-keepalived-watcher-nl9l9 requesting resource cpu=5m on Node 10.138.75.70 - Jun 12 20:48:50.748: INFO: Pod ibm-keepalived-watcher-zp24l requesting resource cpu=5m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod ibm-master-proxy-static-10.138.75.112 requesting resource cpu=26m on Node 10.138.75.112 - Jun 12 20:48:50.748: INFO: Pod ibm-master-proxy-static-10.138.75.116 requesting resource cpu=26m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod ibm-master-proxy-static-10.138.75.70 requesting resource cpu=26m on Node 10.138.75.70 - Jun 12 20:48:50.748: INFO: Pod ibm-storage-watcher-f4db746b4-mlm76 requesting resource cpu=50m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod ibmcloud-block-storage-driver-4wh25 requesting resource cpu=50m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod ibmcloud-block-storage-driver-5zqmj requesting resource cpu=50m on Node 10.138.75.112 - Jun 12 20:48:50.748: INFO: Pod ibmcloud-block-storage-driver-jl8fq requesting resource cpu=50m on Node 10.138.75.70 - Jun 12 20:48:50.748: INFO: Pod ibmcloud-block-storage-plugin-5f85bc9665-2ltn5 requesting resource cpu=50m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod vpn-7bc564c55c-htxd6 requesting resource cpu=5m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod cluster-node-tuning-operator-5f6cff5c99-z22gd requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod tuned-44pqh requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.748: INFO: Pod tuned-dmlsr requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.748: INFO: Pod tuned-phslc requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod cluster-samples-operator-597884bb5d-bv9cn requesting resource cpu=20m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod cluster-storage-operator-75bb97486-7xrgf requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod csi-snapshot-controller-7f8879b9ff-lhkmp requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod csi-snapshot-controller-7f8879b9ff-p456r requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod 
csi-snapshot-controller-operator-69df8b995f-flpdz requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod csi-snapshot-webhook-7bd9594b6d-9f476 requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod csi-snapshot-webhook-7bd9594b6d-bp5dr requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod console-operator-747447cc44-5hk9p requesting resource cpu=20m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod console-5bf97c7949-22prk requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod console-5bf97c7949-w5sn5 requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod downloads-8b57f44bb-55ss5 requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod downloads-8b57f44bb-f7r76 requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod dns-operator-65c495d75-cd4fc requesting resource cpu=20m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod dns-default-5d2sp requesting resource cpu=60m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod dns-default-cw4pt requesting resource cpu=60m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod dns-default-hpnqj requesting resource cpu=60m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod node-resolver-5st6j requesting resource cpu=5m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod node-resolver-8mss5 requesting resource cpu=5m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod node-resolver-lf2bx requesting resource cpu=5m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod cluster-image-registry-operator-f9c46b94f-swtmm requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod image-registry-6c79bcf5c4-p7ss4 requesting resource cpu=100m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod node-ca-5cs7d requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod node-ca-mwjbd requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod node-ca-qm7sb requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod ingress-canary-5qpcw requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod ingress-canary-9xbwx requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod ingress-canary-xwc5b requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod ingress-operator-57d9f78b9c-59cl8 requesting resource cpu=20m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod router-default-7d454f944c-62qgz requesting resource cpu=100m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod router-default-7d454f944c-s862z requesting resource cpu=100m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod insights-operator-7dfcfbc664-j8swm requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod openshift-kube-proxy-5hl4f requesting resource cpu=110m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod openshift-kube-proxy-b9xs9 requesting resource cpu=110m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod openshift-kube-proxy-rckf9 requesting resource cpu=110m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod kube-storage-version-migrator-operator-689b97b878-cqw2l requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod migrator-cfb6c8f7c-vx2tr requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod 
certified-operators-9jhxm requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod community-operators-fm8cx requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod marketplace-operator-769ddf547d-mm52g requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod redhat-marketplace-n9tcn requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod redhat-operators-pr47d requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod alertmanager-main-0 requesting resource cpu=9m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod alertmanager-main-1 requesting resource cpu=9m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod cluster-monitoring-operator-7df766d4db-cnq44 requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod kube-state-metrics-6ccfb58dc4-rgnnh requesting resource cpu=4m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod node-exporter-5vgf6 requesting resource cpu=9m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod node-exporter-r799t requesting resource cpu=9m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod node-exporter-s9sgk requesting resource cpu=9m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod openshift-state-metrics-7d7f8b4cf8-6kdhb requesting resource cpu=3m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod prometheus-adapter-7c58c77c58-2j47k requesting resource cpu=1m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod prometheus-adapter-7c58c77c58-xfd55 requesting resource cpu=1m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod prometheus-k8s-0 requesting resource cpu=75m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod prometheus-k8s-1 requesting resource cpu=75m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod prometheus-operator-5d978dbf9c-zvq6g requesting resource cpu=6m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod prometheus-operator-admission-webhook-5d679565bb-66wnf requesting resource cpu=5m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod prometheus-operator-admission-webhook-5d679565bb-sj42p requesting resource cpu=5m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod telemeter-client-55c7b57d84-vh47h requesting resource cpu=3m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod thanos-querier-6497df7b9-djrsc requesting resource cpu=15m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod thanos-querier-6497df7b9-pg2z9 requesting resource cpu=15m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod multus-26bfs requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod multus-additional-cni-plugins-9vls6 requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.749: INFO: Pod multus-additional-cni-plugins-rsr27 requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod multus-additional-cni-plugins-zpr6c requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.749: INFO: Pod multus-admission-controller-5894dd7875-bfbwp requesting resource cpu=20m on Node 10.138.75.116 - Jun 12 20:48:50.749: INFO: Pod multus-admission-controller-5894dd7875-xldt9 requesting resource cpu=20m on Node 10.138.75.70 - Jun 12 20:48:50.750: INFO: Pod multus-ln9rr requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod multus-q452d requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.750: INFO: Pod network-metrics-daemon-75s49 requesting resource cpu=20m on 
Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod network-metrics-daemon-g9zzs requesting resource cpu=20m on Node 10.138.75.70 - Jun 12 20:48:50.750: INFO: Pod network-metrics-daemon-vx56x requesting resource cpu=20m on Node 10.138.75.112 - Jun 12 20:48:50.750: INFO: Pod network-check-source-7f6b75fdb6-8882l requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod network-check-target-kjfll requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod network-check-target-l622r requesting resource cpu=10m on Node 10.138.75.70 - Jun 12 20:48:50.750: INFO: Pod network-check-target-lfvfw requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.750: INFO: Pod network-operator-5498bf7dc6-xv8r2 requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.750: INFO: Pod catalog-operator-874999f59-jggx9 requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod olm-operator-bdbf4b468-8vj6q requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod package-server-manager-5b897cb946-pz59r requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod packageserver-7f8bd8c95b-2zntg requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod packageserver-7f8bd8c95b-fgfhz requesting resource cpu=10m on Node 10.138.75.112 - Jun 12 20:48:50.750: INFO: Pod metrics-78c5579cb7-nlfqq requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod push-gateway-85f6799b47-cgtdt requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod service-ca-operator-86d6dcd567-8jc2t requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod service-ca-7c79786568-vhxsl requesting resource cpu=10m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod sonobuoy requesting resource cpu=0m on Node 10.138.75.70 - Jun 12 20:48:50.750: INFO: Pod sonobuoy-e2e-job-9876719f3d1644bf requesting resource cpu=0m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-4dn8s requesting resource cpu=0m on Node 10.138.75.70 - Jun 12 20:48:50.750: INFO: Pod sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-nbw64 requesting resource cpu=0m on Node 10.138.75.116 - Jun 12 20:48:50.750: INFO: Pod sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 requesting resource cpu=0m on Node 10.138.75.112 - Jun 12 20:48:50.750: INFO: Pod tigera-operator-5b48cf996b-z7p6p requesting resource cpu=100m on Node 10.138.75.116 - STEP: Starting Pods to consume most of the cluster CPU. 06/12/23 20:48:50.75 - Jun 12 20:48:50.750: INFO: Creating a pod which consumes cpu=1862m on Node 10.138.75.112 - Jun 12 20:48:50.772: INFO: Creating a pod which consumes cpu=1939m on Node 10.138.75.116 - Jun 12 20:48:50.793: INFO: Creating a pod which consumes cpu=1941m on Node 10.138.75.70 - Jun 12 20:48:50.821: INFO: Waiting up to 5m0s for pod "filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d" in namespace "sched-pred-9679" to be "running" - Jun 12 20:48:50.843: INFO: Pod "filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.641344ms - Jun 12 20:48:52.856: INFO: Pod "filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034825724s - Jun 12 20:48:54.857: INFO: Pod "filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.035298099s - Jun 12 20:48:54.857: INFO: Pod "filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d" satisfied condition "running" - Jun 12 20:48:54.857: INFO: Waiting up to 5m0s for pod "filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa" in namespace "sched-pred-9679" to be "running" - Jun 12 20:48:54.867: INFO: Pod "filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa": Phase="Running", Reason="", readiness=true. Elapsed: 9.514513ms - Jun 12 20:48:54.867: INFO: Pod "filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa" satisfied condition "running" - Jun 12 20:48:54.867: INFO: Waiting up to 5m0s for pod "filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917" in namespace "sched-pred-9679" to be "running" - Jun 12 20:48:54.894: INFO: Pod "filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917": Phase="Running", Reason="", readiness=true. Elapsed: 27.199081ms - Jun 12 20:48:54.894: INFO: Pod "filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917" satisfied condition "running" - STEP: Creating another pod that requires unavailable amount of CPU. 06/12/23 20:48:54.894 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa.1768046df765a1a0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9679/filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa to 10.138.75.116] 06/12/23 20:48:54.907 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa.1768046e3ad52946], Reason = [AddedInterface], Message = [Add eth0 [172.30.185.115/32] from k8s-pod-network] 06/12/23 20:48:54.908 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa.1768046e4f3ef96a], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 06/12/23 20:48:54.908 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa.1768046e70743832], Reason = [Created], Message = [Created container filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa] 06/12/23 20:48:54.908 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa.1768046e7457e0bb], Reason = [Started], Message = [Started container filler-pod-2797e78f-b5c4-459b-bea5-06c9162568fa] 06/12/23 20:48:54.909 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917.1768046df8bac5c2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9679/filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917 to 10.138.75.70] 06/12/23 20:48:54.909 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917.1768046e3b22ce9c], Reason = [AddedInterface], Message = [Add eth0 [172.30.224.48/32] from k8s-pod-network] 06/12/23 20:48:54.909 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917.1768046e5855bc2d], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 06/12/23 20:48:54.91 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917.1768046e6bb431a6], Reason = [Created], Message = [Created container filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917] 06/12/23 20:48:54.91 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917.1768046e6f1d133b], Reason = [Started], Message = [Started container filler-pod-b9f9f977-5b79-429a-a2ff-4ce9bc353917] 06/12/23 20:48:54.91 - STEP: Considering 
event: - Type = [Normal], Name = [filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d.1768046df635161f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9679/filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d to 10.138.75.112] 06/12/23 20:48:54.911 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d.1768046e3d077cb0], Reason = [AddedInterface], Message = [Add eth0 [172.30.161.119/32] from k8s-pod-network] 06/12/23 20:48:54.911 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d.1768046e5751ada0], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 06/12/23 20:48:54.911 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d.1768046e6d51d85f], Reason = [Created], Message = [Created container filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d] 06/12/23 20:48:54.912 - STEP: Considering event: - Type = [Normal], Name = [filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d.1768046e70feba97], Reason = [Started], Message = [Started container filler-pod-f16d4b1d-39ab-4904-98df-1c93e81e0f1d] 06/12/23 20:48:54.912 - STEP: Considering event: - Type = [Warning], Name = [additional-pod.1768046eedd7ad21], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient cpu. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..] 06/12/23 20:48:54.944 - STEP: removing the label node off the node 10.138.75.112 06/12/23 20:48:55.945 - STEP: verifying the node doesn't have the label node 06/12/23 20:48:55.985 - STEP: removing the label node off the node 10.138.75.116 06/12/23 20:48:56.038 - STEP: verifying the node doesn't have the label node 06/12/23 20:48:56.103 - STEP: removing the label node off the node 10.138.75.70 06/12/23 20:48:56.153 - STEP: verifying the node doesn't have the label node 06/12/23 20:48:56.207 - [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + [It] should verify ResourceQuota with best effort scope. 
[Conformance]
+ test/e2e/apimachinery/resource_quota.go:803
+ STEP: Creating a ResourceQuota with best effort scope 07/27/23 01:41:58.213
+ STEP: Ensuring ResourceQuota status is calculated 07/27/23 01:41:58.231
+ STEP: Creating a ResourceQuota with not best effort scope 07/27/23 01:42:00.24
+ STEP: Ensuring ResourceQuota status is calculated 07/27/23 01:42:00.254
+ STEP: Creating a best-effort pod 07/27/23 01:42:02.287
+ STEP: Ensuring resource quota with best effort scope captures the pod usage 07/27/23 01:42:02.355
+ STEP: Ensuring resource quota with not best effort ignored the pod usage 07/27/23 01:42:04.368
+ STEP: Deleting the pod 07/27/23 01:42:06.378
+ STEP: Ensuring resource quota status released the pod usage 07/27/23 01:42:06.432
+ STEP: Creating a not best-effort pod 07/27/23 01:42:08.441
+ STEP: Ensuring resource quota with not best effort scope captures the pod usage 07/27/23 01:42:08.468
+ STEP: Ensuring resource quota with best effort scope ignored the pod usage 07/27/23 01:42:10.48
+ STEP: Deleting the pod 07/27/23 01:42:12.489
+ STEP: Ensuring resource quota status released the pod usage 07/27/23 01:42:12.55
+ [AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/node/init/init.go:32
- Jun 12 20:48:56.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
- [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
- test/e2e/scheduling/predicates.go:88
- [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
+ Jul 27 01:42:14.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
  test/e2e/framework/metrics/init/init.go:33
- [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
+ [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
  dump namespaces | framework.go:196
- [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
+ [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
  tear down framework | framework.go:193
- STEP: Destroying namespace "sched-pred-9679" for this suite. 06/12/23 20:48:56.251
+ STEP: Destroying namespace "resourcequota-7643" for this suite. 
07/27/23 01:42:14.573 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSS ------------------------------ -[sig-cli] Kubectl client Kubectl expose - should create services for rc [Conformance] - test/e2e/kubectl/kubectl.go:1415 -[BeforeEach] [sig-cli] Kubectl client +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:48:56.275 -Jun 12 20:48:56.276: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 20:48:56.284 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:48:56.343 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:48:56.398 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 01:42:14.613 +Jul 27 01:42:14.613: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename custom-resource-definition 07/27/23 01:42:14.614 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:42:14.656 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:42:14.666 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should create services for rc [Conformance] - test/e2e/kubectl/kubectl.go:1415 -STEP: creating Agnhost RC 06/12/23 20:48:56.413 -Jun 12 20:48:56.413: INFO: namespace kubectl-8977 -Jun 12 20:48:56.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-8977 create -f -' -Jun 12 20:49:00.304: INFO: stderr: "" -Jun 12 20:49:00.304: INFO: stdout: "replicationcontroller/agnhost-primary created\n" -STEP: Waiting for Agnhost primary to start. 06/12/23 20:49:00.304 -Jun 12 20:49:01.313: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 20:49:01.313: INFO: Found 0 / 1 -Jun 12 20:49:02.332: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 20:49:02.332: INFO: Found 0 / 1 -Jun 12 20:49:03.316: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 20:49:03.316: INFO: Found 0 / 1 -Jun 12 20:49:04.331: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 20:49:04.331: INFO: Found 0 / 1 -Jun 12 20:49:05.435: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 20:49:05.435: INFO: Found 1 / 1 -Jun 12 20:49:05.435: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 -Jun 12 20:49:05.447: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 20:49:05.447: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
-Jun 12 20:49:05.447: INFO: wait on agnhost-primary startup in kubectl-8977 -Jun 12 20:49:05.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-8977 logs agnhost-primary-s564c agnhost-primary' -Jun 12 20:49:06.126: INFO: stderr: "" -Jun 12 20:49:06.126: INFO: stdout: "Paused\n" -STEP: exposing RC 06/12/23 20:49:06.126 -Jun 12 20:49:06.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-8977 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' -Jun 12 20:49:06.474: INFO: stderr: "" -Jun 12 20:49:06.474: INFO: stdout: "service/rm2 exposed\n" -Jun 12 20:49:06.487: INFO: Service rm2 in namespace kubectl-8977 found. -STEP: exposing service 06/12/23 20:49:08.509 -Jun 12 20:49:08.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-8977 expose service rm2 --name=rm3 --port=2345 --target-port=6379' -Jun 12 20:49:08.734: INFO: stderr: "" -Jun 12 20:49:08.734: INFO: stdout: "service/rm3 exposed\n" -Jun 12 20:49:08.746: INFO: Service rm3 in namespace kubectl-8977 found. -[AfterEach] [sig-cli] Kubectl client +[It] custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 +Jul 27 01:42:14.674: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 20:49:10.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 01:42:17.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-8977" for this suite. 06/12/23 20:49:10.782 +STEP: Destroying namespace "custom-resource-definition-3307" for this suite. 
07/27/23 01:42:17.574 ------------------------------ -• [SLOW TEST] [14.525 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl expose - test/e2e/kubectl/kubectl.go:1409 - should create services for rc [Conformance] - test/e2e/kubectl/kubectl.go:1415 +• [2.985 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:48:56.275 - Jun 12 20:48:56.276: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 20:48:56.284 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:48:56.343 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:48:56.398 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 01:42:14.613 + Jul 27 01:42:14.613: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename custom-resource-definition 07/27/23 01:42:14.614 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:42:14.656 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:42:14.666 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should create services for rc [Conformance] - test/e2e/kubectl/kubectl.go:1415 - STEP: creating Agnhost RC 06/12/23 20:48:56.413 - Jun 12 20:48:56.413: INFO: namespace kubectl-8977 - Jun 12 20:48:56.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-8977 create -f -' - Jun 12 20:49:00.304: INFO: stderr: "" - Jun 12 20:49:00.304: INFO: stdout: "replicationcontroller/agnhost-primary created\n" - STEP: Waiting for Agnhost primary to start. 06/12/23 20:49:00.304 - Jun 12 20:49:01.313: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 20:49:01.313: INFO: Found 0 / 1 - Jun 12 20:49:02.332: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 20:49:02.332: INFO: Found 0 / 1 - Jun 12 20:49:03.316: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 20:49:03.316: INFO: Found 0 / 1 - Jun 12 20:49:04.331: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 20:49:04.331: INFO: Found 0 / 1 - Jun 12 20:49:05.435: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 20:49:05.435: INFO: Found 1 / 1 - Jun 12 20:49:05.435: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 - Jun 12 20:49:05.447: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 20:49:05.447: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
- Jun 12 20:49:05.447: INFO: wait on agnhost-primary startup in kubectl-8977 - Jun 12 20:49:05.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-8977 logs agnhost-primary-s564c agnhost-primary' - Jun 12 20:49:06.126: INFO: stderr: "" - Jun 12 20:49:06.126: INFO: stdout: "Paused\n" - STEP: exposing RC 06/12/23 20:49:06.126 - Jun 12 20:49:06.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-8977 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' - Jun 12 20:49:06.474: INFO: stderr: "" - Jun 12 20:49:06.474: INFO: stdout: "service/rm2 exposed\n" - Jun 12 20:49:06.487: INFO: Service rm2 in namespace kubectl-8977 found. - STEP: exposing service 06/12/23 20:49:08.509 - Jun 12 20:49:08.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-8977 expose service rm2 --name=rm3 --port=2345 --target-port=6379' - Jun 12 20:49:08.734: INFO: stderr: "" - Jun 12 20:49:08.734: INFO: stdout: "service/rm3 exposed\n" - Jun 12 20:49:08.746: INFO: Service rm3 in namespace kubectl-8977 found. - [AfterEach] [sig-cli] Kubectl client + [It] custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 + Jul 27 01:42:14.674: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 20:49:10.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 01:42:17.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-8977" for this suite. 06/12/23 20:49:10.782 + STEP: Destroying namespace "custom-resource-definition-3307" for this suite. 
07/27/23 01:42:17.574 << End Captured GinkgoWriter Output ------------------------------ -SSS +SSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Projected downwardAPI - should provide container's memory limit [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:207 -[BeforeEach] [sig-storage] Projected downwardAPI +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:236 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:49:10.801 -Jun 12 20:49:10.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 20:49:10.803 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:49:10.843 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:49:10.854 -[BeforeEach] [sig-storage] Projected downwardAPI +STEP: Creating a kubernetes client 07/27/23 01:42:17.599 +Jul 27 01:42:17.599: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 01:42:17.601 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:42:17.644 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:42:17.687 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 -[It] should provide container's memory limit [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:207 -STEP: Creating a pod to test downward API volume plugin 06/12/23 20:49:10.875 -Jun 12 20:49:10.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e" in namespace "projected-6392" to be "Succeeded or Failed" -Jun 12 20:49:10.928: INFO: Pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.244185ms -Jun 12 20:49:12.996: INFO: Pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086215087s -Jun 12 20:49:14.965: INFO: Pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055878256s -Jun 12 20:49:16.960: INFO: Pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.05077865s -STEP: Saw pod success 06/12/23 20:49:16.96 -Jun 12 20:49:16.961: INFO: Pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e" satisfied condition "Succeeded or Failed" -Jun 12 20:49:17.040: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e container client-container: -STEP: delete the pod 06/12/23 20:49:17.125 -Jun 12 20:49:17.176: INFO: Waiting for pod downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e to disappear -Jun 12 20:49:17.214: INFO: Pod downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e no longer exists -[AfterEach] [sig-storage] Projected downwardAPI +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:236 +Jul 27 01:42:17.696: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 07/27/23 01:42:25.222 +Jul 27 01:42:25.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-8731 --namespace=crd-publish-openapi-8731 create -f -' +Jul 27 01:42:29.291: INFO: stderr: "" +Jul 27 01:42:29.291: INFO: stdout: "e2e-test-crd-publish-openapi-8614-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Jul 27 01:42:29.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-8731 --namespace=crd-publish-openapi-8731 delete e2e-test-crd-publish-openapi-8614-crds test-cr' +Jul 27 01:42:29.468: INFO: stderr: "" +Jul 27 01:42:29.468: INFO: stdout: "e2e-test-crd-publish-openapi-8614-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Jul 27 01:42:29.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-8731 --namespace=crd-publish-openapi-8731 apply -f -' +Jul 27 01:42:30.589: INFO: stderr: "" +Jul 27 01:42:30.589: INFO: stdout: "e2e-test-crd-publish-openapi-8614-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Jul 27 01:42:30.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-8731 --namespace=crd-publish-openapi-8731 delete e2e-test-crd-publish-openapi-8614-crds test-cr' +Jul 27 01:42:30.822: INFO: stderr: "" +Jul 27 01:42:30.822: INFO: stdout: "e2e-test-crd-publish-openapi-8614-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR 07/27/23 01:42:30.822 +Jul 27 01:42:30.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-8731 explain e2e-test-crd-publish-openapi-8614-crds' +Jul 27 01:42:31.557: INFO: stderr: "" +Jul 27 01:42:31.557: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8614-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 20:49:17.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +Jul 27 01:42:39.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "projected-6392" for this suite. 06/12/23 20:49:17.261 +STEP: Destroying namespace "crd-publish-openapi-8731" for this suite. 07/27/23 01:42:39.97 ------------------------------ -• [SLOW TEST] [6.499 seconds] -[sig-storage] Projected downwardAPI -test/e2e/common/storage/framework.go:23 - should provide container's memory limit [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:207 +• [SLOW TEST] [22.394 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:236 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:49:10.801 - Jun 12 20:49:10.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 20:49:10.803 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:49:10.843 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:49:10.854 - [BeforeEach] [sig-storage] Projected downwardAPI + STEP: Creating a kubernetes client 07/27/23 01:42:17.599 + Jul 27 01:42:17.599: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 01:42:17.601 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:42:17.644 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:42:17.687 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 - [It] should provide container's memory limit [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:207 - STEP: Creating a pod to test downward API volume plugin 
06/12/23 20:49:10.875 - Jun 12 20:49:10.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e" in namespace "projected-6392" to be "Succeeded or Failed" - Jun 12 20:49:10.928: INFO: Pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.244185ms - Jun 12 20:49:12.996: INFO: Pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086215087s - Jun 12 20:49:14.965: INFO: Pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055878256s - Jun 12 20:49:16.960: INFO: Pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05077865s - STEP: Saw pod success 06/12/23 20:49:16.96 - Jun 12 20:49:16.961: INFO: Pod "downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e" satisfied condition "Succeeded or Failed" - Jun 12 20:49:17.040: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e container client-container: - STEP: delete the pod 06/12/23 20:49:17.125 - Jun 12 20:49:17.176: INFO: Waiting for pod downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e to disappear - Jun 12 20:49:17.214: INFO: Pod downwardapi-volume-0ad42d6d-0b1c-4c73-8e98-62bb64f5315e no longer exists - [AfterEach] [sig-storage] Projected downwardAPI + [It] works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:236 + Jul 27 01:42:17.696: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 07/27/23 01:42:25.222 + Jul 27 01:42:25.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-8731 --namespace=crd-publish-openapi-8731 create -f -' + Jul 27 01:42:29.291: INFO: stderr: "" + Jul 27 01:42:29.291: INFO: stdout: "e2e-test-crd-publish-openapi-8614-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" + Jul 27 01:42:29.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-8731 --namespace=crd-publish-openapi-8731 delete e2e-test-crd-publish-openapi-8614-crds test-cr' + Jul 27 01:42:29.468: INFO: stderr: "" + Jul 27 01:42:29.468: INFO: stdout: "e2e-test-crd-publish-openapi-8614-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" + Jul 27 01:42:29.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-8731 --namespace=crd-publish-openapi-8731 apply -f -' + Jul 27 01:42:30.589: INFO: stderr: "" + Jul 27 01:42:30.589: INFO: stdout: "e2e-test-crd-publish-openapi-8614-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" + Jul 27 01:42:30.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-8731 --namespace=crd-publish-openapi-8731 delete e2e-test-crd-publish-openapi-8614-crds test-cr' + Jul 27 01:42:30.822: INFO: stderr: "" + Jul 27 01:42:30.822: INFO: stdout: "e2e-test-crd-publish-openapi-8614-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR 07/27/23 01:42:30.822 + Jul 27 01:42:30.822: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-8731 explain e2e-test-crd-publish-openapi-8614-crds' + Jul 27 01:42:31.557: INFO: stderr: "" + Jul 27 01:42:31.557: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8614-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 20:49:17.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + Jul 27 01:42:39.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "projected-6392" for this suite. 06/12/23 20:49:17.261 + STEP: Destroying namespace "crd-publish-openapi-8731" for this suite. 
07/27/23 01:42:39.97 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSS ------------------------------ -[sig-node] Probing container - should have monotonically increasing restart count [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:199 -[BeforeEach] [sig-node] Probing container +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:49:17.51 -Jun 12 20:49:17.510: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-probe 06/12/23 20:49:17.513 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:49:17.727 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:49:17.784 -[BeforeEach] [sig-node] Probing container +STEP: Creating a kubernetes client 07/27/23 01:42:39.994 +Jul 27 01:42:39.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 01:42:39.995 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:42:40.033 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:42:40.041 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 -[It] should have monotonically increasing restart count [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:199 -STEP: Creating pod liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a in namespace container-probe-1452 06/12/23 20:49:17.916 -Jun 12 20:49:17.944: INFO: Waiting up to 5m0s for pod "liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a" in namespace "container-probe-1452" to be "not pending" -Jun 12 20:49:17.957: INFO: Pod "liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.075686ms -Jun 12 20:49:19.967: INFO: Pod "liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022301614s -Jun 12 20:49:21.967: INFO: Pod "liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.022681847s -Jun 12 20:49:21.967: INFO: Pod "liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a" satisfied condition "not pending" -Jun 12 20:49:21.967: INFO: Started pod liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a in namespace container-probe-1452 -STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 20:49:21.968 -Jun 12 20:49:21.981: INFO: Initial restart count of pod liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is 0 -Jun 12 20:49:40.096: INFO: Restart count of pod container-probe-1452/liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is now 1 (18.114742145s elapsed) -Jun 12 20:50:00.298: INFO: Restart count of pod container-probe-1452/liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is now 2 (38.317375334s elapsed) -Jun 12 20:50:20.435: INFO: Restart count of pod container-probe-1452/liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is now 3 (58.454399693s elapsed) -Jun 12 20:50:40.596: INFO: Restart count of pod container-probe-1452/liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is now 4 (1m18.614783239s elapsed) -Jun 12 20:51:43.112: INFO: Restart count of pod container-probe-1452/liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is now 5 (2m21.131191734s elapsed) -STEP: deleting the pod 06/12/23 20:51:43.112 -[AfterEach] [sig-node] Probing container - test/e2e/framework/node/init/init.go:32 -Jun 12 20:51:43.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Probing container - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Probing container +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 01:42:40.13 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 01:42:40.547 +STEP: Deploying the webhook pod 07/27/23 01:42:40.58 +STEP: Wait for the deployment to be ready 07/27/23 01:42:40.618 +Jul 27 01:42:40.634: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +Jul 27 01:42:42.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 42, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 42, 40, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 42, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 42, 40, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service 07/27/23 01:42:44.67 +STEP: Verifying the service has paired with the endpoint 07/27/23 01:42:44.707 +Jul 27 01:42:45.707: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 +STEP: Listing all of the created validation webhooks 07/27/23 01:42:45.909 +STEP: Creating a configMap that does not comply to the validation webhook rules 07/27/23 01:42:45.985 +STEP: Deleting the collection of validation webhooks 07/27/23 01:42:46.036 +STEP: Creating a configMap 
that does not comply to the validation webhook rules 07/27/23 01:42:46.229 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jul 27 01:42:46.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "container-probe-1452" for this suite. 06/12/23 20:51:43.163 +STEP: Destroying namespace "webhook-1255" for this suite. 07/27/23 01:42:46.396 +STEP: Destroying namespace "webhook-1255-markers" for this suite. 07/27/23 01:42:46.423 ------------------------------ -• [SLOW TEST] [145.671 seconds] -[sig-node] Probing container -test/e2e/common/node/framework.go:23 - should have monotonically increasing restart count [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:199 +• [SLOW TEST] [6.455 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Probing container + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:49:17.51 - Jun 12 20:49:17.510: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-probe 06/12/23 20:49:17.513 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:49:17.727 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:49:17.784 - [BeforeEach] [sig-node] Probing container + STEP: Creating a kubernetes client 07/27/23 01:42:39.994 + Jul 27 01:42:39.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 01:42:39.995 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:42:40.033 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:42:40.041 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 - [It] should have monotonically increasing restart count [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:199 - STEP: Creating pod liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a in namespace container-probe-1452 06/12/23 20:49:17.916 - Jun 12 20:49:17.944: INFO: Waiting up to 5m0s for pod "liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a" in namespace "container-probe-1452" to be "not pending" - Jun 12 20:49:17.957: INFO: Pod "liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.075686ms - Jun 12 20:49:19.967: INFO: Pod "liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022301614s - Jun 12 20:49:21.967: INFO: Pod "liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a": Phase="Running", Reason="", readiness=true. Elapsed: 4.022681847s - Jun 12 20:49:21.967: INFO: Pod "liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a" satisfied condition "not pending" - Jun 12 20:49:21.967: INFO: Started pod liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a in namespace container-probe-1452 - STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 20:49:21.968 - Jun 12 20:49:21.981: INFO: Initial restart count of pod liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is 0 - Jun 12 20:49:40.096: INFO: Restart count of pod container-probe-1452/liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is now 1 (18.114742145s elapsed) - Jun 12 20:50:00.298: INFO: Restart count of pod container-probe-1452/liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is now 2 (38.317375334s elapsed) - Jun 12 20:50:20.435: INFO: Restart count of pod container-probe-1452/liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is now 3 (58.454399693s elapsed) - Jun 12 20:50:40.596: INFO: Restart count of pod container-probe-1452/liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is now 4 (1m18.614783239s elapsed) - Jun 12 20:51:43.112: INFO: Restart count of pod container-probe-1452/liveness-00ca1c1d-603f-4cba-b60a-44c9f2b9c16a is now 5 (2m21.131191734s elapsed) - STEP: deleting the pod 06/12/23 20:51:43.112 - [AfterEach] [sig-node] Probing container + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 01:42:40.13 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 01:42:40.547 + STEP: Deploying the webhook pod 07/27/23 01:42:40.58 + STEP: Wait for the deployment to be ready 07/27/23 01:42:40.618 + Jul 27 01:42:40.634: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + Jul 27 01:42:42.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 42, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 42, 40, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 42, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 42, 40, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} + STEP: Deploying the webhook service 07/27/23 01:42:44.67 + STEP: Verifying the service has paired with the endpoint 07/27/23 01:42:44.707 + Jul 27 01:42:45.707: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 + STEP: Listing all of the created validation webhooks 07/27/23 01:42:45.909 + STEP: Creating a configMap that does not comply to the validation webhook rules 07/27/23 01:42:45.985 + STEP: Deleting the collection of validation webhooks 07/27/23 01:42:46.036 + STEP: Creating a configMap that does not comply to the validation webhook rules 07/27/23 01:42:46.229 + [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 20:51:43.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Probing container + Jul 27 01:42:46.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "container-probe-1452" for this suite. 06/12/23 20:51:43.163 + STEP: Destroying namespace "webhook-1255" for this suite. 07/27/23 01:42:46.396 + STEP: Destroying namespace "webhook-1255-markers" for this suite. 07/27/23 01:42:46.423 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-scheduling] SchedulerPredicates [Serial] - validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] - test/e2e/scheduling/predicates.go:704 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 +[BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:51:43.187 -Jun 12 20:51:43.187: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sched-pred 06/12/23 20:51:43.19 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:51:43.234 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:51:43.255 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] +STEP: Creating a kubernetes client 07/27/23 01:42:46.449 +Jul 27 01:42:46.449: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replicaset 07/27/23 01:42:46.45 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:42:46.49 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:42:46.5 +[BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:97 -Jun 12 20:51:43.279: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready -Jun 12 20:51:43.313: INFO: Waiting for terminating namespaces to be deleted... 
-Jun 12 20:51:43.342: INFO: -Logging pods the apiserver thinks is on node 10.138.75.112 before test -Jun 12 20:51:43.424: INFO: calico-node-b9sdb from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container calico-node ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: calico-typha-74d94b74f5-dc6td from calico-system started at 2023-06-12 17:53:09 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container calico-typha ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: ibm-keepalived-watcher-5hc6v from kube-system started at 2023-06-12 17:40:13 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: ibm-master-proxy-static-10.138.75.112 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: Container pause ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: ibmcloud-block-storage-driver-5zqmj from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: tuned-phslc from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container tuned ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: csi-snapshot-controller-7f8879b9ff-p456r from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container snapshot-controller ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: csi-snapshot-webhook-7bd9594b6d-bp5dr from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container webhook ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: console-5bf97c7949-w5sn5 from openshift-console started at 2023-06-12 18:01:02 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container console ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: downloads-8b57f44bb-55ss5 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container download-server ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: dns-default-hpnqj from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container dns ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: node-resolver-5st6j from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: image-registry-6c79bcf5c4-p7ss4 from openshift-image-registry started at 2023-06-12 18:00:30 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: 
Container registry ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: node-ca-qm7sb from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container node-ca ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: ingress-canary-5qpcw from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: router-default-7d454f944c-62qgz from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container router ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: openshift-kube-proxy-b9xs9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: migrator-cfb6c8f7c-vx2tr from openshift-kube-storage-version-migrator started at 2023-06-12 17:55:28 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container migrator ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: community-operators-fm8cx from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container registry-server ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: redhat-operators-pr47d from openshift-marketplace started at 2023-06-12 19:05:36 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container registry-server ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-06-12 18:01:06 +0000 UTC (6 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container alertmanager ready: true, restart count 1 -Jun 12 20:51:43.425: INFO: Container alertmanager-proxy ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: kube-state-metrics-6ccfb58dc4-rgnnh from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: Container kube-state-metrics ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: node-exporter-r799t from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 20:51:43.425: INFO: prometheus-adapter-7c58c77c58-xfd55 from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.425: INFO: Container prometheus-adapter ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: prometheus-k8s-0 from openshift-monitoring started at 2023-06-12 18:01:32 +0000 UTC (6 container statuses 
recorded) -Jun 12 20:51:43.426: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container prometheus ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container prometheus-proxy ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container thanos-sidecar ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: prometheus-operator-admission-webhook-5d679565bb-66wnf from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.426: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: thanos-querier-6497df7b9-djrsc from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) -Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container oauth-proxy ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container thanos-query ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: multus-additional-cni-plugins-zpr6c from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.426: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: multus-q452d from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.426: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: network-metrics-daemon-vx56x from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: network-check-target-lfvfw from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.426: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: network-operator-5498bf7dc6-xv8r2 from openshift-network-operator started at 2023-06-12 17:47:21 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.426: INFO: Container network-operator ready: true, restart count 1 -Jun 12 20:51:43.426: INFO: packageserver-7f8bd8c95b-fgfhz from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.426: INFO: Container packageserver ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.426: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: Container systemd-logs ready: true, restart count 0 -Jun 12 20:51:43.426: INFO: -Logging pods the apiserver thinks is on node 10.138.75.116 before test -Jun 12 20:51:43.508: INFO: calico-kube-controllers-58944988fc-kv6pq from calico-system 
started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container calico-kube-controllers ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: calico-node-nhd4m from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container calico-node ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: ibm-file-plugin-5f8cc7b66-hc7b9 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container ibm-file-plugin-container ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: ibm-keepalived-watcher-zp24l from kube-system started at 2023-06-12 17:40:01 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: ibm-master-proxy-static-10.138.75.116 from kube-system started at 2023-06-12 17:39:58 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: Container pause ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: ibm-storage-watcher-f4db746b4-mlm76 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container ibm-storage-watcher-container ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: ibmcloud-block-storage-driver-4wh25 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: ibmcloud-block-storage-plugin-5f85bc9665-2ltn5 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container ibmcloud-block-storage-plugin-container ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: vpn-7bc564c55c-htxd6 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container vpn ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: cluster-node-tuning-operator-5f6cff5c99-z22gd from openshift-cluster-node-tuning-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: tuned-44pqh from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container tuned ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: cluster-samples-operator-597884bb5d-bv9cn from openshift-cluster-samples-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container cluster-samples-operator ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: cluster-storage-operator-75bb97486-7xrgf from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container cluster-storage-operator ready: true, restart count 1 -Jun 12 20:51:43.508: INFO: csi-snapshot-controller-operator-69df8b995f-flpdz from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: 
Container csi-snapshot-controller-operator ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: console-operator-747447cc44-5hk9p from openshift-console-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container console-operator ready: true, restart count 1 -Jun 12 20:51:43.508: INFO: Container conversion-webhook-server ready: true, restart count 2 -Jun 12 20:51:43.508: INFO: console-5bf97c7949-22prk from openshift-console started at 2023-06-12 18:01:30 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container console ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: dns-operator-65c495d75-cd4fc from openshift-dns-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container dns-operator ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: dns-default-cw4pt from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container dns ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: node-resolver-8mss5 from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: cluster-image-registry-operator-f9c46b94f-swtmm from openshift-image-registry started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container cluster-image-registry-operator ready: true, restart count 0 -Jun 12 20:51:43.508: INFO: node-ca-5cs7d from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.508: INFO: Container node-ca ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: registry-pvc-permissions-j28ls from openshift-image-registry started at 2023-06-12 18:00:38 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container pvc-permissions ready: false, restart count 0 -Jun 12 20:51:43.509: INFO: ingress-canary-9xbwx from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: ingress-operator-57d9f78b9c-59cl8 from openshift-ingress-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container ingress-operator ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: insights-operator-7dfcfbc664-j8swm from openshift-insights started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container insights-operator ready: true, restart count 1 -Jun 12 20:51:43.509: INFO: openshift-kube-proxy-5hl4f from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: kube-storage-version-migrator-operator-689b97b878-cqw2l from openshift-kube-storage-version-migrator-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container 
statuses recorded) -Jun 12 20:51:43.509: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 -Jun 12 20:51:43.509: INFO: marketplace-operator-769ddf547d-mm52g from openshift-marketplace started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container marketplace-operator ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: cluster-monitoring-operator-7df766d4db-cnq44 from openshift-monitoring started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container cluster-monitoring-operator ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: node-exporter-s9sgk from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: multus-additional-cni-plugins-rsr27 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: multus-admission-controller-5894dd7875-bfbwp from openshift-multus started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: Container multus-admission-controller ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: multus-ln9rr from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: network-metrics-daemon-75s49 from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: network-check-source-7f6b75fdb6-8882l from openshift-network-diagnostics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container check-endpoints ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: network-check-target-kjfll from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: catalog-operator-874999f59-jggx9 from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container catalog-operator ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: collect-profiles-28110015-d4v2k from openshift-operator-lifecycle-manager started at 2023-06-12 20:15:00 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 20:51:43.509: INFO: collect-profiles-28110030-fzbkf from openshift-operator-lifecycle-manager started at 2023-06-12 20:30:00 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 20:51:43.509: INFO: collect-profiles-28110045-fcbk8 from openshift-operator-lifecycle-manager started at 2023-06-12 20:45:00 +0000 UTC (1 
container statuses recorded) -Jun 12 20:51:43.509: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 20:51:43.509: INFO: olm-operator-bdbf4b468-8vj6q from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container olm-operator ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: package-server-manager-5b897cb946-pz59r from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container package-server-manager ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: packageserver-7f8bd8c95b-2zntg from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container packageserver ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: metrics-78c5579cb7-nlfqq from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container metrics ready: true, restart count 3 -Jun 12 20:51:43.509: INFO: push-gateway-85f6799b47-cgtdt from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container push-gateway ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: service-ca-operator-86d6dcd567-8jc2t from openshift-service-ca-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container service-ca-operator ready: true, restart count 1 -Jun 12 20:51:43.509: INFO: service-ca-7c79786568-vhxsl from openshift-service-ca started at 2023-06-12 17:55:23 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container service-ca-controller ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: sonobuoy-e2e-job-9876719f3d1644bf from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container e2e ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-nbw64 from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: Container systemd-logs ready: true, restart count 0 -Jun 12 20:51:43.509: INFO: tigera-operator-5b48cf996b-z7p6p from tigera-operator started at 2023-06-12 17:40:11 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.509: INFO: Container tigera-operator ready: true, restart count 7 -Jun 12 20:51:43.509: INFO: -Logging pods the apiserver thinks is on node 10.138.75.70 before test -Jun 12 20:51:43.555: INFO: calico-node-v822j from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container calico-node ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: calico-typha-74d94b74f5-db4zz from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container calico-typha ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-9m2wx from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart 
count 0 -Jun 12 20:51:43.555: INFO: ibm-keepalived-watcher-nl9l9 from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: ibm-master-proxy-static-10.138.75.70 from kube-system started at 2023-06-12 17:40:17 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: Container pause ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: ibmcloud-block-storage-driver-jl8fq from kube-system started at 2023-06-12 17:40:28 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: tuned-dmlsr from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container tuned ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: csi-snapshot-controller-7f8879b9ff-lhkmp from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container snapshot-controller ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: csi-snapshot-webhook-7bd9594b6d-9f476 from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container webhook ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: downloads-8b57f44bb-f7r76 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container download-server ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: dns-default-5d2sp from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container dns ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: node-resolver-lf2bx from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: node-ca-mwjbd from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container node-ca ready: true, restart count 0 -Jun 12 20:51:43.555: INFO: ingress-canary-xwc5b from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.555: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: router-default-7d454f944c-s862z from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container router ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: openshift-kube-proxy-rckf9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: certified-operators-9jhxm from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container registry-server ready: true, 
restart count 0 -Jun 12 20:51:43.556: INFO: redhat-marketplace-n9tcn from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container registry-server ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-06-12 18:01:41 +0000 UTC (6 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container alertmanager ready: true, restart count 1 -Jun 12 20:51:43.556: INFO: Container alertmanager-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: node-exporter-5vgf6 from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: openshift-state-metrics-7d7f8b4cf8-6kdhb from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container openshift-state-metrics ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: prometheus-adapter-7c58c77c58-2j47k from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container prometheus-adapter ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-06-12 18:01:12 +0000 UTC (6 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container prometheus ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container prometheus-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container thanos-sidecar ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: prometheus-operator-5d978dbf9c-zvq6g from openshift-monitoring started at 2023-06-12 17:59:19 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container prometheus-operator ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: prometheus-operator-admission-webhook-5d679565bb-sj42p from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: telemeter-client-55c7b57d84-vh47h from openshift-monitoring started at 2023-06-12 17:59:37 +0000 UTC (3 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container reload ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container telemeter-client ready: true, 
restart count 0 -Jun 12 20:51:43.556: INFO: thanos-querier-6497df7b9-pg2z9 from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container oauth-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container thanos-query ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: multus-26bfs from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: multus-additional-cni-plugins-9vls6 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: multus-admission-controller-5894dd7875-xldt9 from openshift-multus started at 2023-06-12 17:58:44 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container multus-admission-controller ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: network-metrics-daemon-g9zzs from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: network-check-target-l622r from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: sonobuoy from sonobuoy started at 2023-06-12 20:38:54 +0000 UTC (1 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container kube-sonobuoy ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-4dn8s from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) -Jun 12 20:51:43.556: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 20:51:43.556: INFO: Container systemd-logs ready: true, restart count 0 -[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] - test/e2e/scheduling/predicates.go:704 -STEP: Trying to launch a pod without a label to get a node which can launch it. 06/12/23 20:51:43.556 -Jun 12 20:51:43.581: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-4369" to be "running" -Jun 12 20:51:43.604: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 22.212636ms -Jun 12 20:51:45.616: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034157587s -Jun 12 20:51:47.615: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.033394331s -Jun 12 20:51:47.615: INFO: Pod "without-label" satisfied condition "running" -STEP: Explicitly delete pod here to free the resource it takes. 
06/12/23 20:51:47.624 -STEP: Trying to apply a random label on the found node. 06/12/23 20:51:47.666 -STEP: verifying the node has the label kubernetes.io/e2e-f0ac351b-7bc2-4bc9-8d4a-bffa9017a1c0 95 06/12/23 20:51:47.703 -STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 06/12/23 20:51:47.716 -Jun 12 20:51:47.735: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-4369" to be "not pending" -Jun 12 20:51:47.745: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.910878ms -Jun 12 20:51:49.756: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021517215s -Jun 12 20:51:51.756: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. Elapsed: 4.021502536s -Jun 12 20:51:51.756: INFO: Pod "pod4" satisfied condition "not pending" -STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.138.75.70 on the node which pod4 resides and expect not scheduled 06/12/23 20:51:51.756 -Jun 12 20:51:51.778: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-4369" to be "not pending" -Jun 12 20:51:51.790: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.781741ms -Jun 12 20:51:53.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021691582s -Jun 12 20:51:55.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023577371s -Jun 12 20:51:57.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024312266s -Jun 12 20:51:59.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022330411s -Jun 12 20:52:01.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022035455s -Jun 12 20:52:03.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025186659s -Jun 12 20:52:05.808: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.029480118s -Jun 12 20:52:07.846: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067374545s -Jun 12 20:52:09.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.025084116s -Jun 12 20:52:11.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.022753817s -Jun 12 20:52:13.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.023555268s -Jun 12 20:52:15.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.022755859s -Jun 12 20:52:17.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.021891812s -Jun 12 20:52:19.806: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.027279476s -Jun 12 20:52:21.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.022427454s -Jun 12 20:52:23.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.021623675s -Jun 12 20:52:25.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.024177413s -Jun 12 20:52:27.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.024115611s -Jun 12 20:52:29.815: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.036182591s -Jun 12 20:52:31.799: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.020464831s -Jun 12 20:52:33.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.022571293s -Jun 12 20:52:35.810: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.031367751s -Jun 12 20:52:37.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.021905778s -Jun 12 20:52:39.812: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.03317152s -Jun 12 20:52:41.805: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.026188727s -Jun 12 20:52:43.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.022571463s -Jun 12 20:52:45.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.022555575s -Jun 12 20:52:47.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.024141936s -Jun 12 20:52:49.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.022543435s -Jun 12 20:52:51.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.022649095s -Jun 12 20:52:53.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.02242785s -Jun 12 20:52:55.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.022143939s -Jun 12 20:52:57.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.023352187s -Jun 12 20:52:59.809: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.030652559s -Jun 12 20:53:01.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.022734904s -Jun 12 20:53:03.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.022584873s -Jun 12 20:53:05.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.023081524s -Jun 12 20:53:07.807: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.028616406s -Jun 12 20:53:09.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.023169706s -Jun 12 20:53:11.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.023234091s -Jun 12 20:53:13.829: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.050630462s -Jun 12 20:53:15.834: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.055851999s -Jun 12 20:53:17.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.021914015s -Jun 12 20:53:19.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.022309986s -Jun 12 20:53:21.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.024846362s -Jun 12 20:53:23.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.023478255s -Jun 12 20:53:25.799: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.020751578s -Jun 12 20:53:27.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.021938466s -Jun 12 20:53:29.816: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.037183547s -Jun 12 20:53:31.799: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.020277945s -Jun 12 20:53:33.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.021198221s -Jun 12 20:53:35.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.021258822s -Jun 12 20:53:37.830: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m46.051288999s -Jun 12 20:53:39.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.021632362s -Jun 12 20:53:41.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.024709508s -Jun 12 20:53:43.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.024786533s -Jun 12 20:53:45.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.024548202s -Jun 12 20:53:47.911: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.132366299s -Jun 12 20:53:49.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.02467715s -Jun 12 20:53:51.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.021780449s -Jun 12 20:53:53.812: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.033640752s -Jun 12 20:53:55.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.025077628s -Jun 12 20:53:57.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.0241899s -Jun 12 20:53:59.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.022228605s -Jun 12 20:54:01.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.022476433s -Jun 12 20:54:03.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.022278146s -Jun 12 20:54:05.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.021805772s -Jun 12 20:54:07.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.025477133s -Jun 12 20:54:09.812: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.033686892s -Jun 12 20:54:11.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.025070333s -Jun 12 20:54:13.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.021597075s -Jun 12 20:54:15.809: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.030585016s -Jun 12 20:54:17.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.022513391s -Jun 12 20:54:19.805: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.027033176s -Jun 12 20:54:21.814: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.035133158s -Jun 12 20:54:23.839: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.060109493s -Jun 12 20:54:25.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.025234711s -Jun 12 20:54:27.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.023325408s -Jun 12 20:54:29.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.022862338s -Jun 12 20:54:31.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.021774013s -Jun 12 20:54:33.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.02247688s -Jun 12 20:54:35.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.023164836s -Jun 12 20:54:37.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.025404028s -Jun 12 20:54:39.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.022692922s -Jun 12 20:54:41.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m50.022067315s -Jun 12 20:54:43.806: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.02719431s -Jun 12 20:54:45.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.022524828s -Jun 12 20:54:47.843: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.064678034s -Jun 12 20:54:49.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.021485582s -Jun 12 20:54:51.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.023594776s -Jun 12 20:54:53.809: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.030519316s -Jun 12 20:54:55.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.04531737s -Jun 12 20:54:57.805: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.026685401s -Jun 12 20:54:59.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.022679881s -Jun 12 20:55:01.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.022402171s -Jun 12 20:55:03.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.022179492s -Jun 12 20:55:05.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.021828121s -Jun 12 20:55:07.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.022518234s -Jun 12 20:55:09.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.022145077s -Jun 12 20:55:11.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.022240502s -Jun 12 20:55:13.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.022000902s -Jun 12 20:55:15.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.022524638s -Jun 12 20:55:17.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.023410452s -Jun 12 20:55:19.814: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.0352041s -Jun 12 20:55:21.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.022310702s -Jun 12 20:55:23.811: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.032509904s -Jun 12 20:55:25.817: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.038700656s -Jun 12 20:55:27.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.022051122s -Jun 12 20:55:29.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.024333702s -Jun 12 20:55:31.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.022299557s -Jun 12 20:55:33.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.022384032s -Jun 12 20:55:35.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.022090246s -Jun 12 20:55:37.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.047428762s -Jun 12 20:55:39.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.0218048s -Jun 12 20:55:41.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.022768054s -Jun 12 20:55:43.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.022584509s -Jun 12 20:55:45.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m54.022368975s -Jun 12 20:55:47.811: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.032529442s -Jun 12 20:55:49.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.021933647s -Jun 12 20:55:51.808: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.029166578s -Jun 12 20:55:53.812: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.033176265s -Jun 12 20:55:55.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.022859634s -Jun 12 20:55:57.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.022333358s -Jun 12 20:55:59.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.024231992s -Jun 12 20:56:01.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.024548828s -Jun 12 20:56:03.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.022400666s -Jun 12 20:56:05.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.022488173s -Jun 12 20:56:07.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.022677668s -Jun 12 20:56:09.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.021360106s -Jun 12 20:56:11.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.022480357s -Jun 12 20:56:13.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.02349503s -Jun 12 20:56:15.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.021840914s -Jun 12 20:56:17.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.022348386s -Jun 12 20:56:19.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.023400154s -Jun 12 20:56:21.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.023795191s -Jun 12 20:56:23.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.046491271s -Jun 12 20:56:25.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.023033481s -Jun 12 20:56:27.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.025880646s -Jun 12 20:56:29.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.021518342s -Jun 12 20:56:31.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.021683432s -Jun 12 20:56:33.798: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.019892911s -Jun 12 20:56:35.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.022413666s -Jun 12 20:56:37.832: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.053234621s -Jun 12 20:56:39.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.021551707s -Jun 12 20:56:41.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.022324962s -Jun 12 20:56:43.820: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.041267946s -Jun 12 20:56:45.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.022501965s -Jun 12 20:56:47.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.023436053s -Jun 12 20:56:49.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m58.023278407s -Jun 12 20:56:51.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.0221844s -Jun 12 20:56:51.810: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.031477257s -STEP: removing the label kubernetes.io/e2e-f0ac351b-7bc2-4bc9-8d4a-bffa9017a1c0 off the node 10.138.75.70 06/12/23 20:56:51.81 -STEP: verifying the node doesn't have the label kubernetes.io/e2e-f0ac351b-7bc2-4bc9-8d4a-bffa9017a1c0 06/12/23 20:56:51.848 -[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] +[It] Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota 07/27/23 01:42:46.508 +W0727 01:42:46.523993 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 01:42:46.532: INFO: Pod name sample-pod: Found 0 pods out of 1 +Jul 27 01:42:51.544: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 07/27/23 01:42:51.544 +STEP: getting scale subresource 07/27/23 01:42:51.544 +STEP: updating a scale subresource 07/27/23 01:42:51.554 +STEP: verifying the replicaset Spec.Replicas was modified 07/27/23 01:42:51.569 +STEP: Patch a scale subresource 07/27/23 01:42:51.577 +[AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 -Jun 12 20:56:51.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:88 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +Jul 27 01:42:51.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +[DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +[DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 -STEP: Destroying namespace "sched-pred-4369" for this suite. 06/12/23 20:56:51.874 +STEP: Destroying namespace "replicaset-6150" for this suite. 
07/27/23 01:42:51.668 ------------------------------ -• [SLOW TEST] [308.701 seconds] -[sig-scheduling] SchedulerPredicates [Serial] -test/e2e/scheduling/framework.go:40 - validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] - test/e2e/scheduling/predicates.go:704 +• [SLOW TEST] [5.268 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + [BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:51:43.187 - Jun 12 20:51:43.187: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sched-pred 06/12/23 20:51:43.19 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:51:43.234 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:51:43.255 - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + STEP: Creating a kubernetes client 07/27/23 01:42:46.449 + Jul 27 01:42:46.449: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replicaset 07/27/23 01:42:46.45 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:42:46.49 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:42:46.5 + [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:97 - Jun 12 20:51:43.279: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready - Jun 12 20:51:43.313: INFO: Waiting for terminating namespaces to be deleted... 
- Jun 12 20:51:43.342: INFO: - Logging pods the apiserver thinks is on node 10.138.75.112 before test - Jun 12 20:51:43.424: INFO: calico-node-b9sdb from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container calico-node ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: calico-typha-74d94b74f5-dc6td from calico-system started at 2023-06-12 17:53:09 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container calico-typha ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: ibm-keepalived-watcher-5hc6v from kube-system started at 2023-06-12 17:40:13 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: ibm-master-proxy-static-10.138.75.112 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: Container pause ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: ibmcloud-block-storage-driver-5zqmj from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: tuned-phslc from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container tuned ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: csi-snapshot-controller-7f8879b9ff-p456r from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container snapshot-controller ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: csi-snapshot-webhook-7bd9594b6d-bp5dr from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container webhook ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: console-5bf97c7949-w5sn5 from openshift-console started at 2023-06-12 18:01:02 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container console ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: downloads-8b57f44bb-55ss5 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container download-server ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: dns-default-hpnqj from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container dns ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: node-resolver-5st6j from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: image-registry-6c79bcf5c4-p7ss4 from openshift-image-registry started at 2023-06-12 18:00:30 +0000 UTC (1 container statuses recorded) 
- Jun 12 20:51:43.425: INFO: Container registry ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: node-ca-qm7sb from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container node-ca ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: ingress-canary-5qpcw from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: router-default-7d454f944c-62qgz from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container router ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: openshift-kube-proxy-b9xs9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: migrator-cfb6c8f7c-vx2tr from openshift-kube-storage-version-migrator started at 2023-06-12 17:55:28 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container migrator ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: community-operators-fm8cx from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container registry-server ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: redhat-operators-pr47d from openshift-marketplace started at 2023-06-12 19:05:36 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container registry-server ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-06-12 18:01:06 +0000 UTC (6 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container alertmanager ready: true, restart count 1 - Jun 12 20:51:43.425: INFO: Container alertmanager-proxy ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: kube-state-metrics-6ccfb58dc4-rgnnh from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: Container kube-state-metrics ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: node-exporter-r799t from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 20:51:43.425: INFO: prometheus-adapter-7c58c77c58-xfd55 from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.425: INFO: Container prometheus-adapter ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: prometheus-k8s-0 from openshift-monitoring started 
at 2023-06-12 18:01:32 +0000 UTC (6 container statuses recorded) - Jun 12 20:51:43.426: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container prometheus ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container prometheus-proxy ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container thanos-sidecar ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: prometheus-operator-admission-webhook-5d679565bb-66wnf from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.426: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: thanos-querier-6497df7b9-djrsc from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) - Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container oauth-proxy ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container thanos-query ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: multus-additional-cni-plugins-zpr6c from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.426: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: multus-q452d from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.426: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: network-metrics-daemon-vx56x from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.426: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: network-check-target-lfvfw from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.426: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: network-operator-5498bf7dc6-xv8r2 from openshift-network-operator started at 2023-06-12 17:47:21 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.426: INFO: Container network-operator ready: true, restart count 1 - Jun 12 20:51:43.426: INFO: packageserver-7f8bd8c95b-fgfhz from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.426: INFO: Container packageserver ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.426: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: Container systemd-logs ready: true, restart count 0 - Jun 12 20:51:43.426: INFO: - Logging pods the apiserver thinks is on node 10.138.75.116 before test - 
Jun 12 20:51:43.508: INFO: calico-kube-controllers-58944988fc-kv6pq from calico-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container calico-kube-controllers ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: calico-node-nhd4m from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container calico-node ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: ibm-file-plugin-5f8cc7b66-hc7b9 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container ibm-file-plugin-container ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: ibm-keepalived-watcher-zp24l from kube-system started at 2023-06-12 17:40:01 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: ibm-master-proxy-static-10.138.75.116 from kube-system started at 2023-06-12 17:39:58 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: Container pause ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: ibm-storage-watcher-f4db746b4-mlm76 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container ibm-storage-watcher-container ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: ibmcloud-block-storage-driver-4wh25 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: ibmcloud-block-storage-plugin-5f85bc9665-2ltn5 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container ibmcloud-block-storage-plugin-container ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: vpn-7bc564c55c-htxd6 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container vpn ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: cluster-node-tuning-operator-5f6cff5c99-z22gd from openshift-cluster-node-tuning-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: tuned-44pqh from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container tuned ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: cluster-samples-operator-597884bb5d-bv9cn from openshift-cluster-samples-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container cluster-samples-operator ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: cluster-storage-operator-75bb97486-7xrgf from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container cluster-storage-operator ready: true, restart count 1 - Jun 12 20:51:43.508: INFO: csi-snapshot-controller-operator-69df8b995f-flpdz from 
openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: console-operator-747447cc44-5hk9p from openshift-console-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container console-operator ready: true, restart count 1 - Jun 12 20:51:43.508: INFO: Container conversion-webhook-server ready: true, restart count 2 - Jun 12 20:51:43.508: INFO: console-5bf97c7949-22prk from openshift-console started at 2023-06-12 18:01:30 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container console ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: dns-operator-65c495d75-cd4fc from openshift-dns-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container dns-operator ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: dns-default-cw4pt from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container dns ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: node-resolver-8mss5 from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: cluster-image-registry-operator-f9c46b94f-swtmm from openshift-image-registry started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container cluster-image-registry-operator ready: true, restart count 0 - Jun 12 20:51:43.508: INFO: node-ca-5cs7d from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.508: INFO: Container node-ca ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: registry-pvc-permissions-j28ls from openshift-image-registry started at 2023-06-12 18:00:38 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container pvc-permissions ready: false, restart count 0 - Jun 12 20:51:43.509: INFO: ingress-canary-9xbwx from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: ingress-operator-57d9f78b9c-59cl8 from openshift-ingress-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container ingress-operator ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: insights-operator-7dfcfbc664-j8swm from openshift-insights started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container insights-operator ready: true, restart count 1 - Jun 12 20:51:43.509: INFO: openshift-kube-proxy-5hl4f from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: 
kube-storage-version-migrator-operator-689b97b878-cqw2l from openshift-kube-storage-version-migrator-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 - Jun 12 20:51:43.509: INFO: marketplace-operator-769ddf547d-mm52g from openshift-marketplace started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container marketplace-operator ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: cluster-monitoring-operator-7df766d4db-cnq44 from openshift-monitoring started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container cluster-monitoring-operator ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: node-exporter-s9sgk from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: multus-additional-cni-plugins-rsr27 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: multus-admission-controller-5894dd7875-bfbwp from openshift-multus started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: Container multus-admission-controller ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: multus-ln9rr from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: network-metrics-daemon-75s49 from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: network-check-source-7f6b75fdb6-8882l from openshift-network-diagnostics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container check-endpoints ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: network-check-target-kjfll from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: catalog-operator-874999f59-jggx9 from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container catalog-operator ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: collect-profiles-28110015-d4v2k from openshift-operator-lifecycle-manager started at 2023-06-12 20:15:00 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container collect-profiles ready: false, restart count 0 - Jun 12 20:51:43.509: INFO: collect-profiles-28110030-fzbkf from openshift-operator-lifecycle-manager started at 2023-06-12 20:30:00 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container 
collect-profiles ready: false, restart count 0 - Jun 12 20:51:43.509: INFO: collect-profiles-28110045-fcbk8 from openshift-operator-lifecycle-manager started at 2023-06-12 20:45:00 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container collect-profiles ready: false, restart count 0 - Jun 12 20:51:43.509: INFO: olm-operator-bdbf4b468-8vj6q from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container olm-operator ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: package-server-manager-5b897cb946-pz59r from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container package-server-manager ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: packageserver-7f8bd8c95b-2zntg from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container packageserver ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: metrics-78c5579cb7-nlfqq from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container metrics ready: true, restart count 3 - Jun 12 20:51:43.509: INFO: push-gateway-85f6799b47-cgtdt from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container push-gateway ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: service-ca-operator-86d6dcd567-8jc2t from openshift-service-ca-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container service-ca-operator ready: true, restart count 1 - Jun 12 20:51:43.509: INFO: service-ca-7c79786568-vhxsl from openshift-service-ca started at 2023-06-12 17:55:23 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container service-ca-controller ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: sonobuoy-e2e-job-9876719f3d1644bf from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container e2e ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-nbw64 from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: Container systemd-logs ready: true, restart count 0 - Jun 12 20:51:43.509: INFO: tigera-operator-5b48cf996b-z7p6p from tigera-operator started at 2023-06-12 17:40:11 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.509: INFO: Container tigera-operator ready: true, restart count 7 - Jun 12 20:51:43.509: INFO: - Logging pods the apiserver thinks is on node 10.138.75.70 before test - Jun 12 20:51:43.555: INFO: calico-node-v822j from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container calico-node ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: calico-typha-74d94b74f5-db4zz from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container calico-typha ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: 
ibm-cloud-provider-ip-168-1-198-197-75947fc545-9m2wx from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: ibm-keepalived-watcher-nl9l9 from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: ibm-master-proxy-static-10.138.75.70 from kube-system started at 2023-06-12 17:40:17 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: Container pause ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: ibmcloud-block-storage-driver-jl8fq from kube-system started at 2023-06-12 17:40:28 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: tuned-dmlsr from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container tuned ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: csi-snapshot-controller-7f8879b9ff-lhkmp from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container snapshot-controller ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: csi-snapshot-webhook-7bd9594b6d-9f476 from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container webhook ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: downloads-8b57f44bb-f7r76 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container download-server ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: dns-default-5d2sp from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container dns ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: node-resolver-lf2bx from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: node-ca-mwjbd from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container node-ca ready: true, restart count 0 - Jun 12 20:51:43.555: INFO: ingress-canary-xwc5b from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.555: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: router-default-7d454f944c-s862z from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container router ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: openshift-kube-proxy-rckf9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container 
kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: certified-operators-9jhxm from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container registry-server ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: redhat-marketplace-n9tcn from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container registry-server ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-06-12 18:01:41 +0000 UTC (6 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container alertmanager ready: true, restart count 1 - Jun 12 20:51:43.556: INFO: Container alertmanager-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: node-exporter-5vgf6 from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: openshift-state-metrics-7d7f8b4cf8-6kdhb from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container openshift-state-metrics ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: prometheus-adapter-7c58c77c58-2j47k from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container prometheus-adapter ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-06-12 18:01:12 +0000 UTC (6 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container prometheus ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container prometheus-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container thanos-sidecar ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: prometheus-operator-5d978dbf9c-zvq6g from openshift-monitoring started at 2023-06-12 17:59:19 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container prometheus-operator ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: prometheus-operator-admission-webhook-5d679565bb-sj42p from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: telemeter-client-55c7b57d84-vh47h from openshift-monitoring 
started at 2023-06-12 17:59:37 +0000 UTC (3 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container reload ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container telemeter-client ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: thanos-querier-6497df7b9-pg2z9 from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container oauth-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container thanos-query ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: multus-26bfs from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: multus-additional-cni-plugins-9vls6 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: multus-admission-controller-5894dd7875-xldt9 from openshift-multus started at 2023-06-12 17:58:44 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container multus-admission-controller ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: network-metrics-daemon-g9zzs from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: network-check-target-l622r from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: sonobuoy from sonobuoy started at 2023-06-12 20:38:54 +0000 UTC (1 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container kube-sonobuoy ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-4dn8s from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) - Jun 12 20:51:43.556: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 20:51:43.556: INFO: Container systemd-logs ready: true, restart count 0 - [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] - test/e2e/scheduling/predicates.go:704 - STEP: Trying to launch a pod without a label to get a node which can launch it. 06/12/23 20:51:43.556 - Jun 12 20:51:43.581: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-4369" to be "running" - Jun 12 20:51:43.604: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.212636ms - Jun 12 20:51:45.616: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034157587s - Jun 12 20:51:47.615: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.033394331s - Jun 12 20:51:47.615: INFO: Pod "without-label" satisfied condition "running" - STEP: Explicitly delete pod here to free the resource it takes. 06/12/23 20:51:47.624 - STEP: Trying to apply a random label on the found node. 06/12/23 20:51:47.666 - STEP: verifying the node has the label kubernetes.io/e2e-f0ac351b-7bc2-4bc9-8d4a-bffa9017a1c0 95 06/12/23 20:51:47.703 - STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 06/12/23 20:51:47.716 - Jun 12 20:51:47.735: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-4369" to be "not pending" - Jun 12 20:51:47.745: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.910878ms - Jun 12 20:51:49.756: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021517215s - Jun 12 20:51:51.756: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. Elapsed: 4.021502536s - Jun 12 20:51:51.756: INFO: Pod "pod4" satisfied condition "not pending" - STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.138.75.70 on the node which pod4 resides and expect not scheduled 06/12/23 20:51:51.756 - Jun 12 20:51:51.778: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-4369" to be "not pending" - Jun 12 20:51:51.790: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.781741ms - Jun 12 20:51:53.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021691582s - Jun 12 20:51:55.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023577371s - Jun 12 20:51:57.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024312266s - Jun 12 20:51:59.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022330411s - Jun 12 20:52:01.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022035455s - Jun 12 20:52:03.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025186659s - Jun 12 20:52:05.808: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.029480118s - Jun 12 20:52:07.846: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067374545s - Jun 12 20:52:09.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.025084116s - Jun 12 20:52:11.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.022753817s - Jun 12 20:52:13.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.023555268s - Jun 12 20:52:15.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.022755859s - Jun 12 20:52:17.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.021891812s - Jun 12 20:52:19.806: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.027279476s - Jun 12 20:52:21.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.022427454s - Jun 12 20:52:23.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.021623675s - Jun 12 20:52:25.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.024177413s - Jun 12 20:52:27.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.024115611s - Jun 12 20:52:29.815: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.036182591s - Jun 12 20:52:31.799: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.020464831s - Jun 12 20:52:33.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.022571293s - Jun 12 20:52:35.810: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.031367751s - Jun 12 20:52:37.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.021905778s - Jun 12 20:52:39.812: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.03317152s - Jun 12 20:52:41.805: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.026188727s - Jun 12 20:52:43.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.022571463s - Jun 12 20:52:45.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.022555575s - Jun 12 20:52:47.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.024141936s - Jun 12 20:52:49.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.022543435s - Jun 12 20:52:51.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.022649095s - Jun 12 20:52:53.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.02242785s - Jun 12 20:52:55.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.022143939s - Jun 12 20:52:57.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.023352187s - Jun 12 20:52:59.809: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.030652559s - Jun 12 20:53:01.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.022734904s - Jun 12 20:53:03.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.022584873s - Jun 12 20:53:05.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.023081524s - Jun 12 20:53:07.807: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.028616406s - Jun 12 20:53:09.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.023169706s - Jun 12 20:53:11.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.023234091s - Jun 12 20:53:13.829: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.050630462s - Jun 12 20:53:15.834: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.055851999s - Jun 12 20:53:17.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.021914015s - Jun 12 20:53:19.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.022309986s - Jun 12 20:53:21.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.024846362s - Jun 12 20:53:23.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.023478255s - Jun 12 20:53:25.799: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.020751578s - Jun 12 20:53:27.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.021938466s - Jun 12 20:53:29.816: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m38.037183547s - Jun 12 20:53:31.799: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.020277945s - Jun 12 20:53:33.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.021198221s - Jun 12 20:53:35.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.021258822s - Jun 12 20:53:37.830: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.051288999s - Jun 12 20:53:39.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.021632362s - Jun 12 20:53:41.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.024709508s - Jun 12 20:53:43.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.024786533s - Jun 12 20:53:45.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.024548202s - Jun 12 20:53:47.911: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.132366299s - Jun 12 20:53:49.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.02467715s - Jun 12 20:53:51.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.021780449s - Jun 12 20:53:53.812: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.033640752s - Jun 12 20:53:55.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.025077628s - Jun 12 20:53:57.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.0241899s - Jun 12 20:53:59.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.022228605s - Jun 12 20:54:01.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.022476433s - Jun 12 20:54:03.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.022278146s - Jun 12 20:54:05.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.021805772s - Jun 12 20:54:07.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.025477133s - Jun 12 20:54:09.812: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.033686892s - Jun 12 20:54:11.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.025070333s - Jun 12 20:54:13.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.021597075s - Jun 12 20:54:15.809: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.030585016s - Jun 12 20:54:17.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.022513391s - Jun 12 20:54:19.805: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.027033176s - Jun 12 20:54:21.814: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.035133158s - Jun 12 20:54:23.839: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.060109493s - Jun 12 20:54:25.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.025234711s - Jun 12 20:54:27.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.023325408s - Jun 12 20:54:29.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.022862338s - Jun 12 20:54:31.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.021774013s - Jun 12 20:54:33.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m42.02247688s - Jun 12 20:54:35.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.023164836s - Jun 12 20:54:37.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.025404028s - Jun 12 20:54:39.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.022692922s - Jun 12 20:54:41.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.022067315s - Jun 12 20:54:43.806: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.02719431s - Jun 12 20:54:45.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.022524828s - Jun 12 20:54:47.843: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.064678034s - Jun 12 20:54:49.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.021485582s - Jun 12 20:54:51.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.023594776s - Jun 12 20:54:53.809: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.030519316s - Jun 12 20:54:55.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.04531737s - Jun 12 20:54:57.805: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.026685401s - Jun 12 20:54:59.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.022679881s - Jun 12 20:55:01.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.022402171s - Jun 12 20:55:03.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.022179492s - Jun 12 20:55:05.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.021828121s - Jun 12 20:55:07.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.022518234s - Jun 12 20:55:09.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.022145077s - Jun 12 20:55:11.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.022240502s - Jun 12 20:55:13.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.022000902s - Jun 12 20:55:15.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.022524638s - Jun 12 20:55:17.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.023410452s - Jun 12 20:55:19.814: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.0352041s - Jun 12 20:55:21.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.022310702s - Jun 12 20:55:23.811: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.032509904s - Jun 12 20:55:25.817: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.038700656s - Jun 12 20:55:27.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.022051122s - Jun 12 20:55:29.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.024333702s - Jun 12 20:55:31.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.022299557s - Jun 12 20:55:33.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.022384032s - Jun 12 20:55:35.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.022090246s - Jun 12 20:55:37.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m46.047428762s - Jun 12 20:55:39.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.0218048s - Jun 12 20:55:41.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.022768054s - Jun 12 20:55:43.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.022584509s - Jun 12 20:55:45.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.022368975s - Jun 12 20:55:47.811: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.032529442s - Jun 12 20:55:49.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.021933647s - Jun 12 20:55:51.808: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.029166578s - Jun 12 20:55:53.812: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.033176265s - Jun 12 20:55:55.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.022859634s - Jun 12 20:55:57.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.022333358s - Jun 12 20:55:59.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.024231992s - Jun 12 20:56:01.803: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.024548828s - Jun 12 20:56:03.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.022400666s - Jun 12 20:56:05.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.022488173s - Jun 12 20:56:07.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.022677668s - Jun 12 20:56:09.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.021360106s - Jun 12 20:56:11.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.022480357s - Jun 12 20:56:13.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.02349503s - Jun 12 20:56:15.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.021840914s - Jun 12 20:56:17.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.022348386s - Jun 12 20:56:19.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.023400154s - Jun 12 20:56:21.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.023795191s - Jun 12 20:56:23.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.046491271s - Jun 12 20:56:25.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.023033481s - Jun 12 20:56:27.804: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.025880646s - Jun 12 20:56:29.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.021518342s - Jun 12 20:56:31.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.021683432s - Jun 12 20:56:33.798: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.019892911s - Jun 12 20:56:35.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.022413666s - Jun 12 20:56:37.832: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.053234621s - Jun 12 20:56:39.800: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.021551707s - Jun 12 20:56:41.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m50.022324962s - Jun 12 20:56:43.820: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.041267946s - Jun 12 20:56:45.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.022501965s - Jun 12 20:56:47.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.023436053s - Jun 12 20:56:49.802: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.023278407s - Jun 12 20:56:51.801: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.0221844s - Jun 12 20:56:51.810: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.031477257s - STEP: removing the label kubernetes.io/e2e-f0ac351b-7bc2-4bc9-8d4a-bffa9017a1c0 off the node 10.138.75.70 06/12/23 20:56:51.81 - STEP: verifying the node doesn't have the label kubernetes.io/e2e-f0ac351b-7bc2-4bc9-8d4a-bffa9017a1c0 06/12/23 20:56:51.848 - [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + [It] Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 + STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota 07/27/23 01:42:46.508 + W0727 01:42:46.523993 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 01:42:46.532: INFO: Pod name sample-pod: Found 0 pods out of 1 + Jul 27 01:42:51.544: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 07/27/23 01:42:51.544 + STEP: getting scale subresource 07/27/23 01:42:51.544 + STEP: updating a scale subresource 07/27/23 01:42:51.554 + STEP: verifying the replicaset Spec.Replicas was modified 07/27/23 01:42:51.569 + STEP: Patch a scale subresource 07/27/23 01:42:51.577 + [AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 - Jun 12 20:56:51.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:88 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + Jul 27 01:42:51.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + [DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + [DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 - STEP: Destroying namespace "sched-pred-4369" for this suite. 06/12/23 20:56:51.874 + STEP: Destroying namespace "replicaset-6150" for this suite. 
07/27/23 01:42:51.668 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] InitContainer [NodeConformance] - should invoke init containers on a RestartAlways pod [Conformance] - test/e2e/common/node/init_container.go:255 -[BeforeEach] [sig-node] InitContainer [NodeConformance] +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:56:51.896 -Jun 12 20:56:51.896: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename init-container 06/12/23 20:56:51.899 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:56:51.938 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:56:51.954 -[BeforeEach] [sig-node] InitContainer [NodeConformance] +STEP: Creating a kubernetes client 07/27/23 01:42:51.72 +Jul 27 01:42:51.720: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 01:42:51.72 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:42:51.779 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:42:51.789 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] InitContainer [NodeConformance] - test/e2e/common/node/init_container.go:165 -[It] should invoke init containers on a RestartAlways pod [Conformance] - test/e2e/common/node/init_container.go:255 -STEP: creating the pod 06/12/23 20:56:51.981 -Jun 12 20:56:51.982: INFO: PodSpec: initContainers in spec.initContainers -[AfterEach] [sig-node] InitContainer [NodeConformance] +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 +STEP: creating service in namespace services-5197 07/27/23 01:42:51.798 +STEP: creating service affinity-nodeport in namespace services-5197 07/27/23 01:42:51.798 +STEP: creating replication controller affinity-nodeport in namespace services-5197 07/27/23 01:42:51.894 +I0727 01:42:51.913588 20 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-5197, replica count: 3 +I0727 01:42:54.964287 20 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jul 27 01:42:55.002: INFO: Creating new exec pod +Jul 27 01:42:55.024: INFO: Waiting up to 5m0s for pod "execpod-affinity6pdrk" in namespace "services-5197" to be "running" +Jul 27 01:42:55.044: INFO: Pod "execpod-affinity6pdrk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.92771ms +Jul 27 01:42:57.054: INFO: Pod "execpod-affinity6pdrk": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.029717618s +Jul 27 01:42:57.054: INFO: Pod "execpod-affinity6pdrk" satisfied condition "running" +Jul 27 01:42:58.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-5197 exec execpod-affinity6pdrk -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport 80' +Jul 27 01:42:58.297: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Jul 27 01:42:58.297: INFO: stdout: "" +Jul 27 01:42:58.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-5197 exec execpod-affinity6pdrk -- /bin/sh -x -c nc -v -z -w 2 172.21.47.144 80' +Jul 27 01:42:58.481: INFO: stderr: "+ nc -v -z -w 2 172.21.47.144 80\nConnection to 172.21.47.144 80 port [tcp/http] succeeded!\n" +Jul 27 01:42:58.481: INFO: stdout: "" +Jul 27 01:42:58.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-5197 exec execpod-affinity6pdrk -- /bin/sh -x -c nc -v -z -w 2 10.245.128.17 31636' +Jul 27 01:42:58.735: INFO: stderr: "+ nc -v -z -w 2 10.245.128.17 31636\nConnection to 10.245.128.17 31636 port [tcp/*] succeeded!\n" +Jul 27 01:42:58.735: INFO: stdout: "" +Jul 27 01:42:58.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-5197 exec execpod-affinity6pdrk -- /bin/sh -x -c nc -v -z -w 2 10.245.128.19 31636' +Jul 27 01:42:58.959: INFO: stderr: "+ nc -v -z -w 2 10.245.128.19 31636\nConnection to 10.245.128.19 31636 port [tcp/*] succeeded!\n" +Jul 27 01:42:58.959: INFO: stdout: "" +Jul 27 01:42:58.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-5197 exec execpod-affinity6pdrk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.245.128.17:31636/ ; done' +Jul 27 01:42:59.281: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n" +Jul 27 01:42:59.281: INFO: stdout: "\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz" +Jul 27 01:42:59.281: INFO: Received response from 
host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz +Jul 27 01:42:59.281: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-5197, will wait for the garbage collector to delete the pods 07/27/23 01:42:59.304 +Jul 27 01:42:59.393: INFO: Deleting ReplicationController affinity-nodeport took: 22.728966ms +Jul 27 01:42:59.493: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.160472ms +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 20:56:56.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] +Jul 27 01:43:02.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "init-container-7334" for this suite. 06/12/23 20:56:57.007 +STEP: Destroying namespace "services-5197" for this suite. 
07/27/23 01:43:02.071 ------------------------------ -• [SLOW TEST] [5.168 seconds] -[sig-node] InitContainer [NodeConformance] -test/e2e/common/node/framework.go:23 - should invoke init containers on a RestartAlways pod [Conformance] - test/e2e/common/node/init_container.go:255 +• [SLOW TEST] [10.376 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] InitContainer [NodeConformance] + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:56:51.896 - Jun 12 20:56:51.896: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename init-container 06/12/23 20:56:51.899 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:56:51.938 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:56:51.954 - [BeforeEach] [sig-node] InitContainer [NodeConformance] + STEP: Creating a kubernetes client 07/27/23 01:42:51.72 + Jul 27 01:42:51.720: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 01:42:51.72 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:42:51.779 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:42:51.789 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] InitContainer [NodeConformance] - test/e2e/common/node/init_container.go:165 - [It] should invoke init containers on a RestartAlways pod [Conformance] - test/e2e/common/node/init_container.go:255 - STEP: creating the pod 06/12/23 20:56:51.981 - Jun 12 20:56:51.982: INFO: PodSpec: initContainers in spec.initContainers - [AfterEach] [sig-node] InitContainer [NodeConformance] + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 + STEP: creating service in namespace services-5197 07/27/23 01:42:51.798 + STEP: creating service affinity-nodeport in namespace services-5197 07/27/23 01:42:51.798 + STEP: creating replication controller affinity-nodeport in namespace services-5197 07/27/23 01:42:51.894 + I0727 01:42:51.913588 20 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-5197, replica count: 3 + I0727 01:42:54.964287 20 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jul 27 01:42:55.002: INFO: Creating new exec pod + Jul 27 01:42:55.024: INFO: Waiting up to 5m0s for pod "execpod-affinity6pdrk" in namespace "services-5197" to be "running" + Jul 27 01:42:55.044: INFO: Pod "execpod-affinity6pdrk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.92771ms + Jul 27 01:42:57.054: INFO: Pod "execpod-affinity6pdrk": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.029717618s + Jul 27 01:42:57.054: INFO: Pod "execpod-affinity6pdrk" satisfied condition "running" + Jul 27 01:42:58.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-5197 exec execpod-affinity6pdrk -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport 80' + Jul 27 01:42:58.297: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" + Jul 27 01:42:58.297: INFO: stdout: "" + Jul 27 01:42:58.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-5197 exec execpod-affinity6pdrk -- /bin/sh -x -c nc -v -z -w 2 172.21.47.144 80' + Jul 27 01:42:58.481: INFO: stderr: "+ nc -v -z -w 2 172.21.47.144 80\nConnection to 172.21.47.144 80 port [tcp/http] succeeded!\n" + Jul 27 01:42:58.481: INFO: stdout: "" + Jul 27 01:42:58.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-5197 exec execpod-affinity6pdrk -- /bin/sh -x -c nc -v -z -w 2 10.245.128.17 31636' + Jul 27 01:42:58.735: INFO: stderr: "+ nc -v -z -w 2 10.245.128.17 31636\nConnection to 10.245.128.17 31636 port [tcp/*] succeeded!\n" + Jul 27 01:42:58.735: INFO: stdout: "" + Jul 27 01:42:58.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-5197 exec execpod-affinity6pdrk -- /bin/sh -x -c nc -v -z -w 2 10.245.128.19 31636' + Jul 27 01:42:58.959: INFO: stderr: "+ nc -v -z -w 2 10.245.128.19 31636\nConnection to 10.245.128.19 31636 port [tcp/*] succeeded!\n" + Jul 27 01:42:58.959: INFO: stdout: "" + Jul 27 01:42:58.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-5197 exec execpod-affinity6pdrk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.245.128.17:31636/ ; done' + Jul 27 01:42:59.281: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:31636/\n" + Jul 27 01:42:59.281: INFO: stdout: "\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz\naffinity-nodeport-2wlbz" + Jul 27 01:42:59.281: INFO: 
Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Received response from host: affinity-nodeport-2wlbz + Jul 27 01:42:59.281: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-nodeport in namespace services-5197, will wait for the garbage collector to delete the pods 07/27/23 01:42:59.304 + Jul 27 01:42:59.393: INFO: Deleting ReplicationController affinity-nodeport took: 22.728966ms + Jul 27 01:42:59.493: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.160472ms + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 20:56:56.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + Jul 27 01:43:02.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "init-container-7334" for this suite. 06/12/23 20:56:57.007 + STEP: Destroying namespace "services-5197" for this suite. 
07/27/23 01:43:02.071 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook - should execute prestop http hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:212 -[BeforeEach] [sig-node] Container Lifecycle Hook +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 +[BeforeEach] [sig-node] Probing container set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:56:57.083 -Jun 12 20:56:57.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-lifecycle-hook 06/12/23 20:56:57.089 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:56:57.152 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:56:57.178 -[BeforeEach] [sig-node] Container Lifecycle Hook +STEP: Creating a kubernetes client 07/27/23 01:43:02.097 +Jul 27 01:43:02.097: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-probe 07/27/23 01:43:02.098 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:02.173 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:02.217 +[BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:77 -STEP: create the container to handle the HTTPGet hook request. 06/12/23 20:56:57.257 -Jun 12 20:56:57.296: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-279" to be "running and ready" -Jun 12 20:56:57.388: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 91.668318ms -Jun 12 20:56:57.388: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:56:59.408: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112242771s -Jun 12 20:56:59.408: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:01.401: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1049709s -Jun 12 20:57:01.401: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:03.402: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10604752s -Jun 12 20:57:03.402: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:05.397: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101531339s -Jun 12 20:57:05.398: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:07.402: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.106467607s -Jun 12 20:57:07.403: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:09.402: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 12.105636736s -Jun 12 20:57:09.402: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:11.423: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 14.127170615s -Jun 12 20:57:11.423: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:13.452: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 16.155744395s -Jun 12 20:57:13.452: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:15.397: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 18.100928367s -Jun 12 20:57:15.397: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:17.419: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 20.1232287s -Jun 12 20:57:17.419: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) -Jun 12 20:57:17.419: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" -[It] should execute prestop http hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:212 -STEP: create the pod with lifecycle hook 06/12/23 20:57:17.431 -Jun 12 20:57:17.454: INFO: Waiting up to 5m0s for pod "pod-with-prestop-http-hook" in namespace "container-lifecycle-hook-279" to be "running and ready" -Jun 12 20:57:17.466: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 11.629579ms -Jun 12 20:57:17.466: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:19.477: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022333991s -Jun 12 20:57:19.477: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) -Jun 12 20:57:21.477: INFO: Pod "pod-with-prestop-http-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.022414514s -Jun 12 20:57:21.477: INFO: The phase of Pod pod-with-prestop-http-hook is Running (Ready = true) -Jun 12 20:57:21.477: INFO: Pod "pod-with-prestop-http-hook" satisfied condition "running and ready" -STEP: delete the pod with lifecycle hook 06/12/23 20:57:21.485 -Jun 12 20:57:21.505: INFO: Waiting for pod pod-with-prestop-http-hook to disappear -Jun 12 20:57:21.516: INFO: Pod pod-with-prestop-http-hook still exists -Jun 12 20:57:23.517: INFO: Waiting for pod pod-with-prestop-http-hook to disappear -Jun 12 20:57:23.538: INFO: Pod pod-with-prestop-http-hook still exists -Jun 12 20:57:25.517: INFO: Waiting for pod pod-with-prestop-http-hook to disappear -Jun 12 20:57:25.527: INFO: Pod pod-with-prestop-http-hook no longer exists -STEP: check prestop hook 06/12/23 20:57:25.527 -[AfterEach] [sig-node] Container Lifecycle Hook +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 +STEP: Creating pod liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655 in namespace container-probe-3483 07/27/23 01:43:02.233 +Jul 27 01:43:02.293: INFO: Waiting up to 5m0s for pod "liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655" in namespace "container-probe-3483" to be "not pending" +Jul 27 01:43:02.331: INFO: Pod "liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655": Phase="Pending", Reason="", readiness=false. Elapsed: 38.015861ms +Jul 27 01:43:04.340: INFO: Pod "liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655": Phase="Running", Reason="", readiness=true. Elapsed: 2.046783644s +Jul 27 01:43:04.340: INFO: Pod "liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655" satisfied condition "not pending" +Jul 27 01:43:04.340: INFO: Started pod liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655 in namespace container-probe-3483 +STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 01:43:04.34 +Jul 27 01:43:04.347: INFO: Initial restart count of pod liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655 is 0 +Jul 27 01:43:24.492: INFO: Restart count of pod container-probe-3483/liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655 is now 1 (20.145246229s elapsed) +STEP: deleting the pod 07/27/23 01:43:24.492 +[AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 -Jun 12 20:57:25.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +Jul 27 01:43:24.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +[DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +[DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 -STEP: Destroying namespace "container-lifecycle-hook-279" for this suite. 06/12/23 20:57:25.592 +STEP: Destroying namespace "container-probe-3483" for this suite. 
07/27/23 01:43:24.533 ------------------------------ -• [SLOW TEST] [28.524 seconds] -[sig-node] Container Lifecycle Hook +• [SLOW TEST] [22.457 seconds] +[sig-node] Probing container test/e2e/common/node/framework.go:23 - when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:46 - should execute prestop http hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:212 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Container Lifecycle Hook + [BeforeEach] [sig-node] Probing container set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:56:57.083 - Jun 12 20:56:57.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-lifecycle-hook 06/12/23 20:56:57.089 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:56:57.152 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:56:57.178 - [BeforeEach] [sig-node] Container Lifecycle Hook + STEP: Creating a kubernetes client 07/27/23 01:43:02.097 + Jul 27 01:43:02.097: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-probe 07/27/23 01:43:02.098 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:02.173 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:02.217 + [BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:77 - STEP: create the container to handle the HTTPGet hook request. 06/12/23 20:56:57.257 - Jun 12 20:56:57.296: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-279" to be "running and ready" - Jun 12 20:56:57.388: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 91.668318ms - Jun 12 20:56:57.388: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:56:59.408: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112242771s - Jun 12 20:56:59.408: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:01.401: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1049709s - Jun 12 20:57:01.401: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:03.402: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10604752s - Jun 12 20:57:03.402: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:05.397: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101531339s - Jun 12 20:57:05.398: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:07.402: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.106467607s - Jun 12 20:57:07.403: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:09.402: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 12.105636736s - Jun 12 20:57:09.402: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:11.423: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 14.127170615s - Jun 12 20:57:11.423: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:13.452: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 16.155744395s - Jun 12 20:57:13.452: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:15.397: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 18.100928367s - Jun 12 20:57:15.397: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:17.419: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 20.1232287s - Jun 12 20:57:17.419: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) - Jun 12 20:57:17.419: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" - [It] should execute prestop http hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:212 - STEP: create the pod with lifecycle hook 06/12/23 20:57:17.431 - Jun 12 20:57:17.454: INFO: Waiting up to 5m0s for pod "pod-with-prestop-http-hook" in namespace "container-lifecycle-hook-279" to be "running and ready" - Jun 12 20:57:17.466: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 11.629579ms - Jun 12 20:57:17.466: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:19.477: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022333991s - Jun 12 20:57:19.477: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) - Jun 12 20:57:21.477: INFO: Pod "pod-with-prestop-http-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.022414514s - Jun 12 20:57:21.477: INFO: The phase of Pod pod-with-prestop-http-hook is Running (Ready = true) - Jun 12 20:57:21.477: INFO: Pod "pod-with-prestop-http-hook" satisfied condition "running and ready" - STEP: delete the pod with lifecycle hook 06/12/23 20:57:21.485 - Jun 12 20:57:21.505: INFO: Waiting for pod pod-with-prestop-http-hook to disappear - Jun 12 20:57:21.516: INFO: Pod pod-with-prestop-http-hook still exists - Jun 12 20:57:23.517: INFO: Waiting for pod pod-with-prestop-http-hook to disappear - Jun 12 20:57:23.538: INFO: Pod pod-with-prestop-http-hook still exists - Jun 12 20:57:25.517: INFO: Waiting for pod pod-with-prestop-http-hook to disappear - Jun 12 20:57:25.527: INFO: Pod pod-with-prestop-http-hook no longer exists - STEP: check prestop hook 06/12/23 20:57:25.527 - [AfterEach] [sig-node] Container Lifecycle Hook + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 + STEP: Creating pod liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655 in namespace container-probe-3483 07/27/23 01:43:02.233 + Jul 27 01:43:02.293: INFO: Waiting up to 5m0s for pod "liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655" in namespace "container-probe-3483" to be "not pending" + Jul 27 01:43:02.331: INFO: Pod "liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655": Phase="Pending", Reason="", readiness=false. Elapsed: 38.015861ms + Jul 27 01:43:04.340: INFO: Pod "liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655": Phase="Running", Reason="", readiness=true. Elapsed: 2.046783644s + Jul 27 01:43:04.340: INFO: Pod "liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655" satisfied condition "not pending" + Jul 27 01:43:04.340: INFO: Started pod liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655 in namespace container-probe-3483 + STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 01:43:04.34 + Jul 27 01:43:04.347: INFO: Initial restart count of pod liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655 is 0 + Jul 27 01:43:24.492: INFO: Restart count of pod container-probe-3483/liveness-d3460fe5-b088-4cc4-a3c0-58ec82e41655 is now 1 (20.145246229s elapsed) + STEP: deleting the pod 07/27/23 01:43:24.492 + [AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 - Jun 12 20:57:25.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + Jul 27 01:43:24.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + [DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + [DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 - STEP: Destroying namespace "container-lifecycle-hook-279" for this suite. 06/12/23 20:57:25.592 + STEP: Destroying namespace "container-probe-3483" for this suite. 
07/27/23 01:43:24.533 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSS ------------------------------ -[sig-storage] EmptyDir volumes - should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:117 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 +[BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:57:25.615 -Jun 12 20:57:25.615: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 20:57:25.617 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:25.658 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:25.672 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 01:43:24.555 +Jul 27 01:43:24.555: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 01:43:24.556 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:24.594 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:24.604 +[BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 -[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:117 -STEP: Creating a pod to test emptydir 0777 on tmpfs 06/12/23 20:57:25.683 -Jun 12 20:57:25.712: INFO: Waiting up to 5m0s for pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132" in namespace "emptydir-3364" to be "Succeeded or Failed" -Jun 12 20:57:25.730: INFO: Pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132": Phase="Pending", Reason="", readiness=false. Elapsed: 17.496649ms -Jun 12 20:57:27.742: INFO: Pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030046895s -Jun 12 20:57:29.760: INFO: Pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047984089s -Jun 12 20:57:31.740: INFO: Pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028132308s -STEP: Saw pod success 06/12/23 20:57:31.74 -Jun 12 20:57:31.741: INFO: Pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132" satisfied condition "Succeeded or Failed" -Jun 12 20:57:31.750: INFO: Trying to get logs from node 10.138.75.70 pod pod-6aad0db2-45a8-4158-a97d-9f42f231a132 container test-container: -STEP: delete the pod 06/12/23 20:57:31.804 -Jun 12 20:57:31.839: INFO: Waiting for pod pod-6aad0db2-45a8-4158-a97d-9f42f231a132 to disappear -Jun 12 20:57:31.847: INFO: Pod pod-6aad0db2-45a8-4158-a97d-9f42f231a132 no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 +STEP: Creating projection with secret that has name projected-secret-test-0c0ae9df-74dc-4d84-bccf-250bbb740eb0 07/27/23 01:43:24.613 +STEP: Creating a pod to test consume secrets 07/27/23 01:43:24.626 +Jul 27 01:43:24.649: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2" in namespace "projected-9195" to be "Succeeded or Failed" +Jul 27 01:43:24.657: INFO: Pod "pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195595ms +Jul 27 01:43:26.679: INFO: Pod "pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030142375s +Jul 27 01:43:28.667: INFO: Pod "pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01836208s +STEP: Saw pod success 07/27/23 01:43:28.667 +Jul 27 01:43:28.667: INFO: Pod "pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2" satisfied condition "Succeeded or Failed" +Jul 27 01:43:28.676: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2 container projected-secret-volume-test: +STEP: delete the pod 07/27/23 01:43:28.72 +Jul 27 01:43:28.742: INFO: Waiting for pod pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2 to disappear +Jul 27 01:43:28.749: INFO: Pod pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2 no longer exists +[AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 -Jun 12 20:57:31.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 01:43:28.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-3364" for this suite. 06/12/23 20:57:31.863 +STEP: Destroying namespace "projected-9195" for this suite. 
07/27/23 01:43:28.779 ------------------------------ -• [SLOW TEST] [6.260 seconds] -[sig-storage] EmptyDir volumes +• [4.247 seconds] +[sig-storage] Projected secret test/e2e/common/storage/framework.go:23 - should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:117 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:57:25.615 - Jun 12 20:57:25.615: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 20:57:25.617 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:25.658 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:25.672 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 01:43:24.555 + Jul 27 01:43:24.555: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 01:43:24.556 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:24.594 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:24.604 + [BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 - [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:117 - STEP: Creating a pod to test emptydir 0777 on tmpfs 06/12/23 20:57:25.683 - Jun 12 20:57:25.712: INFO: Waiting up to 5m0s for pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132" in namespace "emptydir-3364" to be "Succeeded or Failed" - Jun 12 20:57:25.730: INFO: Pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132": Phase="Pending", Reason="", readiness=false. Elapsed: 17.496649ms - Jun 12 20:57:27.742: INFO: Pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030046895s - Jun 12 20:57:29.760: INFO: Pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047984089s - Jun 12 20:57:31.740: INFO: Pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028132308s - STEP: Saw pod success 06/12/23 20:57:31.74 - Jun 12 20:57:31.741: INFO: Pod "pod-6aad0db2-45a8-4158-a97d-9f42f231a132" satisfied condition "Succeeded or Failed" - Jun 12 20:57:31.750: INFO: Trying to get logs from node 10.138.75.70 pod pod-6aad0db2-45a8-4158-a97d-9f42f231a132 container test-container: - STEP: delete the pod 06/12/23 20:57:31.804 - Jun 12 20:57:31.839: INFO: Waiting for pod pod-6aad0db2-45a8-4158-a97d-9f42f231a132 to disappear - Jun 12 20:57:31.847: INFO: Pod pod-6aad0db2-45a8-4158-a97d-9f42f231a132 no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 + STEP: Creating projection with secret that has name projected-secret-test-0c0ae9df-74dc-4d84-bccf-250bbb740eb0 07/27/23 01:43:24.613 + STEP: Creating a pod to test consume secrets 07/27/23 01:43:24.626 + Jul 27 01:43:24.649: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2" in namespace "projected-9195" to be "Succeeded or Failed" + Jul 27 01:43:24.657: INFO: Pod "pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195595ms + Jul 27 01:43:26.679: INFO: Pod "pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030142375s + Jul 27 01:43:28.667: INFO: Pod "pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01836208s + STEP: Saw pod success 07/27/23 01:43:28.667 + Jul 27 01:43:28.667: INFO: Pod "pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2" satisfied condition "Succeeded or Failed" + Jul 27 01:43:28.676: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2 container projected-secret-volume-test: + STEP: delete the pod 07/27/23 01:43:28.72 + Jul 27 01:43:28.742: INFO: Waiting for pod pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2 to disappear + Jul 27 01:43:28.749: INFO: Pod pod-projected-secrets-c7b48945-a0d0-4899-a0e0-d9bf270fe0f2 no longer exists + [AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 - Jun 12 20:57:31.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 01:43:28.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-3364" for this suite. 06/12/23 20:57:31.863 + STEP: Destroying namespace "projected-9195" for this suite. 
07/27/23 01:43:28.779 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSS ------------------------------ -[sig-node] RuntimeClass - should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:55 -[BeforeEach] [sig-node] RuntimeClass +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:57:31.88 -Jun 12 20:57:31.880: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename runtimeclass 06/12/23 20:57:31.881 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:31.921 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:31.928 -[BeforeEach] [sig-node] RuntimeClass +STEP: Creating a kubernetes client 07/27/23 01:43:28.802 +Jul 27 01:43:28.802: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 01:43:28.803 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:28.872 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:28.882 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:55 -[AfterEach] [sig-node] RuntimeClass +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-9887 07/27/23 01:43:28.891 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 07/27/23 01:43:28.977 +STEP: creating service externalsvc in namespace services-9887 07/27/23 01:43:28.977 +STEP: creating replication controller externalsvc in namespace services-9887 07/27/23 01:43:29.053 +I0727 01:43:29.072955 20 runners.go:193] Created replication controller with name: externalsvc, namespace: services-9887, replica count: 2 +I0727 01:43:32.123937 20 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName 07/27/23 01:43:32.138 +Jul 27 01:43:32.209: INFO: Creating new exec pod +Jul 27 01:43:32.233: INFO: Waiting up to 5m0s for pod "execpodn9sq4" in namespace "services-9887" to be "running" +Jul 27 01:43:32.241: INFO: Pod "execpodn9sq4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214143ms +Jul 27 01:43:34.251: INFO: Pod "execpodn9sq4": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.018163868s +Jul 27 01:43:34.251: INFO: Pod "execpodn9sq4" satisfied condition "running" +Jul 27 01:43:34.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-9887 exec execpodn9sq4 -- /bin/sh -x -c nslookup nodeport-service.services-9887.svc.cluster.local' +Jul 27 01:43:34.568: INFO: stderr: "+ nslookup nodeport-service.services-9887.svc.cluster.local\n" +Jul 27 01:43:34.568: INFO: stdout: "Server:\t\t172.21.0.10\nAddress:\t172.21.0.10#53\n\nnodeport-service.services-9887.svc.cluster.local\tcanonical name = externalsvc.services-9887.svc.cluster.local.\nName:\texternalsvc.services-9887.svc.cluster.local\nAddress: 172.21.176.93\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-9887, will wait for the garbage collector to delete the pods 07/27/23 01:43:34.568 +Jul 27 01:43:34.652: INFO: Deleting ReplicationController externalsvc took: 21.288054ms +Jul 27 01:43:34.753: INFO: Terminating ReplicationController externalsvc pods took: 100.379721ms +Jul 27 01:43:37.626: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 20:57:31.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] RuntimeClass +Jul 27 01:43:37.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] RuntimeClass +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] RuntimeClass +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "runtimeclass-7635" for this suite. 06/12/23 20:57:31.979 +STEP: Destroying namespace "services-9887" for this suite. 
07/27/23 01:43:37.69 ------------------------------ -• [0.117 seconds] -[sig-node] RuntimeClass -test/e2e/common/node/framework.go:23 - should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:55 +• [SLOW TEST] [8.919 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] RuntimeClass + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:57:31.88 - Jun 12 20:57:31.880: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename runtimeclass 06/12/23 20:57:31.881 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:31.921 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:31.928 - [BeforeEach] [sig-node] RuntimeClass + STEP: Creating a kubernetes client 07/27/23 01:43:28.802 + Jul 27 01:43:28.802: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 01:43:28.803 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:28.872 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:28.882 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:55 - [AfterEach] [sig-node] RuntimeClass + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 + STEP: creating a service nodeport-service with the type=NodePort in namespace services-9887 07/27/23 01:43:28.891 + STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 07/27/23 01:43:28.977 + STEP: creating service externalsvc in namespace services-9887 07/27/23 01:43:28.977 + STEP: creating replication controller externalsvc in namespace services-9887 07/27/23 01:43:29.053 + I0727 01:43:29.072955 20 runners.go:193] Created replication controller with name: externalsvc, namespace: services-9887, replica count: 2 + I0727 01:43:32.123937 20 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + STEP: changing the NodePort service to type=ExternalName 07/27/23 01:43:32.138 + Jul 27 01:43:32.209: INFO: Creating new exec pod + Jul 27 01:43:32.233: INFO: Waiting up to 5m0s for pod "execpodn9sq4" in namespace "services-9887" to be "running" + Jul 27 01:43:32.241: INFO: Pod "execpodn9sq4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214143ms + Jul 27 01:43:34.251: INFO: Pod "execpodn9sq4": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.018163868s + Jul 27 01:43:34.251: INFO: Pod "execpodn9sq4" satisfied condition "running" + Jul 27 01:43:34.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-9887 exec execpodn9sq4 -- /bin/sh -x -c nslookup nodeport-service.services-9887.svc.cluster.local' + Jul 27 01:43:34.568: INFO: stderr: "+ nslookup nodeport-service.services-9887.svc.cluster.local\n" + Jul 27 01:43:34.568: INFO: stdout: "Server:\t\t172.21.0.10\nAddress:\t172.21.0.10#53\n\nnodeport-service.services-9887.svc.cluster.local\tcanonical name = externalsvc.services-9887.svc.cluster.local.\nName:\texternalsvc.services-9887.svc.cluster.local\nAddress: 172.21.176.93\n\n" + STEP: deleting ReplicationController externalsvc in namespace services-9887, will wait for the garbage collector to delete the pods 07/27/23 01:43:34.568 + Jul 27 01:43:34.652: INFO: Deleting ReplicationController externalsvc took: 21.288054ms + Jul 27 01:43:34.753: INFO: Terminating ReplicationController externalsvc pods took: 100.379721ms + Jul 27 01:43:37.626: INFO: Cleaning up the NodePort to ExternalName test service + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 20:57:31.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] RuntimeClass + Jul 27 01:43:37.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] RuntimeClass + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] RuntimeClass + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "runtimeclass-7635" for this suite. 06/12/23 20:57:31.979 + STEP: Destroying namespace "services-9887" for this suite. 
07/27/23 01:43:37.69 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSS +S ------------------------------ -[sig-storage] EmptyDir volumes - should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:207 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 +[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:57:31.999 -Jun 12 20:57:32.000: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 20:57:32.003 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:32.052 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:32.087 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 01:43:37.721 +Jul 27 01:43:37.721: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename daemonsets 07/27/23 01:43:37.722 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:37.762 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:37.775 +[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:207 -STEP: Creating a pod to test emptydir 0666 on node default medium 06/12/23 20:57:32.105 -Jun 12 20:57:32.141: INFO: Waiting up to 5m0s for pod "pod-996e9aa1-2126-4923-899b-724c5740e29d" in namespace "emptydir-930" to be "Succeeded or Failed" -Jun 12 20:57:32.223: INFO: Pod "pod-996e9aa1-2126-4923-899b-724c5740e29d": Phase="Pending", Reason="", readiness=false. Elapsed: 81.865648ms -Jun 12 20:57:34.234: INFO: Pod "pod-996e9aa1-2126-4923-899b-724c5740e29d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093109024s -Jun 12 20:57:36.239: INFO: Pod "pod-996e9aa1-2126-4923-899b-724c5740e29d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098373275s -Jun 12 20:57:38.234: INFO: Pod "pod-996e9aa1-2126-4923-899b-724c5740e29d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092515047s -STEP: Saw pod success 06/12/23 20:57:38.234 -Jun 12 20:57:38.234: INFO: Pod "pod-996e9aa1-2126-4923-899b-724c5740e29d" satisfied condition "Succeeded or Failed" -Jun 12 20:57:38.242: INFO: Trying to get logs from node 10.138.75.116 pod pod-996e9aa1-2126-4923-899b-724c5740e29d container test-container: -STEP: delete the pod 06/12/23 20:57:38.265 -Jun 12 20:57:38.304: INFO: Waiting for pod pod-996e9aa1-2126-4923-899b-724c5740e29d to disappear -Jun 12 20:57:38.313: INFO: Pod pod-996e9aa1-2126-4923-899b-724c5740e29d no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 +STEP: Creating simple DaemonSet "daemon-set" 07/27/23 01:43:37.878 +STEP: Check that daemon pods launch on every node of the cluster. 
07/27/23 01:43:37.893 +Jul 27 01:43:37.913: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:43:37.913: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 01:43:38.941: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 01:43:38.941: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 01:43:39.943: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 01:43:39.943: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 01:43:40.936: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jul 27 01:43:40.936: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: listing all DeamonSets 07/27/23 01:43:40.944 +STEP: DeleteCollection of the DaemonSets 07/27/23 01:43:40.955 +STEP: Verify that ReplicaSets have been deleted 07/27/23 01:43:40.972 +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +Jul 27 01:43:41.002: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"71211"},"items":null} + +Jul 27 01:43:41.011: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"71212"},"items":[{"metadata":{"name":"daemon-set-7hkng","generateName":"daemon-set-","namespace":"daemonsets-9983","uid":"e4106464-99d4-4cb4-bcdb-c1a694e2b139","resourceVersion":"71211","creationTimestamp":"2023-07-27T01:43:37Z","deletionTimestamp":"2023-07-27T01:44:10Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"02ff0ef13e28abdd3e79a82b60d50da893869bee4f8b66f553945cb3cc0f46cf","cni.projectcalico.org/podIP":"172.17.225.44/32","cni.projectcalico.org/podIPs":"172.17.225.44/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.17.225.44\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"e83afb31-3497-4b02-85e8-d4c8c4ae146b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e83afb31-3497-4b02-85e8-d4c8c4ae146b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-s8fmx","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-s8fmx","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.245.128.19","securityContext":{"seLinuxOptions":{"level":"s0:c37,c14"}},"imagePullSecrets":[{"name
":"default-dockercfg-snhn4"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.245.128.19"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"}],"hostIP":"10.245.128.19","podIP":"172.17.225.44","podIPs":[{"ip":"172.17.225.44"}],"startTime":"2023-07-27T01:43:37Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-07-27T01:43:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://638d3f758cbc21d02651d70636d8f608ab3c07f2705f3e16ea1e74b19f17a200","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-jfxwz","generateName":"daemon-set-","namespace":"daemonsets-9983","uid":"c03e8c3a-27ab-4540-89eb-ea3e8cb9fcda","resourceVersion":"71209","creationTimestamp":"2023-07-27T01:43:37Z","deletionTimestamp":"2023-07-27T01:44:10Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"84afae3295eac014ae153bc97442de8b1668f6b6ca947b904e2601feab94b318","cni.projectcalico.org/podIP":"172.17.218.61/32","cni.projectcalico.org/podIPs":"172.17.218.61/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.17.218.61\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"e83afb31-3497-4b02-85e8-d4c8c4ae146b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e83afb31-3497-4b02-85e8-d4c8c4ae146b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.218.61\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-g48mq","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-g48mq","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.245.128.17","securityContext":{"seLinuxOptions":{"level":"s0:c37,c14"}},"imagePullSecrets":[{"name
":"default-dockercfg-snhn4"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.245.128.17"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"}],"hostIP":"10.245.128.17","podIP":"172.17.218.61","podIPs":[{"ip":"172.17.218.61"}],"startTime":"2023-07-27T01:43:37Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-07-27T01:43:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://4d41ca60e396b4fd00d2e22c7965e3e63c062365b9b4b9839a193990adaff50a","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-ldtq2","generateName":"daemon-set-","namespace":"daemonsets-9983","uid":"2fe614f4-7f8b-432d-99a1-c0ca1d9ed618","resourceVersion":"71208","creationTimestamp":"2023-07-27T01:43:37Z","deletionTimestamp":"2023-07-27T01:44:10Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"302679f7e166907d42058ca58155d0bc88fe163123b57b0c9c5758f8f0b40fea","cni.projectcalico.org/podIP":"172.17.230.189/32","cni.projectcalico.org/podIPs":"172.17.230.189/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.17.230.189\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"e83afb31-3497-4b02-85e8-d4c8c4ae146b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e83afb31-3497-4b02-85e8-d4c8c4ae146b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.230.189\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-vn599","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-vn599","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.245.128.18","securityContext":{"seLinuxOptions":{"level":"s0:c37,c14"}},"imagePullSecrets":[{"nam
e":"default-dockercfg-snhn4"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.245.128.18"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"}],"hostIP":"10.245.128.18","podIP":"172.17.230.189","podIPs":[{"ip":"172.17.230.189"}],"startTime":"2023-07-27T01:43:37Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-07-27T01:43:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://53556bcf1f70fccff44d58a33a43d1f2a621a9f7912aa77d1b46d9ffae2ad507","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 20:57:38.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 01:43:41.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-930" for this suite. 06/12/23 20:57:38.327 +STEP: Destroying namespace "daemonsets-9983" for this suite. 
07/27/23 01:43:41.061 ------------------------------ -• [SLOW TEST] [6.342 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:207 +• [3.361 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:57:31.999 - Jun 12 20:57:32.000: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 20:57:32.003 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:32.052 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:32.087 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 01:43:37.721 + Jul 27 01:43:37.721: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename daemonsets 07/27/23 01:43:37.722 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:37.762 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:37.775 + [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:207 - STEP: Creating a pod to test emptydir 0666 on node default medium 06/12/23 20:57:32.105 - Jun 12 20:57:32.141: INFO: Waiting up to 5m0s for pod "pod-996e9aa1-2126-4923-899b-724c5740e29d" in namespace "emptydir-930" to be "Succeeded or Failed" - Jun 12 20:57:32.223: INFO: Pod "pod-996e9aa1-2126-4923-899b-724c5740e29d": Phase="Pending", Reason="", readiness=false. Elapsed: 81.865648ms - Jun 12 20:57:34.234: INFO: Pod "pod-996e9aa1-2126-4923-899b-724c5740e29d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093109024s - Jun 12 20:57:36.239: INFO: Pod "pod-996e9aa1-2126-4923-899b-724c5740e29d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098373275s - Jun 12 20:57:38.234: INFO: Pod "pod-996e9aa1-2126-4923-899b-724c5740e29d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092515047s - STEP: Saw pod success 06/12/23 20:57:38.234 - Jun 12 20:57:38.234: INFO: Pod "pod-996e9aa1-2126-4923-899b-724c5740e29d" satisfied condition "Succeeded or Failed" - Jun 12 20:57:38.242: INFO: Trying to get logs from node 10.138.75.116 pod pod-996e9aa1-2126-4923-899b-724c5740e29d container test-container: - STEP: delete the pod 06/12/23 20:57:38.265 - Jun 12 20:57:38.304: INFO: Waiting for pod pod-996e9aa1-2126-4923-899b-724c5740e29d to disappear - Jun 12 20:57:38.313: INFO: Pod pod-996e9aa1-2126-4923-899b-724c5740e29d no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 + STEP: Creating simple DaemonSet "daemon-set" 07/27/23 01:43:37.878 + STEP: Check that daemon pods launch on every node of the cluster. 
07/27/23 01:43:37.893 + Jul 27 01:43:37.913: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:43:37.913: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 01:43:38.941: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 01:43:38.941: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 01:43:39.943: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 01:43:39.943: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 01:43:40.936: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jul 27 01:43:40.936: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: listing all DeamonSets 07/27/23 01:43:40.944 + STEP: DeleteCollection of the DaemonSets 07/27/23 01:43:40.955 + STEP: Verify that ReplicaSets have been deleted 07/27/23 01:43:40.972 + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + Jul 27 01:43:41.002: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"71211"},"items":null} + + Jul 27 01:43:41.011: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"71212"},"items":[{"metadata":{"name":"daemon-set-7hkng","generateName":"daemon-set-","namespace":"daemonsets-9983","uid":"e4106464-99d4-4cb4-bcdb-c1a694e2b139","resourceVersion":"71211","creationTimestamp":"2023-07-27T01:43:37Z","deletionTimestamp":"2023-07-27T01:44:10Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"02ff0ef13e28abdd3e79a82b60d50da893869bee4f8b66f553945cb3cc0f46cf","cni.projectcalico.org/podIP":"172.17.225.44/32","cni.projectcalico.org/podIPs":"172.17.225.44/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.17.225.44\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"e83afb31-3497-4b02-85e8-d4c8c4ae146b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e83afb31-3497-4b02-85e8-d4c8c4ae146b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-s8fmx","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-s8fmx","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.245.128.19","securityContext":{"seLinuxOptions":{"level":"s0:c37,c14"}},"imagePullSecrets":[{"name
":"default-dockercfg-snhn4"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.245.128.19"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"}],"hostIP":"10.245.128.19","podIP":"172.17.225.44","podIPs":[{"ip":"172.17.225.44"}],"startTime":"2023-07-27T01:43:37Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-07-27T01:43:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://638d3f758cbc21d02651d70636d8f608ab3c07f2705f3e16ea1e74b19f17a200","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-jfxwz","generateName":"daemon-set-","namespace":"daemonsets-9983","uid":"c03e8c3a-27ab-4540-89eb-ea3e8cb9fcda","resourceVersion":"71209","creationTimestamp":"2023-07-27T01:43:37Z","deletionTimestamp":"2023-07-27T01:44:10Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"84afae3295eac014ae153bc97442de8b1668f6b6ca947b904e2601feab94b318","cni.projectcalico.org/podIP":"172.17.218.61/32","cni.projectcalico.org/podIPs":"172.17.218.61/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.17.218.61\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"e83afb31-3497-4b02-85e8-d4c8c4ae146b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e83afb31-3497-4b02-85e8-d4c8c4ae146b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.218.61\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-g48mq","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-g48mq","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.245.128.17","securityContext":{"seLinuxOptions":{"level":"s0:c37,c14"}},"imagePullSecrets":[{"name
":"default-dockercfg-snhn4"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.245.128.17"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"}],"hostIP":"10.245.128.17","podIP":"172.17.218.61","podIPs":[{"ip":"172.17.218.61"}],"startTime":"2023-07-27T01:43:37Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-07-27T01:43:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://4d41ca60e396b4fd00d2e22c7965e3e63c062365b9b4b9839a193990adaff50a","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-ldtq2","generateName":"daemon-set-","namespace":"daemonsets-9983","uid":"2fe614f4-7f8b-432d-99a1-c0ca1d9ed618","resourceVersion":"71208","creationTimestamp":"2023-07-27T01:43:37Z","deletionTimestamp":"2023-07-27T01:44:10Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"302679f7e166907d42058ca58155d0bc88fe163123b57b0c9c5758f8f0b40fea","cni.projectcalico.org/podIP":"172.17.230.189/32","cni.projectcalico.org/podIPs":"172.17.230.189/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.17.230.189\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"e83afb31-3497-4b02-85e8-d4c8c4ae146b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e83afb31-3497-4b02-85e8-d4c8c4ae146b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-27T01:43:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.230.189\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-vn599","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-vn599","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.245.128.18","securityContext":{"seLinuxOptions":{"level":"s0:c37,c14"}},"imagePullSecrets":[{"nam
e":"default-dockercfg-snhn4"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.245.128.18"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:39Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-07-27T01:43:37Z"}],"hostIP":"10.245.128.18","podIP":"172.17.230.189","podIPs":[{"ip":"172.17.230.189"}],"startTime":"2023-07-27T01:43:37Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-07-27T01:43:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://53556bcf1f70fccff44d58a33a43d1f2a621a9f7912aa77d1b46d9ffae2ad507","started":true}],"qosClass":"BestEffort"}}]} + + [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 20:57:38.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 01:43:41.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-930" for this suite. 06/12/23 20:57:38.327 + STEP: Destroying namespace "daemonsets-9983" for this suite. 
07/27/23 01:43:41.061 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSS +SSSSSSSSSSSSS ------------------------------ -[sig-network] Proxy version v1 - A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] - test/e2e/network/proxy.go:286 -[BeforeEach] version v1 +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:57:38.352 -Jun 12 20:57:38.353: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename proxy 06/12/23 20:57:38.356 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:38.398 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:38.408 -[BeforeEach] version v1 +STEP: Creating a kubernetes client 07/27/23 01:43:41.083 +Jul 27 01:43:41.083: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename namespaces 07/27/23 01:43:41.084 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:41.126 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:41.136 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] - test/e2e/network/proxy.go:286 -Jun 12 20:57:38.422: INFO: Creating pod... -Jun 12 20:57:38.451: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-1524" to be "running" -Jun 12 20:57:38.463: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 11.98848ms -Jun 12 20:57:40.473: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021915276s -Jun 12 20:57:42.471: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 4.020692441s -Jun 12 20:57:42.471: INFO: Pod "agnhost" satisfied condition "running" -Jun 12 20:57:42.471: INFO: Creating service... 
-Jun 12 20:57:42.507: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/DELETE -Jun 12 20:57:42.553: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE -Jun 12 20:57:42.553: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/GET -Jun 12 20:57:42.568: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET -Jun 12 20:57:42.568: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/HEAD -Jun 12 20:57:42.580: INFO: http.Client request:HEAD | StatusCode:200 -Jun 12 20:57:42.580: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/OPTIONS -Jun 12 20:57:42.593: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS -Jun 12 20:57:42.593: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/PATCH -Jun 12 20:57:42.610: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH -Jun 12 20:57:42.610: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/POST -Jun 12 20:57:42.626: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST -Jun 12 20:57:42.626: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/PUT -Jun 12 20:57:42.655: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT -Jun 12 20:57:42.655: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/DELETE -Jun 12 20:57:42.679: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE -Jun 12 20:57:42.679: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/GET -Jun 12 20:57:42.697: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET -Jun 12 20:57:42.697: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/HEAD -Jun 12 20:57:42.725: INFO: http.Client request:HEAD | StatusCode:200 -Jun 12 20:57:42.725: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/OPTIONS -Jun 12 20:57:42.743: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS -Jun 12 20:57:42.743: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/PATCH -Jun 12 20:57:42.765: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH -Jun 12 20:57:42.765: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/POST -Jun 12 20:57:42.781: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST -Jun 12 20:57:42.781: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/PUT -Jun 12 20:57:42.800: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT -[AfterEach] version v1 +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + 
test/e2e/apimachinery/namespace.go:243 +STEP: Creating a test namespace 07/27/23 01:43:41.146 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:41.193 +STEP: Creating a pod in the namespace 07/27/23 01:43:41.202 +STEP: Waiting for the pod to have running status 07/27/23 01:43:42.233 +Jul 27 01:43:42.233: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-8810" to be "running" +Jul 27 01:43:42.248: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 15.372336ms +Jul 27 01:43:44.257: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.024358302s +Jul 27 01:43:44.257: INFO: Pod "test-pod" satisfied condition "running" +STEP: Deleting the namespace 07/27/23 01:43:44.258 +STEP: Waiting for the namespace to be removed. 07/27/23 01:43:44.283 +STEP: Recreating the namespace 07/27/23 01:43:57.297 +STEP: Verifying there are no pods in the namespace 07/27/23 01:43:57.335 +[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 20:57:42.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] version v1 +Jul 27 01:43:57.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] version v1 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] version v1 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "proxy-1524" for this suite. 06/12/23 20:57:42.816 +STEP: Destroying namespace "namespaces-8840" for this suite. 07/27/23 01:43:57.359 +STEP: Destroying namespace "nsdeletetest-8810" for this suite. 07/27/23 01:43:57.386 +Jul 27 01:43:57.407: INFO: Namespace nsdeletetest-8810 was already deleted +STEP: Destroying namespace "nsdeletetest-4299" for this suite. 
07/27/23 01:43:57.407 ------------------------------ -• [4.479 seconds] -[sig-network] Proxy -test/e2e/network/common/framework.go:23 - version v1 - test/e2e/network/proxy.go:74 - A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] - test/e2e/network/proxy.go:286 +• [SLOW TEST] [16.351 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 Begin Captured GinkgoWriter Output >> - [BeforeEach] version v1 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:57:38.352 - Jun 12 20:57:38.353: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename proxy 06/12/23 20:57:38.356 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:38.398 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:38.408 - [BeforeEach] version v1 + STEP: Creating a kubernetes client 07/27/23 01:43:41.083 + Jul 27 01:43:41.083: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename namespaces 07/27/23 01:43:41.084 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:41.126 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:41.136 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] - test/e2e/network/proxy.go:286 - Jun 12 20:57:38.422: INFO: Creating pod... - Jun 12 20:57:38.451: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-1524" to be "running" - Jun 12 20:57:38.463: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 11.98848ms - Jun 12 20:57:40.473: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021915276s - Jun 12 20:57:42.471: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 4.020692441s - Jun 12 20:57:42.471: INFO: Pod "agnhost" satisfied condition "running" - Jun 12 20:57:42.471: INFO: Creating service... 
- Jun 12 20:57:42.507: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/DELETE - Jun 12 20:57:42.553: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE - Jun 12 20:57:42.553: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/GET - Jun 12 20:57:42.568: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET - Jun 12 20:57:42.568: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/HEAD - Jun 12 20:57:42.580: INFO: http.Client request:HEAD | StatusCode:200 - Jun 12 20:57:42.580: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/OPTIONS - Jun 12 20:57:42.593: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS - Jun 12 20:57:42.593: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/PATCH - Jun 12 20:57:42.610: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH - Jun 12 20:57:42.610: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/POST - Jun 12 20:57:42.626: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST - Jun 12 20:57:42.626: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/pods/agnhost/proxy/some/path/with/PUT - Jun 12 20:57:42.655: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT - Jun 12 20:57:42.655: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/DELETE - Jun 12 20:57:42.679: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE - Jun 12 20:57:42.679: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/GET - Jun 12 20:57:42.697: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET - Jun 12 20:57:42.697: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/HEAD - Jun 12 20:57:42.725: INFO: http.Client request:HEAD | StatusCode:200 - Jun 12 20:57:42.725: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/OPTIONS - Jun 12 20:57:42.743: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS - Jun 12 20:57:42.743: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/PATCH - Jun 12 20:57:42.765: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH - Jun 12 20:57:42.765: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/POST - Jun 12 20:57:42.781: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST - Jun 12 20:57:42.781: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-1524/services/test-service/proxy/some/path/with/PUT - Jun 12 20:57:42.800: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT - [AfterEach] version v1 + [It] should ensure that all pods are removed when a namespace is 
deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 + STEP: Creating a test namespace 07/27/23 01:43:41.146 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:41.193 + STEP: Creating a pod in the namespace 07/27/23 01:43:41.202 + STEP: Waiting for the pod to have running status 07/27/23 01:43:42.233 + Jul 27 01:43:42.233: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-8810" to be "running" + Jul 27 01:43:42.248: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 15.372336ms + Jul 27 01:43:44.257: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.024358302s + Jul 27 01:43:44.257: INFO: Pod "test-pod" satisfied condition "running" + STEP: Deleting the namespace 07/27/23 01:43:44.258 + STEP: Waiting for the namespace to be removed. 07/27/23 01:43:44.283 + STEP: Recreating the namespace 07/27/23 01:43:57.297 + STEP: Verifying there are no pods in the namespace 07/27/23 01:43:57.335 + [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 20:57:42.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] version v1 + Jul 27 01:43:57.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] version v1 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] version v1 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "proxy-1524" for this suite. 06/12/23 20:57:42.816 + STEP: Destroying namespace "namespaces-8840" for this suite. 07/27/23 01:43:57.359 + STEP: Destroying namespace "nsdeletetest-8810" for this suite. 07/27/23 01:43:57.386 + Jul 27 01:43:57.407: INFO: Namespace nsdeletetest-8810 was already deleted + STEP: Destroying namespace "nsdeletetest-4299" for this suite. 07/27/23 01:43:57.407 << End Captured GinkgoWriter Output ------------------------------ -[sig-api-machinery] ResourceQuota - should create a ResourceQuota and capture the life of a pod. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:230 -[BeforeEach] [sig-api-machinery] ResourceQuota +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 +[BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:57:42.836 -Jun 12 20:57:42.837: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 20:57:42.837 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:42.885 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:42.919 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 01:43:57.435 +Jul 27 01:43:57.435: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 01:43:57.436 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:57.485 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:57.493 +[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 -[It] should create a ResourceQuota and capture the life of a pod. [Conformance] - test/e2e/apimachinery/resource_quota.go:230 -STEP: Counting existing ResourceQuota 06/12/23 20:57:42.93 -STEP: Creating a ResourceQuota 06/12/23 20:57:47.944 -STEP: Ensuring resource quota status is calculated 06/12/23 20:57:47.963 -STEP: Creating a Pod that fits quota 06/12/23 20:57:49.982 -STEP: Ensuring ResourceQuota status captures the pod usage 06/12/23 20:57:50.022 -STEP: Not allowing a pod to be created that exceeds remaining quota 06/12/23 20:57:52.036 -STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) 06/12/23 20:57:52.046 -STEP: Ensuring a pod cannot update its resource requirements 06/12/23 20:57:52.058 -STEP: Ensuring attempts to update pod resource requirements did not change quota usage 06/12/23 20:57:52.074 -STEP: Deleting the pod 06/12/23 20:57:54.087 -STEP: Ensuring resource quota status released the pod usage 06/12/23 20:57:54.113 -[AfterEach] [sig-api-machinery] ResourceQuota +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 +STEP: Creating a pod to test downward API volume plugin 07/27/23 01:43:57.503 +Jul 27 01:43:57.527: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2" in namespace "downward-api-7069" to be "Succeeded or Failed" +Jul 27 01:43:57.536: INFO: Pod "downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.81069ms +Jul 27 01:43:59.546: INFO: Pod "downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018463827s +Jul 27 01:44:01.550: INFO: Pod "downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022387516s +STEP: Saw pod success 07/27/23 01:44:01.55 +Jul 27 01:44:01.550: INFO: Pod "downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2" satisfied condition "Succeeded or Failed" +Jul 27 01:44:01.567: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2 container client-container: +STEP: delete the pod 07/27/23 01:44:01.585 +Jul 27 01:44:01.611: INFO: Waiting for pod downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2 to disappear +Jul 27 01:44:01.626: INFO: Pod downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2 no longer exists +[AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 20:57:56.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 01:44:01.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-6801" for this suite. 06/12/23 20:57:56.142 +STEP: Destroying namespace "downward-api-7069" for this suite. 07/27/23 01:44:01.649 ------------------------------ -• [SLOW TEST] [13.326 seconds] -[sig-api-machinery] ResourceQuota -test/e2e/apimachinery/framework.go:23 - should create a ResourceQuota and capture the life of a pod. [Conformance] - test/e2e/apimachinery/resource_quota.go:230 +• [4.239 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:57:42.836 - Jun 12 20:57:42.837: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 20:57:42.837 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:42.885 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:42.919 - [BeforeEach] [sig-api-machinery] ResourceQuota + STEP: Creating a kubernetes client 07/27/23 01:43:57.435 + Jul 27 01:43:57.435: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 01:43:57.436 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:43:57.485 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:43:57.493 + [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 - [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:230 - STEP: Counting existing ResourceQuota 06/12/23 20:57:42.93 - STEP: Creating a ResourceQuota 06/12/23 20:57:47.944 - STEP: Ensuring resource quota status is calculated 06/12/23 20:57:47.963 - STEP: Creating a Pod that fits quota 06/12/23 20:57:49.982 - STEP: Ensuring ResourceQuota status captures the pod usage 06/12/23 20:57:50.022 - STEP: Not allowing a pod to be created that exceeds remaining quota 06/12/23 20:57:52.036 - STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) 06/12/23 20:57:52.046 - STEP: Ensuring a pod cannot update its resource requirements 06/12/23 20:57:52.058 - STEP: Ensuring attempts to update pod resource requirements did not change quota usage 06/12/23 20:57:52.074 - STEP: Deleting the pod 06/12/23 20:57:54.087 - STEP: Ensuring resource quota status released the pod usage 06/12/23 20:57:54.113 - [AfterEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 + STEP: Creating a pod to test downward API volume plugin 07/27/23 01:43:57.503 + Jul 27 01:43:57.527: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2" in namespace "downward-api-7069" to be "Succeeded or Failed" + Jul 27 01:43:57.536: INFO: Pod "downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.81069ms + Jul 27 01:43:59.546: INFO: Pod "downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018463827s + Jul 27 01:44:01.550: INFO: Pod "downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022387516s + STEP: Saw pod success 07/27/23 01:44:01.55 + Jul 27 01:44:01.550: INFO: Pod "downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2" satisfied condition "Succeeded or Failed" + Jul 27 01:44:01.567: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2 container client-container: + STEP: delete the pod 07/27/23 01:44:01.585 + Jul 27 01:44:01.611: INFO: Waiting for pod downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2 to disappear + Jul 27 01:44:01.626: INFO: Pod downwardapi-volume-73decce2-6611-4f20-9ebc-168ab0c5b7b2 no longer exists + [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 - Jun 12 20:57:56.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + Jul 27 01:44:01.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-6801" for this suite. 06/12/23 20:57:56.142 + STEP: Destroying namespace "downward-api-7069" for this suite. 
07/27/23 01:44:01.649 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Probing container - with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:108 -[BeforeEach] [sig-node] Probing container +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:221 +[BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:57:56.165 -Jun 12 20:57:56.165: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-probe 06/12/23 20:57:56.168 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:56.208 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:56.22 -[BeforeEach] [sig-node] Probing container +STEP: Creating a kubernetes client 07/27/23 01:44:01.675 +Jul 27 01:44:01.675: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 01:44:01.676 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:01.735 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:01.743 +[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 -[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:108 -[AfterEach] [sig-node] Probing container +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:221 +STEP: Creating a pod to test downward API volume plugin 07/27/23 01:44:01.757 +W0727 01:44:01.785480 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 01:44:01.785: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4" in namespace "projected-6702" to be "Succeeded or Failed" +Jul 27 01:44:01.793: INFO: Pod "downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.818496ms +Jul 27 01:44:03.803: INFO: Pod "downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01746334s +Jul 27 01:44:05.803: INFO: Pod "downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01790094s +STEP: Saw pod success 07/27/23 01:44:05.803 +Jul 27 01:44:05.803: INFO: Pod "downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4" satisfied condition "Succeeded or Failed" +Jul 27 01:44:05.811: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4 container client-container: +STEP: delete the pod 07/27/23 01:44:05.828 +Jul 27 01:44:05.849: INFO: Waiting for pod downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4 to disappear +Jul 27 01:44:05.859: INFO: Pod downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 20:58:56.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Probing container +Jul 27 01:44:05.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "container-probe-7300" for this suite. 06/12/23 20:58:56.291 +STEP: Destroying namespace "projected-6702" for this suite. 07/27/23 01:44:05.877 ------------------------------ -• [SLOW TEST] [60.147 seconds] -[sig-node] Probing container -test/e2e/common/node/framework.go:23 - with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:108 +• [4.226 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:221 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Probing container + [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:57:56.165 - Jun 12 20:57:56.165: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-probe 06/12/23 20:57:56.168 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:57:56.208 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:57:56.22 - [BeforeEach] [sig-node] Probing container + STEP: Creating a kubernetes client 07/27/23 01:44:01.675 + Jul 27 01:44:01.675: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 01:44:01.676 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:01.735 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:01.743 + [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 - [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:108 - [AfterEach] [sig-node] Probing container + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's cpu 
request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:221 + STEP: Creating a pod to test downward API volume plugin 07/27/23 01:44:01.757 + W0727 01:44:01.785480 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 01:44:01.785: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4" in namespace "projected-6702" to be "Succeeded or Failed" + Jul 27 01:44:01.793: INFO: Pod "downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.818496ms + Jul 27 01:44:03.803: INFO: Pod "downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01746334s + Jul 27 01:44:05.803: INFO: Pod "downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01790094s + STEP: Saw pod success 07/27/23 01:44:05.803 + Jul 27 01:44:05.803: INFO: Pod "downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4" satisfied condition "Succeeded or Failed" + Jul 27 01:44:05.811: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4 container client-container: + STEP: delete the pod 07/27/23 01:44:05.828 + Jul 27 01:44:05.849: INFO: Waiting for pod downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4 to disappear + Jul 27 01:44:05.859: INFO: Pod downwardapi-volume-97064529-98eb-42b7-a4b0-9dab15a941e4 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 20:58:56.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Probing container + Jul 27 01:44:05.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "container-probe-7300" for this suite. 06/12/23 20:58:56.291 + STEP: Destroying namespace "projected-6702" for this suite. 
07/27/23 01:44:05.877 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSS +SSSSSSSSS ------------------------------ -[sig-storage] CSIStorageCapacity - should support CSIStorageCapacities API operations [Conformance] - test/e2e/storage/csistoragecapacity.go:49 -[BeforeEach] [sig-storage] CSIStorageCapacity +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:108 +[BeforeEach] [sig-apps] DisruptionController set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:58:56.315 -Jun 12 20:58:56.315: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename csistoragecapacity 06/12/23 20:58:56.316 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:58:56.358 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:58:56.368 -[BeforeEach] [sig-storage] CSIStorageCapacity +STEP: Creating a kubernetes client 07/27/23 01:44:05.901 +Jul 27 01:44:05.901: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename disruption 07/27/23 01:44:05.902 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:05.947 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:05.955 +[BeforeEach] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:31 -[It] should support CSIStorageCapacities API operations [Conformance] - test/e2e/storage/csistoragecapacity.go:49 -STEP: getting /apis 06/12/23 20:58:56.379 -STEP: getting /apis/storage.k8s.io 06/12/23 20:58:56.393 -STEP: getting /apis/storage.k8s.io/v1 06/12/23 20:58:56.396 -STEP: creating 06/12/23 20:58:56.4 -STEP: watching 06/12/23 20:58:56.457 -Jun 12 20:58:56.457: INFO: starting watch -STEP: getting 06/12/23 20:58:56.476 -STEP: listing in namespace 06/12/23 20:58:56.484 -STEP: listing across namespaces 06/12/23 20:58:56.496 -STEP: patching 06/12/23 20:58:56.508 -STEP: updating 06/12/23 20:58:56.541 -Jun 12 20:58:56.555: INFO: waiting for watch events with expected annotations in namespace -Jun 12 20:58:56.555: INFO: waiting for watch events with expected annotations across namespace -STEP: deleting 06/12/23 20:58:56.556 -STEP: deleting a collection 06/12/23 20:58:56.589 -[AfterEach] [sig-storage] CSIStorageCapacity +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[It] should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:108 +STEP: creating the pdb 07/27/23 01:44:05.964 +STEP: Waiting for the pdb to be processed 07/27/23 01:44:05.979 +STEP: updating the pdb 07/27/23 01:44:07.997 +STEP: Waiting for the pdb to be processed 07/27/23 01:44:08.017 +STEP: patching the pdb 07/27/23 01:44:10.035 +STEP: Waiting for the pdb to be processed 07/27/23 01:44:10.063 +STEP: Waiting for the pdb to be deleted 07/27/23 01:44:10.086 +[AfterEach] [sig-apps] DisruptionController test/e2e/framework/node/init/init.go:32 -Jun 12 20:58:56.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity +Jul 27 01:44:10.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity +[DeferCleanup (Each)] [sig-apps] DisruptionController dump namespaces | framework.go:196 -[DeferCleanup (Each)] 
[sig-storage] CSIStorageCapacity +[DeferCleanup (Each)] [sig-apps] DisruptionController tear down framework | framework.go:193 -STEP: Destroying namespace "csistoragecapacity-8436" for this suite. 06/12/23 20:58:56.649 +STEP: Destroying namespace "disruption-9162" for this suite. 07/27/23 01:44:10.108 ------------------------------ -• [0.347 seconds] -[sig-storage] CSIStorageCapacity -test/e2e/storage/utils/framework.go:23 - should support CSIStorageCapacities API operations [Conformance] - test/e2e/storage/csistoragecapacity.go:49 +• [4.228 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:108 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] CSIStorageCapacity + [BeforeEach] [sig-apps] DisruptionController set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:58:56.315 - Jun 12 20:58:56.315: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename csistoragecapacity 06/12/23 20:58:56.316 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:58:56.358 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:58:56.368 - [BeforeEach] [sig-storage] CSIStorageCapacity + STEP: Creating a kubernetes client 07/27/23 01:44:05.901 + Jul 27 01:44:05.901: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename disruption 07/27/23 01:44:05.902 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:05.947 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:05.955 + [BeforeEach] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:31 - [It] should support CSIStorageCapacities API operations [Conformance] - test/e2e/storage/csistoragecapacity.go:49 - STEP: getting /apis 06/12/23 20:58:56.379 - STEP: getting /apis/storage.k8s.io 06/12/23 20:58:56.393 - STEP: getting /apis/storage.k8s.io/v1 06/12/23 20:58:56.396 - STEP: creating 06/12/23 20:58:56.4 - STEP: watching 06/12/23 20:58:56.457 - Jun 12 20:58:56.457: INFO: starting watch - STEP: getting 06/12/23 20:58:56.476 - STEP: listing in namespace 06/12/23 20:58:56.484 - STEP: listing across namespaces 06/12/23 20:58:56.496 - STEP: patching 06/12/23 20:58:56.508 - STEP: updating 06/12/23 20:58:56.541 - Jun 12 20:58:56.555: INFO: waiting for watch events with expected annotations in namespace - Jun 12 20:58:56.555: INFO: waiting for watch events with expected annotations across namespace - STEP: deleting 06/12/23 20:58:56.556 - STEP: deleting a collection 06/12/23 20:58:56.589 - [AfterEach] [sig-storage] CSIStorageCapacity + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [It] should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:108 + STEP: creating the pdb 07/27/23 01:44:05.964 + STEP: Waiting for the pdb to be processed 07/27/23 01:44:05.979 + STEP: updating the pdb 07/27/23 01:44:07.997 + STEP: Waiting for the pdb to be processed 07/27/23 01:44:08.017 + STEP: patching the pdb 07/27/23 01:44:10.035 + STEP: Waiting for the pdb to be processed 07/27/23 01:44:10.063 + STEP: Waiting for the pdb to be deleted 07/27/23 01:44:10.086 + [AfterEach] [sig-apps] DisruptionController test/e2e/framework/node/init/init.go:32 - Jun 12 20:58:56.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] 
[sig-storage] CSIStorageCapacity + Jul 27 01:44:10.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + [DeferCleanup (Each)] [sig-apps] DisruptionController dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + [DeferCleanup (Each)] [sig-apps] DisruptionController tear down framework | framework.go:193 - STEP: Destroying namespace "csistoragecapacity-8436" for this suite. 06/12/23 20:58:56.649 + STEP: Destroying namespace "disruption-9162" for this suite. 07/27/23 01:44:10.108 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Secrets - should be immutable if `immutable` field is set [Conformance] - test/e2e/common/storage/secrets_volume.go:386 -[BeforeEach] [sig-storage] Secrets +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 +[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:58:56.674 -Jun 12 20:58:56.674: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 20:58:56.676 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:58:56.72 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:58:56.732 -[BeforeEach] [sig-storage] Secrets +STEP: Creating a kubernetes client 07/27/23 01:44:10.13 +Jul 27 01:44:10.131: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename var-expansion 07/27/23 01:44:10.132 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:10.168 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:10.177 +[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 -[It] should be immutable if `immutable` field is set [Conformance] - test/e2e/common/storage/secrets_volume.go:386 -[AfterEach] [sig-storage] Secrets +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 +STEP: Creating a pod to test env composition 07/27/23 01:44:10.186 +Jul 27 01:44:10.246: INFO: Waiting up to 5m0s for pod "var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3" in namespace "var-expansion-1331" to be "Succeeded or Failed" +Jul 27 01:44:10.254: INFO: Pod "var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254097ms +Jul 27 01:44:12.292: INFO: Pod "var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046151436s +Jul 27 01:44:14.264: INFO: Pod "var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018484317s +STEP: Saw pod success 07/27/23 01:44:14.265 +Jul 27 01:44:14.265: INFO: Pod "var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3" satisfied condition "Succeeded or Failed" +Jul 27 01:44:14.274: INFO: Trying to get logs from node 10.245.128.19 pod var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3 container dapi-container: +STEP: delete the pod 07/27/23 01:44:14.296 +Jul 27 01:44:14.319: INFO: Waiting for pod var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3 to disappear +Jul 27 01:44:14.327: INFO: Pod var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3 no longer exists +[AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 -Jun 12 20:58:56.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Secrets +Jul 27 01:44:14.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-6054" for this suite. 06/12/23 20:58:56.882 +STEP: Destroying namespace "var-expansion-1331" for this suite. 07/27/23 01:44:14.342 ------------------------------ -• [0.225 seconds] -[sig-storage] Secrets -test/e2e/common/storage/framework.go:23 - should be immutable if `immutable` field is set [Conformance] - test/e2e/common/storage/secrets_volume.go:386 +• [4.233 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Secrets + [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:58:56.674 - Jun 12 20:58:56.674: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 20:58:56.676 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:58:56.72 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:58:56.732 - [BeforeEach] [sig-storage] Secrets + STEP: Creating a kubernetes client 07/27/23 01:44:10.13 + Jul 27 01:44:10.131: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename var-expansion 07/27/23 01:44:10.132 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:10.168 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:10.177 + [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 - [It] should be immutable if `immutable` field is set [Conformance] - test/e2e/common/storage/secrets_volume.go:386 - [AfterEach] [sig-storage] Secrets + [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 + STEP: Creating a pod to test env composition 07/27/23 01:44:10.186 + Jul 27 01:44:10.246: INFO: Waiting up to 5m0s for pod "var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3" in namespace "var-expansion-1331" to be "Succeeded or Failed" + Jul 27 01:44:10.254: INFO: Pod "var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3": 
Phase="Pending", Reason="", readiness=false. Elapsed: 8.254097ms + Jul 27 01:44:12.292: INFO: Pod "var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046151436s + Jul 27 01:44:14.264: INFO: Pod "var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018484317s + STEP: Saw pod success 07/27/23 01:44:14.265 + Jul 27 01:44:14.265: INFO: Pod "var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3" satisfied condition "Succeeded or Failed" + Jul 27 01:44:14.274: INFO: Trying to get logs from node 10.245.128.19 pod var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3 container dapi-container: + STEP: delete the pod 07/27/23 01:44:14.296 + Jul 27 01:44:14.319: INFO: Waiting for pod var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3 to disappear + Jul 27 01:44:14.327: INFO: Pod var-expansion-0d22c0e7-5d33-44f8-912f-d1bb8203dac3 no longer exists + [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 - Jun 12 20:58:56.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Secrets + Jul 27 01:44:14.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-6054" for this suite. 06/12/23 20:58:56.882 + STEP: Destroying namespace "var-expansion-1331" for this suite. 07/27/23 01:44:14.342 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +SSSSSSSSSSSSSS ------------------------------ -[sig-network] Services - should find a service from listing all namespaces [Conformance] - test/e2e/network/service.go:3219 -[BeforeEach] [sig-network] Services +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + test/e2e/apps/replica_set.go:154 +[BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:58:56.904 -Jun 12 20:58:56.904: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 20:58:56.907 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:58:56.951 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:58:56.961 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 01:44:14.364 +Jul 27 01:44:14.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replicaset 07/27/23 01:44:14.365 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:14.405 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:14.415 +[BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should find a service from listing all namespaces [Conformance] - test/e2e/network/service.go:3219 -STEP: fetching services 06/12/23 20:58:56.977 -[AfterEach] [sig-network] Services +[It] Replace and Patch tests [Conformance] + test/e2e/apps/replica_set.go:154 +Jul 27 01:44:14.468: INFO: Pod name sample-pod: Found 0 
pods out of 1 +Jul 27 01:44:19.479: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 07/27/23 01:44:19.479 +STEP: Scaling up "test-rs" replicaset 07/27/23 01:44:19.479 +Jul 27 01:44:19.507: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet 07/27/23 01:44:19.507 +W0727 01:44:19.528692 20 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" +Jul 27 01:44:19.537: INFO: observed ReplicaSet test-rs in namespace replicaset-3065 with ReadyReplicas 1, AvailableReplicas 1 +Jul 27 01:44:19.558: INFO: observed ReplicaSet test-rs in namespace replicaset-3065 with ReadyReplicas 1, AvailableReplicas 1 +Jul 27 01:44:19.606: INFO: observed ReplicaSet test-rs in namespace replicaset-3065 with ReadyReplicas 1, AvailableReplicas 1 +Jul 27 01:44:19.623: INFO: observed ReplicaSet test-rs in namespace replicaset-3065 with ReadyReplicas 1, AvailableReplicas 1 +Jul 27 01:44:21.194: INFO: observed ReplicaSet test-rs in namespace replicaset-3065 with ReadyReplicas 2, AvailableReplicas 2 +Jul 27 01:44:21.809: INFO: observed Replicaset test-rs in namespace replicaset-3065 with ReadyReplicas 3 found true +[AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 -Jun 12 20:58:57.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 01:44:21.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 -STEP: Destroying namespace "services-3535" for this suite. 06/12/23 20:58:57.05 +STEP: Destroying namespace "replicaset-3065" for this suite. 
07/27/23 01:44:21.823 ------------------------------ -• [0.166 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should find a service from listing all namespaces [Conformance] - test/e2e/network/service.go:3219 +• [SLOW TEST] [7.482 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + Replace and Patch tests [Conformance] + test/e2e/apps/replica_set.go:154 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:58:56.904 - Jun 12 20:58:56.904: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 20:58:56.907 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:58:56.951 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:58:56.961 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 01:44:14.364 + Jul 27 01:44:14.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replicaset 07/27/23 01:44:14.365 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:14.405 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:14.415 + [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should find a service from listing all namespaces [Conformance] - test/e2e/network/service.go:3219 - STEP: fetching services 06/12/23 20:58:56.977 - [AfterEach] [sig-network] Services + [It] Replace and Patch tests [Conformance] + test/e2e/apps/replica_set.go:154 + Jul 27 01:44:14.468: INFO: Pod name sample-pod: Found 0 pods out of 1 + Jul 27 01:44:19.479: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 07/27/23 01:44:19.479 + STEP: Scaling up "test-rs" replicaset 07/27/23 01:44:19.479 + Jul 27 01:44:19.507: INFO: Updating replica set "test-rs" + STEP: patching the ReplicaSet 07/27/23 01:44:19.507 + W0727 01:44:19.528692 20 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" + Jul 27 01:44:19.537: INFO: observed ReplicaSet test-rs in namespace replicaset-3065 with ReadyReplicas 1, AvailableReplicas 1 + Jul 27 01:44:19.558: INFO: observed ReplicaSet test-rs in namespace replicaset-3065 with ReadyReplicas 1, AvailableReplicas 1 + Jul 27 01:44:19.606: INFO: observed ReplicaSet test-rs in namespace replicaset-3065 with ReadyReplicas 1, AvailableReplicas 1 + Jul 27 01:44:19.623: INFO: observed ReplicaSet test-rs in namespace replicaset-3065 with ReadyReplicas 1, AvailableReplicas 1 + Jul 27 01:44:21.194: INFO: observed ReplicaSet test-rs in namespace replicaset-3065 with ReadyReplicas 2, AvailableReplicas 2 + Jul 27 01:44:21.809: INFO: observed Replicaset test-rs in namespace replicaset-3065 with ReadyReplicas 3 found true + [AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 - Jun 12 20:58:57.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 01:44:21.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | 
framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 - STEP: Destroying namespace "services-3535" for this suite. 06/12/23 20:58:57.05 + STEP: Destroying namespace "replicaset-3065" for this suite. 07/27/23 01:44:21.823 << End Captured GinkgoWriter Output ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should unconditionally reject operations on fail closed webhook [Conformance] - test/e2e/apimachinery/webhook.go:239 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 +[BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:58:57.078 -Jun 12 20:58:57.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 20:58:57.086 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:58:57.183 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:58:57.275 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:44:21.848 +Jul 27 01:44:21.848: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename gc 07/27/23 01:44:21.849 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:21.887 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:21.897 +[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 20:58:57.432 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 20:58:59.426 -STEP: Deploying the webhook pod 06/12/23 20:58:59.458 -STEP: Wait for the deployment to be ready 06/12/23 20:58:59.484 -Jun 12 20:58:59.501: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 20:59:01.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 58, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 58, 59, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 58, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 58, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 20:59:03.6 -STEP: Verifying the service has paired with the endpoint 06/12/23 20:59:03.638 -Jun 12 20:59:04.639: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 -[It] should unconditionally reject operations on fail closed webhook [Conformance] - test/e2e/apimachinery/webhook.go:239 -STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API 06/12/23 20:59:04.655 -STEP: create a namespace for the webhook 06/12/23 20:59:04.785 -STEP: create a configmap should be unconditionally rejected by the webhook 06/12/23 20:59:04.886 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 +STEP: create the rc1 07/27/23 01:44:21.92 +STEP: create the rc2 07/27/23 01:44:21.945 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 07/27/23 01:44:27.019 +STEP: delete the rc simpletest-rc-to-be-deleted 07/27/23 01:44:28.201 +STEP: wait for the rc to be deleted 07/27/23 01:44:28.24 +STEP: Gathering metrics 07/27/23 01:44:33.274 +W0727 01:44:33.293739 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Jul 27 01:44:33.293: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Jul 27 01:44:33.293: INFO: Deleting pod "simpletest-rc-to-be-deleted-4gvzz" in namespace "gc-5752" +Jul 27 01:44:33.323: INFO: Deleting pod "simpletest-rc-to-be-deleted-4stwd" in namespace "gc-5752" +Jul 27 01:44:33.350: INFO: Deleting pod "simpletest-rc-to-be-deleted-4vxxf" in namespace "gc-5752" +Jul 27 01:44:33.382: INFO: Deleting pod "simpletest-rc-to-be-deleted-57z5m" in namespace "gc-5752" +Jul 27 01:44:33.417: INFO: Deleting pod "simpletest-rc-to-be-deleted-58ksj" in namespace "gc-5752" +Jul 27 01:44:33.445: INFO: Deleting pod "simpletest-rc-to-be-deleted-597wg" in namespace "gc-5752" +Jul 27 01:44:33.466: INFO: Deleting pod "simpletest-rc-to-be-deleted-5dhnp" in namespace "gc-5752" +Jul 27 01:44:33.485: INFO: Deleting pod "simpletest-rc-to-be-deleted-5dr95" in namespace "gc-5752" +Jul 27 01:44:33.510: INFO: Deleting pod "simpletest-rc-to-be-deleted-5xvfs" in namespace "gc-5752" +Jul 27 01:44:33.534: INFO: Deleting pod "simpletest-rc-to-be-deleted-64b9c" in namespace "gc-5752" +Jul 27 01:44:33.557: INFO: Deleting pod "simpletest-rc-to-be-deleted-66qpd" in namespace "gc-5752" +Jul 27 01:44:33.582: INFO: Deleting pod "simpletest-rc-to-be-deleted-687g7" in namespace "gc-5752" +Jul 27 01:44:33.604: INFO: Deleting pod "simpletest-rc-to-be-deleted-6blq4" in namespace "gc-5752" +Jul 27 01:44:33.629: INFO: 
Deleting pod "simpletest-rc-to-be-deleted-6smkh" in namespace "gc-5752" +Jul 27 01:44:33.664: INFO: Deleting pod "simpletest-rc-to-be-deleted-6sts6" in namespace "gc-5752" +Jul 27 01:44:33.690: INFO: Deleting pod "simpletest-rc-to-be-deleted-6v29g" in namespace "gc-5752" +Jul 27 01:44:33.715: INFO: Deleting pod "simpletest-rc-to-be-deleted-6vw7r" in namespace "gc-5752" +Jul 27 01:44:33.740: INFO: Deleting pod "simpletest-rc-to-be-deleted-72mvg" in namespace "gc-5752" +Jul 27 01:44:33.777: INFO: Deleting pod "simpletest-rc-to-be-deleted-78nqp" in namespace "gc-5752" +Jul 27 01:44:33.801: INFO: Deleting pod "simpletest-rc-to-be-deleted-7rsbw" in namespace "gc-5752" +Jul 27 01:44:33.830: INFO: Deleting pod "simpletest-rc-to-be-deleted-7zts7" in namespace "gc-5752" +Jul 27 01:44:33.878: INFO: Deleting pod "simpletest-rc-to-be-deleted-8h2lp" in namespace "gc-5752" +Jul 27 01:44:33.907: INFO: Deleting pod "simpletest-rc-to-be-deleted-8kjmn" in namespace "gc-5752" +Jul 27 01:44:33.930: INFO: Deleting pod "simpletest-rc-to-be-deleted-8t8ss" in namespace "gc-5752" +Jul 27 01:44:33.955: INFO: Deleting pod "simpletest-rc-to-be-deleted-998lp" in namespace "gc-5752" +Jul 27 01:44:33.990: INFO: Deleting pod "simpletest-rc-to-be-deleted-99xjh" in namespace "gc-5752" +Jul 27 01:44:34.030: INFO: Deleting pod "simpletest-rc-to-be-deleted-9grww" in namespace "gc-5752" +Jul 27 01:44:34.056: INFO: Deleting pod "simpletest-rc-to-be-deleted-9pcmb" in namespace "gc-5752" +Jul 27 01:44:34.083: INFO: Deleting pod "simpletest-rc-to-be-deleted-9sffv" in namespace "gc-5752" +Jul 27 01:44:34.111: INFO: Deleting pod "simpletest-rc-to-be-deleted-bbtlb" in namespace "gc-5752" +Jul 27 01:44:34.152: INFO: Deleting pod "simpletest-rc-to-be-deleted-bfsj9" in namespace "gc-5752" +Jul 27 01:44:34.179: INFO: Deleting pod "simpletest-rc-to-be-deleted-c24zd" in namespace "gc-5752" +Jul 27 01:44:34.224: INFO: Deleting pod "simpletest-rc-to-be-deleted-c6t7r" in namespace "gc-5752" +Jul 27 01:44:34.255: INFO: Deleting pod "simpletest-rc-to-be-deleted-c9hnt" in namespace "gc-5752" +Jul 27 01:44:34.273: INFO: Deleting pod "simpletest-rc-to-be-deleted-cdnb2" in namespace "gc-5752" +Jul 27 01:44:34.294: INFO: Deleting pod "simpletest-rc-to-be-deleted-ckqq6" in namespace "gc-5752" +Jul 27 01:44:34.321: INFO: Deleting pod "simpletest-rc-to-be-deleted-crd7f" in namespace "gc-5752" +Jul 27 01:44:34.352: INFO: Deleting pod "simpletest-rc-to-be-deleted-ctv8z" in namespace "gc-5752" +Jul 27 01:44:34.379: INFO: Deleting pod "simpletest-rc-to-be-deleted-d8st8" in namespace "gc-5752" +Jul 27 01:44:34.408: INFO: Deleting pod "simpletest-rc-to-be-deleted-d9p2g" in namespace "gc-5752" +Jul 27 01:44:34.456: INFO: Deleting pod "simpletest-rc-to-be-deleted-dszqh" in namespace "gc-5752" +Jul 27 01:44:34.503: INFO: Deleting pod "simpletest-rc-to-be-deleted-f2tcg" in namespace "gc-5752" +Jul 27 01:44:34.527: INFO: Deleting pod "simpletest-rc-to-be-deleted-f48z8" in namespace "gc-5752" +Jul 27 01:44:34.549: INFO: Deleting pod "simpletest-rc-to-be-deleted-fkj98" in namespace "gc-5752" +Jul 27 01:44:34.576: INFO: Deleting pod "simpletest-rc-to-be-deleted-fzpg7" in namespace "gc-5752" +Jul 27 01:44:34.606: INFO: Deleting pod "simpletest-rc-to-be-deleted-gfg4z" in namespace "gc-5752" +Jul 27 01:44:34.628: INFO: Deleting pod "simpletest-rc-to-be-deleted-ghmgw" in namespace "gc-5752" +Jul 27 01:44:34.658: INFO: Deleting pod "simpletest-rc-to-be-deleted-gs6lx" in namespace "gc-5752" +Jul 27 01:44:34.681: INFO: Deleting pod "simpletest-rc-to-be-deleted-h8rmz" in 
namespace "gc-5752" +Jul 27 01:44:34.748: INFO: Deleting pod "simpletest-rc-to-be-deleted-hdpxf" in namespace "gc-5752" +[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 -Jun 12 20:59:05.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 01:44:34.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-4161" for this suite. 06/12/23 20:59:05.178 -STEP: Destroying namespace "webhook-4161-markers" for this suite. 06/12/23 20:59:05.202 +STEP: Destroying namespace "gc-5752" for this suite. 07/27/23 01:44:34.824 ------------------------------ -• [SLOW TEST] [8.143 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +• [SLOW TEST] [13.030 seconds] +[sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23 - should unconditionally reject operations on fail closed webhook [Conformance] - test/e2e/apimachinery/webhook.go:239 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:58:57.078 - Jun 12 20:58:57.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 20:58:57.086 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:58:57.183 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:58:57.275 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:44:21.848 + Jul 27 01:44:21.848: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename gc 07/27/23 01:44:21.849 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:21.887 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:21.897 + [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 20:58:57.432 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 20:58:59.426 - STEP: Deploying the webhook pod 06/12/23 20:58:59.458 - STEP: Wait for the deployment to be ready 06/12/23 20:58:59.484 - Jun 12 20:58:59.501: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 20:59:01.537: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 20, 58, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 58, 59, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 20, 58, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 20, 58, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 20:59:03.6 - STEP: Verifying the service has paired with the endpoint 06/12/23 20:59:03.638 - Jun 12 20:59:04.639: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should unconditionally reject operations on fail closed webhook [Conformance] - test/e2e/apimachinery/webhook.go:239 - STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API 06/12/23 20:59:04.655 - STEP: create a namespace for the webhook 06/12/23 20:59:04.785 - STEP: create a configmap should be unconditionally rejected by the webhook 06/12/23 20:59:04.886 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/framework/node/init/init.go:32 - Jun 12 20:59:05.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-4161" for this suite. 06/12/23 20:59:05.178 - STEP: Destroying namespace "webhook-4161-markers" for this suite. 
06/12/23 20:59:05.202 - << End Captured GinkgoWriter Output ------------------------------- -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-network] DNS - should provide DNS for services [Conformance] - test/e2e/network/dns.go:137 -[BeforeEach] [sig-network] DNS - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:59:05.229 -Jun 12 20:59:05.229: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename dns 06/12/23 20:59:05.236 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:59:05.342 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:59:05.375 -[BeforeEach] [sig-network] DNS - test/e2e/framework/metrics/init/init.go:31 -[It] should provide DNS for services [Conformance] - test/e2e/network/dns.go:137 -STEP: Creating a test headless service 06/12/23 20:59:05.392 -STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 139.86.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.86.139_udp@PTR;check="$$(dig +tcp +noall +answer +search 139.86.21.172.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/172.21.86.139_tcp@PTR;sleep 1; done - 06/12/23 20:59:05.448 -STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 139.86.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.86.139_udp@PTR;check="$$(dig +tcp +noall +answer +search 139.86.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.86.139_tcp@PTR;sleep 1; done - 06/12/23 20:59:05.449 -STEP: creating a pod to probe DNS 06/12/23 20:59:05.449 -STEP: submitting the pod to kubernetes 06/12/23 20:59:05.45 -Jun 12 20:59:05.475: INFO: Waiting up to 15m0s for pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c" in namespace "dns-1995" to be "running" -Jun 12 20:59:05.487: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.777351ms -Jun 12 20:59:07.499: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024040345s -Jun 12 20:59:09.511: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035730278s -Jun 12 20:59:11.502: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026664149s -Jun 12 20:59:13.497: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021652373s -Jun 12 20:59:15.498: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022921371s -Jun 12 20:59:17.500: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025249575s -Jun 12 20:59:19.498: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.022612342s -Jun 12 20:59:21.498: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.022940168s -Jun 12 20:59:23.500: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.024356974s -Jun 12 20:59:23.500: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c" satisfied condition "running" -STEP: retrieving the pod 06/12/23 20:59:23.5 -STEP: looking for the results for each expected name from probers 06/12/23 20:59:23.521 -Jun 12 20:59:23.566: INFO: Unable to read wheezy_udp@dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) -Jun 12 20:59:23.606: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) -Jun 12 20:59:23.628: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) -Jun 12 20:59:23.654: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) -Jun 12 20:59:23.760: INFO: Unable to read jessie_udp@dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) -Jun 12 20:59:23.792: INFO: Unable to read jessie_tcp@dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) -Jun 12 20:59:23.806: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) -Jun 12 20:59:23.825: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) -Jun 12 20:59:23.939: INFO: Lookups using dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c failed for: [wheezy_udp@dns-test-service.dns-1995.svc.cluster.local wheezy_tcp@dns-test-service.dns-1995.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local jessie_udp@dns-test-service.dns-1995.svc.cluster.local jessie_tcp@dns-test-service.dns-1995.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local] - -Jun 12 20:59:29.152: INFO: DNS probes using dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c succeeded - -STEP: deleting the pod 06/12/23 20:59:29.152 -STEP: deleting the test service 06/12/23 20:59:29.186 -STEP: deleting the test headless service 06/12/23 20:59:29.243 -[AfterEach] [sig-network] DNS + [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 + 
STEP: create the rc1 07/27/23 01:44:21.92 + STEP: create the rc2 07/27/23 01:44:21.945 + STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 07/27/23 01:44:27.019 + STEP: delete the rc simpletest-rc-to-be-deleted 07/27/23 01:44:28.201 + STEP: wait for the rc to be deleted 07/27/23 01:44:28.24 + STEP: Gathering metrics 07/27/23 01:44:33.274 + W0727 01:44:33.293739 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. + Jul 27 01:44:33.293: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + Jul 27 01:44:33.293: INFO: Deleting pod "simpletest-rc-to-be-deleted-4gvzz" in namespace "gc-5752" + Jul 27 01:44:33.323: INFO: Deleting pod "simpletest-rc-to-be-deleted-4stwd" in namespace "gc-5752" + Jul 27 01:44:33.350: INFO: Deleting pod "simpletest-rc-to-be-deleted-4vxxf" in namespace "gc-5752" + Jul 27 01:44:33.382: INFO: Deleting pod "simpletest-rc-to-be-deleted-57z5m" in namespace "gc-5752" + Jul 27 01:44:33.417: INFO: Deleting pod "simpletest-rc-to-be-deleted-58ksj" in namespace "gc-5752" + Jul 27 01:44:33.445: INFO: Deleting pod "simpletest-rc-to-be-deleted-597wg" in namespace "gc-5752" + Jul 27 01:44:33.466: INFO: Deleting pod "simpletest-rc-to-be-deleted-5dhnp" in namespace "gc-5752" + Jul 27 01:44:33.485: INFO: Deleting pod "simpletest-rc-to-be-deleted-5dr95" in namespace "gc-5752" + Jul 27 01:44:33.510: INFO: Deleting pod "simpletest-rc-to-be-deleted-5xvfs" in namespace "gc-5752" + Jul 27 01:44:33.534: INFO: Deleting pod "simpletest-rc-to-be-deleted-64b9c" in namespace "gc-5752" + Jul 27 01:44:33.557: INFO: Deleting pod "simpletest-rc-to-be-deleted-66qpd" in namespace "gc-5752" + Jul 27 01:44:33.582: INFO: Deleting pod "simpletest-rc-to-be-deleted-687g7" in namespace "gc-5752" + Jul 27 01:44:33.604: INFO: Deleting pod "simpletest-rc-to-be-deleted-6blq4" in namespace "gc-5752" + Jul 27 01:44:33.629: INFO: Deleting pod "simpletest-rc-to-be-deleted-6smkh" in namespace "gc-5752" + Jul 27 01:44:33.664: INFO: Deleting pod "simpletest-rc-to-be-deleted-6sts6" in namespace "gc-5752" + Jul 27 01:44:33.690: INFO: Deleting pod "simpletest-rc-to-be-deleted-6v29g" in namespace "gc-5752" + Jul 27 01:44:33.715: INFO: Deleting pod "simpletest-rc-to-be-deleted-6vw7r" in namespace "gc-5752" + Jul 27 01:44:33.740: INFO: Deleting pod "simpletest-rc-to-be-deleted-72mvg" in namespace "gc-5752" + Jul 27 01:44:33.777: INFO: Deleting pod "simpletest-rc-to-be-deleted-78nqp" in namespace "gc-5752" + Jul 27 01:44:33.801: INFO: Deleting pod "simpletest-rc-to-be-deleted-7rsbw" in namespace "gc-5752" + Jul 27 
01:44:33.830: INFO: Deleting pod "simpletest-rc-to-be-deleted-7zts7" in namespace "gc-5752" + Jul 27 01:44:33.878: INFO: Deleting pod "simpletest-rc-to-be-deleted-8h2lp" in namespace "gc-5752" + Jul 27 01:44:33.907: INFO: Deleting pod "simpletest-rc-to-be-deleted-8kjmn" in namespace "gc-5752" + Jul 27 01:44:33.930: INFO: Deleting pod "simpletest-rc-to-be-deleted-8t8ss" in namespace "gc-5752" + Jul 27 01:44:33.955: INFO: Deleting pod "simpletest-rc-to-be-deleted-998lp" in namespace "gc-5752" + Jul 27 01:44:33.990: INFO: Deleting pod "simpletest-rc-to-be-deleted-99xjh" in namespace "gc-5752" + Jul 27 01:44:34.030: INFO: Deleting pod "simpletest-rc-to-be-deleted-9grww" in namespace "gc-5752" + Jul 27 01:44:34.056: INFO: Deleting pod "simpletest-rc-to-be-deleted-9pcmb" in namespace "gc-5752" + Jul 27 01:44:34.083: INFO: Deleting pod "simpletest-rc-to-be-deleted-9sffv" in namespace "gc-5752" + Jul 27 01:44:34.111: INFO: Deleting pod "simpletest-rc-to-be-deleted-bbtlb" in namespace "gc-5752" + Jul 27 01:44:34.152: INFO: Deleting pod "simpletest-rc-to-be-deleted-bfsj9" in namespace "gc-5752" + Jul 27 01:44:34.179: INFO: Deleting pod "simpletest-rc-to-be-deleted-c24zd" in namespace "gc-5752" + Jul 27 01:44:34.224: INFO: Deleting pod "simpletest-rc-to-be-deleted-c6t7r" in namespace "gc-5752" + Jul 27 01:44:34.255: INFO: Deleting pod "simpletest-rc-to-be-deleted-c9hnt" in namespace "gc-5752" + Jul 27 01:44:34.273: INFO: Deleting pod "simpletest-rc-to-be-deleted-cdnb2" in namespace "gc-5752" + Jul 27 01:44:34.294: INFO: Deleting pod "simpletest-rc-to-be-deleted-ckqq6" in namespace "gc-5752" + Jul 27 01:44:34.321: INFO: Deleting pod "simpletest-rc-to-be-deleted-crd7f" in namespace "gc-5752" + Jul 27 01:44:34.352: INFO: Deleting pod "simpletest-rc-to-be-deleted-ctv8z" in namespace "gc-5752" + Jul 27 01:44:34.379: INFO: Deleting pod "simpletest-rc-to-be-deleted-d8st8" in namespace "gc-5752" + Jul 27 01:44:34.408: INFO: Deleting pod "simpletest-rc-to-be-deleted-d9p2g" in namespace "gc-5752" + Jul 27 01:44:34.456: INFO: Deleting pod "simpletest-rc-to-be-deleted-dszqh" in namespace "gc-5752" + Jul 27 01:44:34.503: INFO: Deleting pod "simpletest-rc-to-be-deleted-f2tcg" in namespace "gc-5752" + Jul 27 01:44:34.527: INFO: Deleting pod "simpletest-rc-to-be-deleted-f48z8" in namespace "gc-5752" + Jul 27 01:44:34.549: INFO: Deleting pod "simpletest-rc-to-be-deleted-fkj98" in namespace "gc-5752" + Jul 27 01:44:34.576: INFO: Deleting pod "simpletest-rc-to-be-deleted-fzpg7" in namespace "gc-5752" + Jul 27 01:44:34.606: INFO: Deleting pod "simpletest-rc-to-be-deleted-gfg4z" in namespace "gc-5752" + Jul 27 01:44:34.628: INFO: Deleting pod "simpletest-rc-to-be-deleted-ghmgw" in namespace "gc-5752" + Jul 27 01:44:34.658: INFO: Deleting pod "simpletest-rc-to-be-deleted-gs6lx" in namespace "gc-5752" + Jul 27 01:44:34.681: INFO: Deleting pod "simpletest-rc-to-be-deleted-h8rmz" in namespace "gc-5752" + Jul 27 01:44:34.748: INFO: Deleting pod "simpletest-rc-to-be-deleted-hdpxf" in namespace "gc-5752" + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Jul 27 01:44:34.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-5752" for this 
suite. 07/27/23 01:44:34.824 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 01:44:34.89 +Jul 27 01:44:34.891: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 01:44:34.908 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:34.966 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:34.981 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 +STEP: Creating configMap with name projected-configmap-test-volume-b897a874-f519-41fe-8b9f-1d85d34bcc89 07/27/23 01:44:34.999 +STEP: Creating a pod to test consume configMaps 07/27/23 01:44:35.016 +Jul 27 01:44:35.046: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2" in namespace "projected-2774" to be "Succeeded or Failed" +Jul 27 01:44:35.061: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.847369ms +Jul 27 01:44:37.072: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026054411s +Jul 27 01:44:39.071: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025511712s +Jul 27 01:44:41.070: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024046096s +Jul 27 01:44:43.071: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.025130224s +STEP: Saw pod success 07/27/23 01:44:43.071 +Jul 27 01:44:43.071: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2" satisfied condition "Succeeded or Failed" +Jul 27 01:44:43.079: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2 container agnhost-container: +STEP: delete the pod 07/27/23 01:44:43.098 +Jul 27 01:44:43.122: INFO: Waiting for pod pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2 to disappear +Jul 27 01:44:43.130: INFO: Pod pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2 no longer exists +[AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 -Jun 12 20:59:29.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] DNS +Jul 27 01:44:43.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 -STEP: Destroying namespace "dns-1995" for this suite. 06/12/23 20:59:29.3 +STEP: Destroying namespace "projected-2774" for this suite. 07/27/23 01:44:43.144 ------------------------------ -• [SLOW TEST] [24.085 seconds] -[sig-network] DNS -test/e2e/network/common/framework.go:23 - should provide DNS for services [Conformance] - test/e2e/network/dns.go:137 +• [SLOW TEST] [8.278 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] DNS + [BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:59:05.229 - Jun 12 20:59:05.229: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename dns 06/12/23 20:59:05.236 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:59:05.342 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:59:05.375 - [BeforeEach] [sig-network] DNS + STEP: Creating a kubernetes client 07/27/23 01:44:34.89 + Jul 27 01:44:34.891: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 01:44:34.908 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:34.966 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:34.981 + [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 - [It] should provide DNS for services [Conformance] - test/e2e/network/dns.go:137 - STEP: Creating a test headless service 06/12/23 20:59:05.392 - STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1995.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 139.86.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.86.139_udp@PTR;check="$$(dig +tcp +noall +answer +search 139.86.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.86.139_tcp@PTR;sleep 1; done - 06/12/23 20:59:05.448 - STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 139.86.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.86.139_udp@PTR;check="$$(dig +tcp +noall +answer +search 139.86.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.86.139_tcp@PTR;sleep 1; done - 06/12/23 20:59:05.449 - STEP: creating a pod to probe DNS 06/12/23 20:59:05.449 - STEP: submitting the pod to kubernetes 06/12/23 20:59:05.45 - Jun 12 20:59:05.475: INFO: Waiting up to 15m0s for pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c" in namespace "dns-1995" to be "running" - Jun 12 20:59:05.487: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.777351ms - Jun 12 20:59:07.499: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024040345s - Jun 12 20:59:09.511: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.035730278s - Jun 12 20:59:11.502: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026664149s - Jun 12 20:59:13.497: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021652373s - Jun 12 20:59:15.498: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022921371s - Jun 12 20:59:17.500: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025249575s - Jun 12 20:59:19.498: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.022612342s - Jun 12 20:59:21.498: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.022940168s - Jun 12 20:59:23.500: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c": Phase="Running", Reason="", readiness=true. Elapsed: 18.024356974s - Jun 12 20:59:23.500: INFO: Pod "dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c" satisfied condition "running" - STEP: retrieving the pod 06/12/23 20:59:23.5 - STEP: looking for the results for each expected name from probers 06/12/23 20:59:23.521 - Jun 12 20:59:23.566: INFO: Unable to read wheezy_udp@dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) - Jun 12 20:59:23.606: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) - Jun 12 20:59:23.628: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) - Jun 12 20:59:23.654: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) - Jun 12 20:59:23.760: INFO: Unable to read jessie_udp@dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) - Jun 12 20:59:23.792: INFO: Unable to read jessie_tcp@dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) - Jun 12 20:59:23.806: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) - Jun 12 20:59:23.825: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local from pod dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c: the server could not find the requested resource (get pods dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c) - Jun 12 20:59:23.939: INFO: 
Lookups using dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c failed for: [wheezy_udp@dns-test-service.dns-1995.svc.cluster.local wheezy_tcp@dns-test-service.dns-1995.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local jessie_udp@dns-test-service.dns-1995.svc.cluster.local jessie_tcp@dns-test-service.dns-1995.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1995.svc.cluster.local] - - Jun 12 20:59:29.152: INFO: DNS probes using dns-1995/dns-test-70a8da57-8725-41fb-8fe5-966ad4ba550c succeeded - - STEP: deleting the pod 06/12/23 20:59:29.152 - STEP: deleting the test service 06/12/23 20:59:29.186 - STEP: deleting the test headless service 06/12/23 20:59:29.243 - [AfterEach] [sig-network] DNS + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 + STEP: Creating configMap with name projected-configmap-test-volume-b897a874-f519-41fe-8b9f-1d85d34bcc89 07/27/23 01:44:34.999 + STEP: Creating a pod to test consume configMaps 07/27/23 01:44:35.016 + Jul 27 01:44:35.046: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2" in namespace "projected-2774" to be "Succeeded or Failed" + Jul 27 01:44:35.061: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.847369ms + Jul 27 01:44:37.072: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026054411s + Jul 27 01:44:39.071: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025511712s + Jul 27 01:44:41.070: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024046096s + Jul 27 01:44:43.071: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.025130224s + STEP: Saw pod success 07/27/23 01:44:43.071 + Jul 27 01:44:43.071: INFO: Pod "pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2" satisfied condition "Succeeded or Failed" + Jul 27 01:44:43.079: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2 container agnhost-container: + STEP: delete the pod 07/27/23 01:44:43.098 + Jul 27 01:44:43.122: INFO: Waiting for pod pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2 to disappear + Jul 27 01:44:43.130: INFO: Pod pod-projected-configmaps-bbde2097-78ec-4b09-a33b-2e39216718c2 no longer exists + [AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 - Jun 12 20:59:29.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] DNS + Jul 27 01:44:43.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 - STEP: Destroying namespace "dns-1995" for this suite. 06/12/23 20:59:29.3 + STEP: Destroying namespace "projected-2774" for this suite. 07/27/23 01:44:43.144 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Container Runtime blackbox test on terminated container - should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:232 -[BeforeEach] [sig-node] Container Runtime +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:59:29.324 -Jun 12 20:59:29.325: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-runtime 06/12/23 20:59:29.328 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:59:29.378 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:59:29.395 -[BeforeEach] [sig-node] Container Runtime +STEP: Creating a kubernetes client 07/27/23 01:44:43.17 +Jul 27 01:44:43.170: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-webhook 07/27/23 01:44:43.171 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:43.214 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:43.225 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:232 -STEP: create the container 06/12/23 20:59:29.416 -STEP: wait for the 
container to reach Succeeded 06/12/23 20:59:29.468 -STEP: get the container status 06/12/23 20:59:36.566 -STEP: the container should be terminated 06/12/23 20:59:36.575 -STEP: the termination message should be set 06/12/23 20:59:36.576 -Jun 12 20:59:36.576: INFO: Expected: &{} to match Container's Termination Message: -- -STEP: delete the container 06/12/23 20:59:36.576 -[AfterEach] [sig-node] Container Runtime +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 +STEP: Setting up server cert 07/27/23 01:44:43.238 +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 07/27/23 01:44:43.775 +STEP: Deploying the custom resource conversion webhook pod 07/27/23 01:44:43.809 +STEP: Wait for the deployment to be ready 07/27/23 01:44:43.834 +Jul 27 01:44:43.851: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 01:44:45.904 +STEP: Verifying the service has paired with the endpoint 07/27/23 01:44:45.97 +Jul 27 01:44:46.971: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 +Jul 27 01:44:46.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Creating a v1 custom resource 07/27/23 01:44:49.739 +STEP: v2 custom resource should be converted 07/27/23 01:44:49.756 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 20:59:36.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Container Runtime +Jul 27 01:44:50.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Container Runtime +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Container Runtime +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "container-runtime-4709" for this suite. 06/12/23 20:59:36.621 +STEP: Destroying namespace "crd-webhook-1460" for this suite. 
07/27/23 01:44:50.471 ------------------------------ -• [SLOW TEST] [7.316 seconds] -[sig-node] Container Runtime -test/e2e/common/node/framework.go:23 - blackbox test - test/e2e/common/node/runtime.go:44 - on terminated container - test/e2e/common/node/runtime.go:137 - should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:232 +• [SLOW TEST] [7.328 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Container Runtime + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:59:29.324 - Jun 12 20:59:29.325: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-runtime 06/12/23 20:59:29.328 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:59:29.378 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:59:29.395 - [BeforeEach] [sig-node] Container Runtime + STEP: Creating a kubernetes client 07/27/23 01:44:43.17 + Jul 27 01:44:43.170: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-webhook 07/27/23 01:44:43.171 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:43.214 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:43.225 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:232 - STEP: create the container 06/12/23 20:59:29.416 - STEP: wait for the container to reach Succeeded 06/12/23 20:59:29.468 - STEP: get the container status 06/12/23 20:59:36.566 - STEP: the container should be terminated 06/12/23 20:59:36.575 - STEP: the termination message should be set 06/12/23 20:59:36.576 - Jun 12 20:59:36.576: INFO: Expected: &{} to match Container's Termination Message: -- - STEP: delete the container 06/12/23 20:59:36.576 - [AfterEach] [sig-node] Container Runtime + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 + STEP: Setting up server cert 07/27/23 01:44:43.238 + STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 07/27/23 01:44:43.775 + STEP: Deploying the custom resource conversion webhook pod 07/27/23 01:44:43.809 + STEP: Wait for the deployment to be ready 07/27/23 01:44:43.834 + Jul 27 01:44:43.851: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 01:44:45.904 + STEP: Verifying the service has paired with the endpoint 07/27/23 01:44:45.97 + Jul 27 01:44:46.971: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 + [It] should be able to convert from CR v1 to CR v2 [Conformance] + 
test/e2e/apimachinery/crd_conversion_webhook.go:149 + Jul 27 01:44:46.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Creating a v1 custom resource 07/27/23 01:44:49.739 + STEP: v2 custom resource should be converted 07/27/23 01:44:49.756 + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 20:59:36.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Container Runtime + Jul 27 01:44:50.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Container Runtime + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Container Runtime + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "container-runtime-4709" for this suite. 06/12/23 20:59:36.621 + STEP: Destroying namespace "crd-webhook-1460" for this suite. 07/27/23 01:44:50.471 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSS +SSSSSSSSSS ------------------------------ -[sig-storage] Projected configMap - should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:47 -[BeforeEach] [sig-storage] Projected configMap +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:59:36.647 -Jun 12 20:59:36.647: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 20:59:36.649 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:59:36.692 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:59:36.699 -[BeforeEach] [sig-storage] Projected configMap +STEP: Creating a kubernetes client 07/27/23 01:44:50.499 +Jul 27 01:44:50.499: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 01:44:50.5 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:50.547 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:50.557 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:47 -STEP: Creating configMap with name projected-configmap-test-volume-d6bbd2dd-3521-4fdb-93f5-a09e78bbdca0 06/12/23 20:59:36.708 -STEP: Creating a pod to test consume configMaps 06/12/23 20:59:36.718 -W0612 20:59:36.751191 23 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "agnhost-container" must set 
securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") -Jun 12 20:59:36.751: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb" in namespace "projected-6238" to be "Succeeded or Failed" -Jun 12 20:59:36.762: INFO: Pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.402402ms -Jun 12 20:59:38.774: INFO: Pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023255345s -Jun 12 20:59:40.775: INFO: Pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02347716s -Jun 12 20:59:42.773: INFO: Pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021929439s -STEP: Saw pod success 06/12/23 20:59:42.773 -Jun 12 20:59:42.774: INFO: Pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb" satisfied condition "Succeeded or Failed" -Jun 12 20:59:42.782: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb container agnhost-container: -STEP: delete the pod 06/12/23 20:59:42.829 -Jun 12 20:59:42.867: INFO: Waiting for pod pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb to disappear -Jun 12 20:59:42.876: INFO: Pod pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb no longer exists -[AfterEach] [sig-storage] Projected configMap +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 01:44:50.679 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 01:44:50.946 +STEP: Deploying the webhook pod 07/27/23 01:44:51.011 +STEP: Wait for the deployment to be ready 07/27/23 01:44:51.075 +Jul 27 01:44:51.120: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 01:44:53.147 +STEP: Verifying the service has paired with the endpoint 07/27/23 01:44:53.18 +Jul 27 01:44:54.181: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 +Jul 27 01:44:54.192: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Registering the custom resource webhook via the AdmissionRegistration API 07/27/23 01:44:54.718 +STEP: Creating a custom resource that should be denied by the webhook 07/27/23 01:44:54.839 +STEP: Creating a custom resource whose deletion would be denied by the webhook 07/27/23 01:44:57.006 +STEP: Updating the custom resource with disallowed data should be denied 07/27/23 01:44:57.03 +STEP: Deleting the custom resource should be denied 07/27/23 01:44:57.068 +STEP: Remove the offending key and value from the custom resource data 07/27/23 01:44:57.098 +STEP: Deleting the updated custom resource should be successful 07/27/23 01:44:57.132 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
test/e2e/framework/node/init/init.go:32 -Jun 12 20:59:42.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected configMap +Jul 27 01:44:57.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "projected-6238" for this suite. 06/12/23 20:59:42.904 +STEP: Destroying namespace "webhook-6512" for this suite. 07/27/23 01:44:57.89 +STEP: Destroying namespace "webhook-6512-markers" for this suite. 07/27/23 01:44:57.943 ------------------------------ -• [SLOW TEST] [6.271 seconds] -[sig-storage] Projected configMap -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:47 +• [SLOW TEST] [7.495 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected configMap + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:59:36.647 - Jun 12 20:59:36.647: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 20:59:36.649 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:59:36.692 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:59:36.699 - [BeforeEach] [sig-storage] Projected configMap + STEP: Creating a kubernetes client 07/27/23 01:44:50.499 + Jul 27 01:44:50.499: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 01:44:50.5 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:50.547 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:50.557 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:47 - STEP: Creating configMap with name projected-configmap-test-volume-d6bbd2dd-3521-4fdb-93f5-a09e78bbdca0 06/12/23 20:59:36.708 - STEP: Creating a pod to test consume configMaps 06/12/23 20:59:36.718 - W0612 20:59:36.751191 23 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "agnhost-container" must set 
securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") - Jun 12 20:59:36.751: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb" in namespace "projected-6238" to be "Succeeded or Failed" - Jun 12 20:59:36.762: INFO: Pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.402402ms - Jun 12 20:59:38.774: INFO: Pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023255345s - Jun 12 20:59:40.775: INFO: Pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02347716s - Jun 12 20:59:42.773: INFO: Pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021929439s - STEP: Saw pod success 06/12/23 20:59:42.773 - Jun 12 20:59:42.774: INFO: Pod "pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb" satisfied condition "Succeeded or Failed" - Jun 12 20:59:42.782: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb container agnhost-container: - STEP: delete the pod 06/12/23 20:59:42.829 - Jun 12 20:59:42.867: INFO: Waiting for pod pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb to disappear - Jun 12 20:59:42.876: INFO: Pod pod-projected-configmaps-f983e108-ee23-47d0-b6a7-35450cf96fbb no longer exists - [AfterEach] [sig-storage] Projected configMap + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 01:44:50.679 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 01:44:50.946 + STEP: Deploying the webhook pod 07/27/23 01:44:51.011 + STEP: Wait for the deployment to be ready 07/27/23 01:44:51.075 + Jul 27 01:44:51.120: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 01:44:53.147 + STEP: Verifying the service has paired with the endpoint 07/27/23 01:44:53.18 + Jul 27 01:44:54.181: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 + Jul 27 01:44:54.192: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Registering the custom resource webhook via the AdmissionRegistration API 07/27/23 01:44:54.718 + STEP: Creating a custom resource that should be denied by the webhook 07/27/23 01:44:54.839 + STEP: Creating a custom resource whose deletion would be denied by the webhook 07/27/23 01:44:57.006 + STEP: Updating the custom resource with disallowed data should be denied 07/27/23 01:44:57.03 + STEP: Deleting the custom resource should be denied 07/27/23 01:44:57.068 + STEP: Remove the offending key and value from the custom resource data 07/27/23 01:44:57.098 + STEP: Deleting the updated custom resource should be successful 07/27/23 01:44:57.132 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 20:59:42.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected configMap + Jul 27 
01:44:57.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "projected-6238" for this suite. 06/12/23 20:59:42.904 + STEP: Destroying namespace "webhook-6512" for this suite. 07/27/23 01:44:57.89 + STEP: Destroying namespace "webhook-6512-markers" for this suite. 07/27/23 01:44:57.943 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSS ------------------------------ -[sig-network] DNS - should provide /etc/hosts entries for the cluster [Conformance] - test/e2e/network/dns.go:117 -[BeforeEach] [sig-network] DNS +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:59:42.923 -Jun 12 20:59:42.924: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename dns 06/12/23 20:59:42.926 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:59:42.992 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:59:43.003 -[BeforeEach] [sig-network] DNS +STEP: Creating a kubernetes client 07/27/23 01:44:57.994 +Jul 27 01:44:57.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename taint-single-pod 07/27/23 01:44:57.995 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:58.109 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:58.122 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] should provide /etc/hosts entries for the cluster [Conformance] - test/e2e/network/dns.go:117 -STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2988.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2988.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done - 06/12/23 20:59:43.036 -STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2988.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2988.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done - 06/12/23 20:59:43.036 -STEP: creating a pod to probe /etc/hosts 06/12/23 20:59:43.037 -STEP: submitting the pod to kubernetes 06/12/23 20:59:43.037 -Jun 12 20:59:43.068: INFO: Waiting up to 15m0s for pod "dns-test-798e841f-26fc-4fd2-b196-e8015cba808a" in namespace "dns-2988" to be "running" -Jun 12 20:59:43.087: INFO: Pod 
"dns-test-798e841f-26fc-4fd2-b196-e8015cba808a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.357961ms -Jun 12 20:59:45.098: INFO: Pod "dns-test-798e841f-26fc-4fd2-b196-e8015cba808a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029921864s -Jun 12 20:59:47.098: INFO: Pod "dns-test-798e841f-26fc-4fd2-b196-e8015cba808a": Phase="Running", Reason="", readiness=true. Elapsed: 4.030060696s -Jun 12 20:59:47.098: INFO: Pod "dns-test-798e841f-26fc-4fd2-b196-e8015cba808a" satisfied condition "running" -STEP: retrieving the pod 06/12/23 20:59:47.098 -STEP: looking for the results for each expected name from probers 06/12/23 20:59:47.109 -Jun 12 20:59:47.170: INFO: DNS probes using dns-2988/dns-test-798e841f-26fc-4fd2-b196-e8015cba808a succeeded - -STEP: deleting the pod 06/12/23 20:59:47.17 -[AfterEach] [sig-network] DNS +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/node/taints.go:170 +Jul 27 01:44:58.130: INFO: Waiting up to 1m0s for all nodes to be ready +Jul 27 01:45:58.323: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 +Jul 27 01:45:58.349: INFO: Starting informer... +STEP: Starting pod... 07/27/23 01:45:58.349 +Jul 27 01:45:58.588: INFO: Pod is running on 10.245.128.19. Tainting Node +STEP: Trying to apply a taint on the Node 07/27/23 01:45:58.588 +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 07/27/23 01:45:58.655 +STEP: Waiting short time to make sure Pod is queued for deletion 07/27/23 01:45:58.662 +Jul 27 01:45:58.663: INFO: Pod wasn't evicted. Proceeding +Jul 27 01:45:58.663: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 07/27/23 01:45:58.736 +STEP: Waiting some time to make sure that toleration time passed. 07/27/23 01:45:58.747 +Jul 27 01:47:13.749: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 20:59:47.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] DNS +Jul 27 01:47:13.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "dns-2988" for this suite. 06/12/23 20:59:47.22 +STEP: Destroying namespace "taint-single-pod-1143" for this suite. 
07/27/23 01:47:13.769 ------------------------------ -• [4.311 seconds] -[sig-network] DNS -test/e2e/network/common/framework.go:23 - should provide /etc/hosts entries for the cluster [Conformance] - test/e2e/network/dns.go:117 +• [SLOW TEST] [135.798 seconds] +[sig-node] NoExecuteTaintManager Single Pod [Serial] +test/e2e/node/framework.go:23 + removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] DNS + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:59:42.923 - Jun 12 20:59:42.924: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename dns 06/12/23 20:59:42.926 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:59:42.992 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:59:43.003 - [BeforeEach] [sig-network] DNS + STEP: Creating a kubernetes client 07/27/23 01:44:57.994 + Jul 27 01:44:57.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename taint-single-pod 07/27/23 01:44:57.995 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:44:58.109 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:44:58.122 + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] should provide /etc/hosts entries for the cluster [Conformance] - test/e2e/network/dns.go:117 - STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2988.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2988.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done - 06/12/23 20:59:43.036 - STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2988.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2988.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done - 06/12/23 20:59:43.036 - STEP: creating a pod to probe /etc/hosts 06/12/23 20:59:43.037 - STEP: submitting the pod to kubernetes 06/12/23 20:59:43.037 - Jun 12 20:59:43.068: INFO: Waiting up to 15m0s for pod "dns-test-798e841f-26fc-4fd2-b196-e8015cba808a" in namespace "dns-2988" to be "running" - Jun 12 20:59:43.087: INFO: Pod "dns-test-798e841f-26fc-4fd2-b196-e8015cba808a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.357961ms - Jun 12 20:59:45.098: INFO: Pod "dns-test-798e841f-26fc-4fd2-b196-e8015cba808a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029921864s - Jun 12 20:59:47.098: INFO: Pod "dns-test-798e841f-26fc-4fd2-b196-e8015cba808a": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.030060696s - Jun 12 20:59:47.098: INFO: Pod "dns-test-798e841f-26fc-4fd2-b196-e8015cba808a" satisfied condition "running" - STEP: retrieving the pod 06/12/23 20:59:47.098 - STEP: looking for the results for each expected name from probers 06/12/23 20:59:47.109 - Jun 12 20:59:47.170: INFO: DNS probes using dns-2988/dns-test-798e841f-26fc-4fd2-b196-e8015cba808a succeeded - - STEP: deleting the pod 06/12/23 20:59:47.17 - [AfterEach] [sig-network] DNS + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/node/taints.go:170 + Jul 27 01:44:58.130: INFO: Waiting up to 1m0s for all nodes to be ready + Jul 27 01:45:58.323: INFO: Waiting for terminating namespaces to be deleted... + [It] removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 + Jul 27 01:45:58.349: INFO: Starting informer... + STEP: Starting pod... 07/27/23 01:45:58.349 + Jul 27 01:45:58.588: INFO: Pod is running on 10.245.128.19. Tainting Node + STEP: Trying to apply a taint on the Node 07/27/23 01:45:58.588 + STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 07/27/23 01:45:58.655 + STEP: Waiting short time to make sure Pod is queued for deletion 07/27/23 01:45:58.662 + Jul 27 01:45:58.663: INFO: Pod wasn't evicted. Proceeding + Jul 27 01:45:58.663: INFO: Removing taint from Node + STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 07/27/23 01:45:58.736 + STEP: Waiting some time to make sure that toleration time passed. 07/27/23 01:45:58.747 + Jul 27 01:47:13.749: INFO: Pod wasn't evicted. Test successful + [AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 20:59:47.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] DNS + Jul 27 01:47:13.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "dns-2988" for this suite. 06/12/23 20:59:47.22 + STEP: Destroying namespace "taint-single-pod-1143" for this suite. 
07/27/23 01:47:13.769 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Garbage collector - should orphan pods created by rc if delete options say so [Conformance] - test/e2e/apimachinery/garbage_collector.go:370 -[BeforeEach] [sig-api-machinery] Garbage collector +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 20:59:47.254 -Jun 12 20:59:47.254: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename gc 06/12/23 20:59:47.256 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:59:47.295 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:59:47.314 -[BeforeEach] [sig-api-machinery] Garbage collector +STEP: Creating a kubernetes client 07/27/23 01:47:13.793 +Jul 27 01:47:13.793: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 01:47:13.794 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:13.837 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:13.847 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should orphan pods created by rc if delete options say so [Conformance] - test/e2e/apimachinery/garbage_collector.go:370 -STEP: create the rc 06/12/23 20:59:47.349 -STEP: delete the rc 06/12/23 20:59:52.448 -STEP: wait for the rc to be deleted 06/12/23 20:59:52.477 -STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 06/12/23 20:59:57.5 -STEP: Gathering metrics 06/12/23 21:00:27.573 -W0612 21:00:27.594183 23 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
-Jun 12 21:00:27.594: INFO: For apiserver_request_total: -For apiserver_request_latency_seconds: -For apiserver_init_events_total: -For garbage_collector_attempt_to_delete_queue_latency: -For garbage_collector_attempt_to_delete_work_duration: -For garbage_collector_attempt_to_orphan_queue_latency: -For garbage_collector_attempt_to_orphan_work_duration: -For garbage_collector_dirty_processing_latency_microseconds: -For garbage_collector_event_processing_latency_microseconds: -For garbage_collector_graph_changes_queue_latency: -For garbage_collector_graph_changes_work_duration: -For garbage_collector_orphan_processing_latency_microseconds: -For namespace_queue_latency: -For namespace_queue_latency_sum: -For namespace_queue_latency_count: -For namespace_retries: -For namespace_work_duration: -For namespace_work_duration_sum: -For namespace_work_duration_count: -For function_duration_seconds: -For errors_total: -For evicted_pods_total: - -Jun 12 21:00:27.594: INFO: Deleting pod "simpletest.rc-27cz9" in namespace "gc-8751" -Jun 12 21:00:27.622: INFO: Deleting pod "simpletest.rc-28b9q" in namespace "gc-8751" -Jun 12 21:00:27.648: INFO: Deleting pod "simpletest.rc-2hcrt" in namespace "gc-8751" -Jun 12 21:00:27.681: INFO: Deleting pod "simpletest.rc-2hn5l" in namespace "gc-8751" -Jun 12 21:00:27.733: INFO: Deleting pod "simpletest.rc-4489z" in namespace "gc-8751" -Jun 12 21:00:27.783: INFO: Deleting pod "simpletest.rc-45sl9" in namespace "gc-8751" -Jun 12 21:00:27.838: INFO: Deleting pod "simpletest.rc-48ngb" in namespace "gc-8751" -Jun 12 21:00:27.902: INFO: Deleting pod "simpletest.rc-4gqn4" in namespace "gc-8751" -Jun 12 21:00:27.941: INFO: Deleting pod "simpletest.rc-4snl7" in namespace "gc-8751" -Jun 12 21:00:28.007: INFO: Deleting pod "simpletest.rc-4z6xg" in namespace "gc-8751" -Jun 12 21:00:28.058: INFO: Deleting pod "simpletest.rc-589jh" in namespace "gc-8751" -Jun 12 21:00:28.171: INFO: Deleting pod "simpletest.rc-58ztm" in namespace "gc-8751" -Jun 12 21:00:28.221: INFO: Deleting pod "simpletest.rc-5n64l" in namespace "gc-8751" -Jun 12 21:00:28.270: INFO: Deleting pod "simpletest.rc-5v4sd" in namespace "gc-8751" -Jun 12 21:00:28.360: INFO: Deleting pod "simpletest.rc-66x8m" in namespace "gc-8751" -Jun 12 21:00:28.393: INFO: Deleting pod "simpletest.rc-678qw" in namespace "gc-8751" -Jun 12 21:00:28.477: INFO: Deleting pod "simpletest.rc-68k6j" in namespace "gc-8751" -Jun 12 21:00:28.668: INFO: Deleting pod "simpletest.rc-6d9r9" in namespace "gc-8751" -Jun 12 21:00:28.815: INFO: Deleting pod "simpletest.rc-6jb9z" in namespace "gc-8751" -Jun 12 21:00:28.851: INFO: Deleting pod "simpletest.rc-6m65d" in namespace "gc-8751" -Jun 12 21:00:28.919: INFO: Deleting pod "simpletest.rc-6pw7z" in namespace "gc-8751" -Jun 12 21:00:29.001: INFO: Deleting pod "simpletest.rc-6rsnm" in namespace "gc-8751" -Jun 12 21:00:29.079: INFO: Deleting pod "simpletest.rc-74bbn" in namespace "gc-8751" -Jun 12 21:00:29.132: INFO: Deleting pod "simpletest.rc-7t46j" in namespace "gc-8751" -Jun 12 21:00:29.170: INFO: Deleting pod "simpletest.rc-7tfr8" in namespace "gc-8751" -Jun 12 21:00:29.198: INFO: Deleting pod "simpletest.rc-7tvlx" in namespace "gc-8751" -Jun 12 21:00:29.273: INFO: Deleting pod "simpletest.rc-7x2c5" in namespace "gc-8751" -Jun 12 21:00:29.536: INFO: Deleting pod "simpletest.rc-85xh2" in namespace "gc-8751" -Jun 12 21:00:29.820: INFO: Deleting pod "simpletest.rc-885vf" in namespace "gc-8751" -Jun 12 21:00:29.863: INFO: Deleting pod "simpletest.rc-88zt4" in namespace "gc-8751" -Jun 12 21:00:29.903: 
INFO: Deleting pod "simpletest.rc-8rlg7" in namespace "gc-8751" -Jun 12 21:00:29.943: INFO: Deleting pod "simpletest.rc-9nks7" in namespace "gc-8751" -Jun 12 21:00:29.972: INFO: Deleting pod "simpletest.rc-ck54g" in namespace "gc-8751" -Jun 12 21:00:30.112: INFO: Deleting pod "simpletest.rc-clh9d" in namespace "gc-8751" -Jun 12 21:00:30.142: INFO: Deleting pod "simpletest.rc-cpb9d" in namespace "gc-8751" -Jun 12 21:00:30.188: INFO: Deleting pod "simpletest.rc-d4fbp" in namespace "gc-8751" -Jun 12 21:00:30.257: INFO: Deleting pod "simpletest.rc-d7zdd" in namespace "gc-8751" -Jun 12 21:00:30.335: INFO: Deleting pod "simpletest.rc-d9cdh" in namespace "gc-8751" -Jun 12 21:00:30.485: INFO: Deleting pod "simpletest.rc-dx6qx" in namespace "gc-8751" -Jun 12 21:00:30.534: INFO: Deleting pod "simpletest.rc-dxnnw" in namespace "gc-8751" -Jun 12 21:00:30.562: INFO: Deleting pod "simpletest.rc-f5wlk" in namespace "gc-8751" -Jun 12 21:00:30.592: INFO: Deleting pod "simpletest.rc-fq77j" in namespace "gc-8751" -Jun 12 21:00:30.678: INFO: Deleting pod "simpletest.rc-fs75l" in namespace "gc-8751" -Jun 12 21:00:30.734: INFO: Deleting pod "simpletest.rc-fsg95" in namespace "gc-8751" -Jun 12 21:00:30.821: INFO: Deleting pod "simpletest.rc-g4tjx" in namespace "gc-8751" -Jun 12 21:00:30.928: INFO: Deleting pod "simpletest.rc-ggnnk" in namespace "gc-8751" -Jun 12 21:00:30.971: INFO: Deleting pod "simpletest.rc-ghfp2" in namespace "gc-8751" -Jun 12 21:00:31.187: INFO: Deleting pod "simpletest.rc-gld82" in namespace "gc-8751" -Jun 12 21:00:31.212: INFO: Deleting pod "simpletest.rc-gsvh9" in namespace "gc-8751" -Jun 12 21:00:31.242: INFO: Deleting pod "simpletest.rc-gz4r6" in namespace "gc-8751" -Jun 12 21:00:31.267: INFO: Deleting pod "simpletest.rc-gzhmr" in namespace "gc-8751" -Jun 12 21:00:31.295: INFO: Deleting pod "simpletest.rc-h9zjd" in namespace "gc-8751" -Jun 12 21:00:31.332: INFO: Deleting pod "simpletest.rc-j2brt" in namespace "gc-8751" -Jun 12 21:00:31.375: INFO: Deleting pod "simpletest.rc-jgk8p" in namespace "gc-8751" -Jun 12 21:00:31.405: INFO: Deleting pod "simpletest.rc-jv5vw" in namespace "gc-8751" -Jun 12 21:00:31.445: INFO: Deleting pod "simpletest.rc-kdflt" in namespace "gc-8751" -Jun 12 21:00:31.490: INFO: Deleting pod "simpletest.rc-kf7mb" in namespace "gc-8751" -Jun 12 21:00:31.730: INFO: Deleting pod "simpletest.rc-khxgv" in namespace "gc-8751" -Jun 12 21:00:31.864: INFO: Deleting pod "simpletest.rc-kj6fq" in namespace "gc-8751" -Jun 12 21:00:31.901: INFO: Deleting pod "simpletest.rc-kj72n" in namespace "gc-8751" -Jun 12 21:00:31.994: INFO: Deleting pod "simpletest.rc-ksnnf" in namespace "gc-8751" -Jun 12 21:00:32.048: INFO: Deleting pod "simpletest.rc-ktqvw" in namespace "gc-8751" -Jun 12 21:00:32.075: INFO: Deleting pod "simpletest.rc-kwhtb" in namespace "gc-8751" -Jun 12 21:00:32.130: INFO: Deleting pod "simpletest.rc-l8z76" in namespace "gc-8751" -Jun 12 21:00:32.163: INFO: Deleting pod "simpletest.rc-lhgkp" in namespace "gc-8751" -Jun 12 21:00:32.191: INFO: Deleting pod "simpletest.rc-lkpmt" in namespace "gc-8751" -Jun 12 21:00:32.229: INFO: Deleting pod "simpletest.rc-mhmth" in namespace "gc-8751" -Jun 12 21:00:32.268: INFO: Deleting pod "simpletest.rc-mkjmb" in namespace "gc-8751" -Jun 12 21:00:32.304: INFO: Deleting pod "simpletest.rc-mwrsg" in namespace "gc-8751" -Jun 12 21:00:32.326: INFO: Deleting pod "simpletest.rc-mx5j9" in namespace "gc-8751" -Jun 12 21:00:32.357: INFO: Deleting pod "simpletest.rc-njgp8" in namespace "gc-8751" -Jun 12 21:00:32.396: INFO: Deleting pod 
"simpletest.rc-nlsdj" in namespace "gc-8751" -Jun 12 21:00:32.424: INFO: Deleting pod "simpletest.rc-nwn4w" in namespace "gc-8751" -Jun 12 21:00:32.477: INFO: Deleting pod "simpletest.rc-p4v4w" in namespace "gc-8751" -Jun 12 21:00:32.595: INFO: Deleting pod "simpletest.rc-pkshx" in namespace "gc-8751" -Jun 12 21:00:32.628: INFO: Deleting pod "simpletest.rc-pptkj" in namespace "gc-8751" -Jun 12 21:00:32.661: INFO: Deleting pod "simpletest.rc-qbgqf" in namespace "gc-8751" -Jun 12 21:00:32.692: INFO: Deleting pod "simpletest.rc-qvntl" in namespace "gc-8751" -Jun 12 21:00:32.716: INFO: Deleting pod "simpletest.rc-qxc45" in namespace "gc-8751" -Jun 12 21:00:32.748: INFO: Deleting pod "simpletest.rc-rbpxc" in namespace "gc-8751" -Jun 12 21:00:32.784: INFO: Deleting pod "simpletest.rc-rf4kd" in namespace "gc-8751" -Jun 12 21:00:32.843: INFO: Deleting pod "simpletest.rc-rmpmm" in namespace "gc-8751" -Jun 12 21:00:32.909: INFO: Deleting pod "simpletest.rc-s5fsr" in namespace "gc-8751" -Jun 12 21:00:32.946: INFO: Deleting pod "simpletest.rc-s5p8r" in namespace "gc-8751" -Jun 12 21:00:33.005: INFO: Deleting pod "simpletest.rc-skdlx" in namespace "gc-8751" -Jun 12 21:00:33.041: INFO: Deleting pod "simpletest.rc-smchl" in namespace "gc-8751" -Jun 12 21:00:33.104: INFO: Deleting pod "simpletest.rc-t8qmn" in namespace "gc-8751" -Jun 12 21:00:33.176: INFO: Deleting pod "simpletest.rc-tpgfv" in namespace "gc-8751" -Jun 12 21:00:33.252: INFO: Deleting pod "simpletest.rc-v2bfq" in namespace "gc-8751" -Jun 12 21:00:33.303: INFO: Deleting pod "simpletest.rc-vbrxx" in namespace "gc-8751" -Jun 12 21:00:33.337: INFO: Deleting pod "simpletest.rc-vfzwh" in namespace "gc-8751" -Jun 12 21:00:33.415: INFO: Deleting pod "simpletest.rc-w4s7w" in namespace "gc-8751" -Jun 12 21:00:33.452: INFO: Deleting pod "simpletest.rc-w99kg" in namespace "gc-8751" -Jun 12 21:00:33.514: INFO: Deleting pod "simpletest.rc-wqknh" in namespace "gc-8751" -Jun 12 21:00:33.556: INFO: Deleting pod "simpletest.rc-wxzpg" in namespace "gc-8751" -Jun 12 21:00:33.613: INFO: Deleting pod "simpletest.rc-x2pvm" in namespace "gc-8751" -Jun 12 21:00:33.647: INFO: Deleting pod "simpletest.rc-xc6f4" in namespace "gc-8751" -Jun 12 21:00:33.830: INFO: Deleting pod "simpletest.rc-z8mm6" in namespace "gc-8751" -Jun 12 21:00:33.904: INFO: Deleting pod "simpletest.rc-z9nqt" in namespace "gc-8751" -Jun 12 21:00:33.933: INFO: Deleting pod "simpletest.rc-z9t4z" in namespace "gc-8751" -[AfterEach] [sig-api-machinery] Garbage collector +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 01:47:13.928 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 01:47:14.444 +STEP: Deploying the webhook pod 07/27/23 01:47:14.475 +STEP: Wait for the deployment to be ready 07/27/23 01:47:14.504 +Jul 27 01:47:14.523: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 01:47:16.577 +STEP: Verifying the service has paired with the endpoint 07/27/23 01:47:16.612 +Jul 27 01:47:17.613: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 +STEP: Creating a mutating webhook configuration 07/27/23 01:47:17.624 +STEP: Updating a mutating webhook configuration's rules to not include the create operation 07/27/23 01:47:17.725 +STEP: Creating a configMap 
that should not be mutated 07/27/23 01:47:17.741 +STEP: Patching a mutating webhook configuration's rules to include the create operation 07/27/23 01:47:17.777 +STEP: Creating a configMap that should be mutated 07/27/23 01:47:17.794 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:00:33.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +Jul 27 01:47:17.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "gc-8751" for this suite. 06/12/23 21:00:33.981 +STEP: Destroying namespace "webhook-6388" for this suite. 07/27/23 01:47:18.01 +STEP: Destroying namespace "webhook-6388-markers" for this suite. 07/27/23 01:47:18.033 ------------------------------ -• [SLOW TEST] [46.744 seconds] -[sig-api-machinery] Garbage collector +• [4.266 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/framework.go:23 - should orphan pods created by rc if delete options say so [Conformance] - test/e2e/apimachinery/garbage_collector.go:370 + patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Garbage collector + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 20:59:47.254 - Jun 12 20:59:47.254: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename gc 06/12/23 20:59:47.256 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 20:59:47.295 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 20:59:47.314 - [BeforeEach] [sig-api-machinery] Garbage collector + STEP: Creating a kubernetes client 07/27/23 01:47:13.793 + Jul 27 01:47:13.793: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 01:47:13.794 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:13.837 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:13.847 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should orphan pods created by rc if delete options say so [Conformance] - test/e2e/apimachinery/garbage_collector.go:370 - STEP: create the rc 06/12/23 20:59:47.349 - STEP: delete the rc 06/12/23 20:59:52.448 - STEP: wait for the rc to be deleted 06/12/23 20:59:52.477 - STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 06/12/23 20:59:57.5 - STEP: Gathering metrics 06/12/23 21:00:27.573 - W0612 21:00:27.594183 23 metrics_grabber.go:151] Can't find 
kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. - Jun 12 21:00:27.594: INFO: For apiserver_request_total: - For apiserver_request_latency_seconds: - For apiserver_init_events_total: - For garbage_collector_attempt_to_delete_queue_latency: - For garbage_collector_attempt_to_delete_work_duration: - For garbage_collector_attempt_to_orphan_queue_latency: - For garbage_collector_attempt_to_orphan_work_duration: - For garbage_collector_dirty_processing_latency_microseconds: - For garbage_collector_event_processing_latency_microseconds: - For garbage_collector_graph_changes_queue_latency: - For garbage_collector_graph_changes_work_duration: - For garbage_collector_orphan_processing_latency_microseconds: - For namespace_queue_latency: - For namespace_queue_latency_sum: - For namespace_queue_latency_count: - For namespace_retries: - For namespace_work_duration: - For namespace_work_duration_sum: - For namespace_work_duration_count: - For function_duration_seconds: - For errors_total: - For evicted_pods_total: - - Jun 12 21:00:27.594: INFO: Deleting pod "simpletest.rc-27cz9" in namespace "gc-8751" - Jun 12 21:00:27.622: INFO: Deleting pod "simpletest.rc-28b9q" in namespace "gc-8751" - Jun 12 21:00:27.648: INFO: Deleting pod "simpletest.rc-2hcrt" in namespace "gc-8751" - Jun 12 21:00:27.681: INFO: Deleting pod "simpletest.rc-2hn5l" in namespace "gc-8751" - Jun 12 21:00:27.733: INFO: Deleting pod "simpletest.rc-4489z" in namespace "gc-8751" - Jun 12 21:00:27.783: INFO: Deleting pod "simpletest.rc-45sl9" in namespace "gc-8751" - Jun 12 21:00:27.838: INFO: Deleting pod "simpletest.rc-48ngb" in namespace "gc-8751" - Jun 12 21:00:27.902: INFO: Deleting pod "simpletest.rc-4gqn4" in namespace "gc-8751" - Jun 12 21:00:27.941: INFO: Deleting pod "simpletest.rc-4snl7" in namespace "gc-8751" - Jun 12 21:00:28.007: INFO: Deleting pod "simpletest.rc-4z6xg" in namespace "gc-8751" - Jun 12 21:00:28.058: INFO: Deleting pod "simpletest.rc-589jh" in namespace "gc-8751" - Jun 12 21:00:28.171: INFO: Deleting pod "simpletest.rc-58ztm" in namespace "gc-8751" - Jun 12 21:00:28.221: INFO: Deleting pod "simpletest.rc-5n64l" in namespace "gc-8751" - Jun 12 21:00:28.270: INFO: Deleting pod "simpletest.rc-5v4sd" in namespace "gc-8751" - Jun 12 21:00:28.360: INFO: Deleting pod "simpletest.rc-66x8m" in namespace "gc-8751" - Jun 12 21:00:28.393: INFO: Deleting pod "simpletest.rc-678qw" in namespace "gc-8751" - Jun 12 21:00:28.477: INFO: Deleting pod "simpletest.rc-68k6j" in namespace "gc-8751" - Jun 12 21:00:28.668: INFO: Deleting pod "simpletest.rc-6d9r9" in namespace "gc-8751" - Jun 12 21:00:28.815: INFO: Deleting pod "simpletest.rc-6jb9z" in namespace "gc-8751" - Jun 12 21:00:28.851: INFO: Deleting pod "simpletest.rc-6m65d" in namespace "gc-8751" - Jun 12 21:00:28.919: INFO: Deleting pod "simpletest.rc-6pw7z" in namespace "gc-8751" - Jun 12 21:00:29.001: INFO: Deleting pod "simpletest.rc-6rsnm" in namespace "gc-8751" - Jun 12 21:00:29.079: INFO: Deleting pod "simpletest.rc-74bbn" in namespace "gc-8751" - Jun 12 21:00:29.132: INFO: Deleting pod "simpletest.rc-7t46j" in namespace "gc-8751" - Jun 12 21:00:29.170: INFO: Deleting pod "simpletest.rc-7tfr8" in namespace "gc-8751" - Jun 12 21:00:29.198: INFO: Deleting pod "simpletest.rc-7tvlx" in namespace "gc-8751" - Jun 12 21:00:29.273: INFO: Deleting pod "simpletest.rc-7x2c5" in namespace "gc-8751" - Jun 12 21:00:29.536: INFO: Deleting pod "simpletest.rc-85xh2" in namespace "gc-8751" - Jun 12 21:00:29.820: INFO: Deleting pod 
"simpletest.rc-885vf" in namespace "gc-8751" - Jun 12 21:00:29.863: INFO: Deleting pod "simpletest.rc-88zt4" in namespace "gc-8751" - Jun 12 21:00:29.903: INFO: Deleting pod "simpletest.rc-8rlg7" in namespace "gc-8751" - Jun 12 21:00:29.943: INFO: Deleting pod "simpletest.rc-9nks7" in namespace "gc-8751" - Jun 12 21:00:29.972: INFO: Deleting pod "simpletest.rc-ck54g" in namespace "gc-8751" - Jun 12 21:00:30.112: INFO: Deleting pod "simpletest.rc-clh9d" in namespace "gc-8751" - Jun 12 21:00:30.142: INFO: Deleting pod "simpletest.rc-cpb9d" in namespace "gc-8751" - Jun 12 21:00:30.188: INFO: Deleting pod "simpletest.rc-d4fbp" in namespace "gc-8751" - Jun 12 21:00:30.257: INFO: Deleting pod "simpletest.rc-d7zdd" in namespace "gc-8751" - Jun 12 21:00:30.335: INFO: Deleting pod "simpletest.rc-d9cdh" in namespace "gc-8751" - Jun 12 21:00:30.485: INFO: Deleting pod "simpletest.rc-dx6qx" in namespace "gc-8751" - Jun 12 21:00:30.534: INFO: Deleting pod "simpletest.rc-dxnnw" in namespace "gc-8751" - Jun 12 21:00:30.562: INFO: Deleting pod "simpletest.rc-f5wlk" in namespace "gc-8751" - Jun 12 21:00:30.592: INFO: Deleting pod "simpletest.rc-fq77j" in namespace "gc-8751" - Jun 12 21:00:30.678: INFO: Deleting pod "simpletest.rc-fs75l" in namespace "gc-8751" - Jun 12 21:00:30.734: INFO: Deleting pod "simpletest.rc-fsg95" in namespace "gc-8751" - Jun 12 21:00:30.821: INFO: Deleting pod "simpletest.rc-g4tjx" in namespace "gc-8751" - Jun 12 21:00:30.928: INFO: Deleting pod "simpletest.rc-ggnnk" in namespace "gc-8751" - Jun 12 21:00:30.971: INFO: Deleting pod "simpletest.rc-ghfp2" in namespace "gc-8751" - Jun 12 21:00:31.187: INFO: Deleting pod "simpletest.rc-gld82" in namespace "gc-8751" - Jun 12 21:00:31.212: INFO: Deleting pod "simpletest.rc-gsvh9" in namespace "gc-8751" - Jun 12 21:00:31.242: INFO: Deleting pod "simpletest.rc-gz4r6" in namespace "gc-8751" - Jun 12 21:00:31.267: INFO: Deleting pod "simpletest.rc-gzhmr" in namespace "gc-8751" - Jun 12 21:00:31.295: INFO: Deleting pod "simpletest.rc-h9zjd" in namespace "gc-8751" - Jun 12 21:00:31.332: INFO: Deleting pod "simpletest.rc-j2brt" in namespace "gc-8751" - Jun 12 21:00:31.375: INFO: Deleting pod "simpletest.rc-jgk8p" in namespace "gc-8751" - Jun 12 21:00:31.405: INFO: Deleting pod "simpletest.rc-jv5vw" in namespace "gc-8751" - Jun 12 21:00:31.445: INFO: Deleting pod "simpletest.rc-kdflt" in namespace "gc-8751" - Jun 12 21:00:31.490: INFO: Deleting pod "simpletest.rc-kf7mb" in namespace "gc-8751" - Jun 12 21:00:31.730: INFO: Deleting pod "simpletest.rc-khxgv" in namespace "gc-8751" - Jun 12 21:00:31.864: INFO: Deleting pod "simpletest.rc-kj6fq" in namespace "gc-8751" - Jun 12 21:00:31.901: INFO: Deleting pod "simpletest.rc-kj72n" in namespace "gc-8751" - Jun 12 21:00:31.994: INFO: Deleting pod "simpletest.rc-ksnnf" in namespace "gc-8751" - Jun 12 21:00:32.048: INFO: Deleting pod "simpletest.rc-ktqvw" in namespace "gc-8751" - Jun 12 21:00:32.075: INFO: Deleting pod "simpletest.rc-kwhtb" in namespace "gc-8751" - Jun 12 21:00:32.130: INFO: Deleting pod "simpletest.rc-l8z76" in namespace "gc-8751" - Jun 12 21:00:32.163: INFO: Deleting pod "simpletest.rc-lhgkp" in namespace "gc-8751" - Jun 12 21:00:32.191: INFO: Deleting pod "simpletest.rc-lkpmt" in namespace "gc-8751" - Jun 12 21:00:32.229: INFO: Deleting pod "simpletest.rc-mhmth" in namespace "gc-8751" - Jun 12 21:00:32.268: INFO: Deleting pod "simpletest.rc-mkjmb" in namespace "gc-8751" - Jun 12 21:00:32.304: INFO: Deleting pod "simpletest.rc-mwrsg" in namespace "gc-8751" - Jun 12 21:00:32.326: INFO: 
Deleting pod "simpletest.rc-mx5j9" in namespace "gc-8751" - Jun 12 21:00:32.357: INFO: Deleting pod "simpletest.rc-njgp8" in namespace "gc-8751" - Jun 12 21:00:32.396: INFO: Deleting pod "simpletest.rc-nlsdj" in namespace "gc-8751" - Jun 12 21:00:32.424: INFO: Deleting pod "simpletest.rc-nwn4w" in namespace "gc-8751" - Jun 12 21:00:32.477: INFO: Deleting pod "simpletest.rc-p4v4w" in namespace "gc-8751" - Jun 12 21:00:32.595: INFO: Deleting pod "simpletest.rc-pkshx" in namespace "gc-8751" - Jun 12 21:00:32.628: INFO: Deleting pod "simpletest.rc-pptkj" in namespace "gc-8751" - Jun 12 21:00:32.661: INFO: Deleting pod "simpletest.rc-qbgqf" in namespace "gc-8751" - Jun 12 21:00:32.692: INFO: Deleting pod "simpletest.rc-qvntl" in namespace "gc-8751" - Jun 12 21:00:32.716: INFO: Deleting pod "simpletest.rc-qxc45" in namespace "gc-8751" - Jun 12 21:00:32.748: INFO: Deleting pod "simpletest.rc-rbpxc" in namespace "gc-8751" - Jun 12 21:00:32.784: INFO: Deleting pod "simpletest.rc-rf4kd" in namespace "gc-8751" - Jun 12 21:00:32.843: INFO: Deleting pod "simpletest.rc-rmpmm" in namespace "gc-8751" - Jun 12 21:00:32.909: INFO: Deleting pod "simpletest.rc-s5fsr" in namespace "gc-8751" - Jun 12 21:00:32.946: INFO: Deleting pod "simpletest.rc-s5p8r" in namespace "gc-8751" - Jun 12 21:00:33.005: INFO: Deleting pod "simpletest.rc-skdlx" in namespace "gc-8751" - Jun 12 21:00:33.041: INFO: Deleting pod "simpletest.rc-smchl" in namespace "gc-8751" - Jun 12 21:00:33.104: INFO: Deleting pod "simpletest.rc-t8qmn" in namespace "gc-8751" - Jun 12 21:00:33.176: INFO: Deleting pod "simpletest.rc-tpgfv" in namespace "gc-8751" - Jun 12 21:00:33.252: INFO: Deleting pod "simpletest.rc-v2bfq" in namespace "gc-8751" - Jun 12 21:00:33.303: INFO: Deleting pod "simpletest.rc-vbrxx" in namespace "gc-8751" - Jun 12 21:00:33.337: INFO: Deleting pod "simpletest.rc-vfzwh" in namespace "gc-8751" - Jun 12 21:00:33.415: INFO: Deleting pod "simpletest.rc-w4s7w" in namespace "gc-8751" - Jun 12 21:00:33.452: INFO: Deleting pod "simpletest.rc-w99kg" in namespace "gc-8751" - Jun 12 21:00:33.514: INFO: Deleting pod "simpletest.rc-wqknh" in namespace "gc-8751" - Jun 12 21:00:33.556: INFO: Deleting pod "simpletest.rc-wxzpg" in namespace "gc-8751" - Jun 12 21:00:33.613: INFO: Deleting pod "simpletest.rc-x2pvm" in namespace "gc-8751" - Jun 12 21:00:33.647: INFO: Deleting pod "simpletest.rc-xc6f4" in namespace "gc-8751" - Jun 12 21:00:33.830: INFO: Deleting pod "simpletest.rc-z8mm6" in namespace "gc-8751" - Jun 12 21:00:33.904: INFO: Deleting pod "simpletest.rc-z9nqt" in namespace "gc-8751" - Jun 12 21:00:33.933: INFO: Deleting pod "simpletest.rc-z9t4z" in namespace "gc-8751" - [AfterEach] [sig-api-machinery] Garbage collector + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 01:47:13.928 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 01:47:14.444 + STEP: Deploying the webhook pod 07/27/23 01:47:14.475 + STEP: Wait for the deployment to be ready 07/27/23 01:47:14.504 + Jul 27 01:47:14.523: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 01:47:16.577 + STEP: Verifying the service has paired with the endpoint 07/27/23 01:47:16.612 + Jul 27 01:47:17.613: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] patching/updating a mutating webhook should work [Conformance] + 
test/e2e/apimachinery/webhook.go:508 + STEP: Creating a mutating webhook configuration 07/27/23 01:47:17.624 + STEP: Updating a mutating webhook configuration's rules to not include the create operation 07/27/23 01:47:17.725 + STEP: Creating a configMap that should not be mutated 07/27/23 01:47:17.741 + STEP: Patching a mutating webhook configuration's rules to include the create operation 07/27/23 01:47:17.777 + STEP: Creating a configMap that should be mutated 07/27/23 01:47:17.794 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:00:33.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + Jul 27 01:47:17.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "gc-8751" for this suite. 06/12/23 21:00:33.981 + STEP: Destroying namespace "webhook-6388" for this suite. 07/27/23 01:47:18.01 + STEP: Destroying namespace "webhook-6388-markers" for this suite. 07/27/23 01:47:18.033 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] server version - should find the server version [Conformance] - test/e2e/apimachinery/server_version.go:39 -[BeforeEach] [sig-api-machinery] server version +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 +[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:00:34.01 -Jun 12 21:00:34.010: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename server-version 06/12/23 21:00:34.017 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:00:34.078 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:00:34.088 -[BeforeEach] [sig-api-machinery] server version +STEP: Creating a kubernetes client 07/27/23 01:47:18.061 +Jul 27 01:47:18.061: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename var-expansion 07/27/23 01:47:18.061 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:18.112 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:18.122 +[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 -[It] should find the server version [Conformance] - test/e2e/apimachinery/server_version.go:39 -STEP: Request ServerVersion 06/12/23 21:00:34.103 -STEP: Confirm major version 06/12/23 21:00:34.107 -Jun 12 21:00:34.108: INFO: Major version: 1 -STEP: Confirm minor version 06/12/23 21:00:34.108 -Jun 12 21:00:34.108: INFO: cleanMinorVersion: 26 -Jun 12 21:00:34.108: INFO: 
Minor version: 26 -[AfterEach] [sig-api-machinery] server version +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 +Jul 27 01:47:18.163: INFO: Waiting up to 2m0s for pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76" in namespace "var-expansion-7258" to be "container 0 failed with reason CreateContainerConfigError" +Jul 27 01:47:18.171: INFO: Pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145099ms +Jul 27 01:47:20.182: INFO: Pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018296444s +Jul 27 01:47:20.182: INFO: Pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76" satisfied condition "container 0 failed with reason CreateContainerConfigError" +Jul 27 01:47:20.182: INFO: Deleting pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76" in namespace "var-expansion-7258" +Jul 27 01:47:20.194: INFO: Wait up to 5m0s for pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76" to be fully deleted +[AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 -Jun 12 21:00:34.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] server version +Jul 27 01:47:24.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] server version +[DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] server version +[DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 -STEP: Destroying namespace "server-version-1485" for this suite. 06/12/23 21:00:34.138 +STEP: Destroying namespace "var-expansion-7258" for this suite. 
07/27/23 01:47:24.326 ------------------------------ -• [0.144 seconds] -[sig-api-machinery] server version -test/e2e/apimachinery/framework.go:23 - should find the server version [Conformance] - test/e2e/apimachinery/server_version.go:39 +• [SLOW TEST] [6.289 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] server version + [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:00:34.01 - Jun 12 21:00:34.010: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename server-version 06/12/23 21:00:34.017 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:00:34.078 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:00:34.088 - [BeforeEach] [sig-api-machinery] server version + STEP: Creating a kubernetes client 07/27/23 01:47:18.061 + Jul 27 01:47:18.061: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename var-expansion 07/27/23 01:47:18.061 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:18.112 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:18.122 + [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 - [It] should find the server version [Conformance] - test/e2e/apimachinery/server_version.go:39 - STEP: Request ServerVersion 06/12/23 21:00:34.103 - STEP: Confirm major version 06/12/23 21:00:34.107 - Jun 12 21:00:34.108: INFO: Major version: 1 - STEP: Confirm minor version 06/12/23 21:00:34.108 - Jun 12 21:00:34.108: INFO: cleanMinorVersion: 26 - Jun 12 21:00:34.108: INFO: Minor version: 26 - [AfterEach] [sig-api-machinery] server version + [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 + Jul 27 01:47:18.163: INFO: Waiting up to 2m0s for pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76" in namespace "var-expansion-7258" to be "container 0 failed with reason CreateContainerConfigError" + Jul 27 01:47:18.171: INFO: Pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145099ms + Jul 27 01:47:20.182: INFO: Pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018296444s + Jul 27 01:47:20.182: INFO: Pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76" satisfied condition "container 0 failed with reason CreateContainerConfigError" + Jul 27 01:47:20.182: INFO: Deleting pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76" in namespace "var-expansion-7258" + Jul 27 01:47:20.194: INFO: Wait up to 5m0s for pod "var-expansion-dea34a07-ad02-4970-a378-8bdbaf8a0d76" to be fully deleted + [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 - Jun 12 21:00:34.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] server version + Jul 27 01:47:24.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] server version + [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] server version + [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 - STEP: Destroying namespace "server-version-1485" for this suite. 06/12/23 21:00:34.138 + STEP: Destroying namespace "var-expansion-7258" for this suite. 07/27/23 01:47:24.326 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Watchers - should be able to start watching from a specific resource version [Conformance] - test/e2e/apimachinery/watch.go:142 -[BeforeEach] [sig-api-machinery] Watchers +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:00:34.162 -Jun 12 21:00:34.162: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename watch 06/12/23 21:00:34.168 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:00:34.243 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:00:34.26 -[BeforeEach] [sig-api-machinery] Watchers +STEP: Creating a kubernetes client 07/27/23 01:47:24.351 +Jul 27 01:47:24.351: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 01:47:24.352 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:24.396 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:24.421 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[It] should be able to start watching from a specific resource version [Conformance] - test/e2e/apimachinery/watch.go:142 -STEP: creating a new configmap 06/12/23 21:00:34.286 -STEP: modifying the configmap once 06/12/23 21:00:34.328 -STEP: modifying the configmap a second time 06/12/23 21:00:34.461 -STEP: deleting the configmap 06/12/23 21:00:34.485 -STEP: creating a watch on configmaps from the resource version returned by the first update 06/12/23 21:00:34.517 -STEP: Expecting to observe notifications for all changes to the configmap after the first update 06/12/23 21:00:34.525 -Jun 12 21:00:34.526: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3046 
98711f99-cc10-40e7-b2c3-34b226e207f1 86551 0 2023-06-12 21:00:34 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-06-12 21:00:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:00:34.526: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3046 98711f99-cc10-40e7-b2c3-34b226e207f1 86552 0 2023-06-12 21:00:34 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-06-12 21:00:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} -[AfterEach] [sig-api-machinery] Watchers +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 +STEP: Creating a pod to test emptydir volume type on node default medium 07/27/23 01:47:24.429 +Jul 27 01:47:24.458: INFO: Waiting up to 5m0s for pod "pod-7cb9d872-e8b9-40b0-acb2-24859182a611" in namespace "emptydir-111" to be "Succeeded or Failed" +Jul 27 01:47:24.467: INFO: Pod "pod-7cb9d872-e8b9-40b0-acb2-24859182a611": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55463ms +Jul 27 01:47:26.476: INFO: Pod "pod-7cb9d872-e8b9-40b0-acb2-24859182a611": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017850416s +Jul 27 01:47:28.483: INFO: Pod "pod-7cb9d872-e8b9-40b0-acb2-24859182a611": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024879206s +STEP: Saw pod success 07/27/23 01:47:28.483 +Jul 27 01:47:28.483: INFO: Pod "pod-7cb9d872-e8b9-40b0-acb2-24859182a611" satisfied condition "Succeeded or Failed" +Jul 27 01:47:28.495: INFO: Trying to get logs from node 10.245.128.19 pod pod-7cb9d872-e8b9-40b0-acb2-24859182a611 container test-container: +STEP: delete the pod 07/27/23 01:47:28.543 +Jul 27 01:47:28.563: INFO: Waiting for pod pod-7cb9d872-e8b9-40b0-acb2-24859182a611 to disappear +Jul 27 01:47:28.571: INFO: Pod pod-7cb9d872-e8b9-40b0-acb2-24859182a611 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:00:34.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Watchers +Jul 27 01:47:28.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Watchers +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Watchers +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "watch-3046" for this suite. 06/12/23 21:00:34.603 +STEP: Destroying namespace "emptydir-111" for this suite. 
07/27/23 01:47:28.588 ------------------------------ -• [0.631 seconds] -[sig-api-machinery] Watchers -test/e2e/apimachinery/framework.go:23 - should be able to start watching from a specific resource version [Conformance] - test/e2e/apimachinery/watch.go:142 +• [4.266 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Watchers + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:00:34.162 - Jun 12 21:00:34.162: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename watch 06/12/23 21:00:34.168 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:00:34.243 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:00:34.26 - [BeforeEach] [sig-api-machinery] Watchers + STEP: Creating a kubernetes client 07/27/23 01:47:24.351 + Jul 27 01:47:24.351: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 01:47:24.352 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:24.396 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:24.421 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [It] should be able to start watching from a specific resource version [Conformance] - test/e2e/apimachinery/watch.go:142 - STEP: creating a new configmap 06/12/23 21:00:34.286 - STEP: modifying the configmap once 06/12/23 21:00:34.328 - STEP: modifying the configmap a second time 06/12/23 21:00:34.461 - STEP: deleting the configmap 06/12/23 21:00:34.485 - STEP: creating a watch on configmaps from the resource version returned by the first update 06/12/23 21:00:34.517 - STEP: Expecting to observe notifications for all changes to the configmap after the first update 06/12/23 21:00:34.525 - Jun 12 21:00:34.526: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3046 98711f99-cc10-40e7-b2c3-34b226e207f1 86551 0 2023-06-12 21:00:34 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-06-12 21:00:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:00:34.526: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3046 98711f99-cc10-40e7-b2c3-34b226e207f1 86552 0 2023-06-12 21:00:34 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-06-12 21:00:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} - [AfterEach] [sig-api-machinery] Watchers + [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 + STEP: Creating a pod to test emptydir volume type on node default medium 07/27/23 01:47:24.429 + Jul 27 01:47:24.458: INFO: Waiting up to 5m0s for pod 
"pod-7cb9d872-e8b9-40b0-acb2-24859182a611" in namespace "emptydir-111" to be "Succeeded or Failed" + Jul 27 01:47:24.467: INFO: Pod "pod-7cb9d872-e8b9-40b0-acb2-24859182a611": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55463ms + Jul 27 01:47:26.476: INFO: Pod "pod-7cb9d872-e8b9-40b0-acb2-24859182a611": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017850416s + Jul 27 01:47:28.483: INFO: Pod "pod-7cb9d872-e8b9-40b0-acb2-24859182a611": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024879206s + STEP: Saw pod success 07/27/23 01:47:28.483 + Jul 27 01:47:28.483: INFO: Pod "pod-7cb9d872-e8b9-40b0-acb2-24859182a611" satisfied condition "Succeeded or Failed" + Jul 27 01:47:28.495: INFO: Trying to get logs from node 10.245.128.19 pod pod-7cb9d872-e8b9-40b0-acb2-24859182a611 container test-container: + STEP: delete the pod 07/27/23 01:47:28.543 + Jul 27 01:47:28.563: INFO: Waiting for pod pod-7cb9d872-e8b9-40b0-acb2-24859182a611 to disappear + Jul 27 01:47:28.571: INFO: Pod pod-7cb9d872-e8b9-40b0-acb2-24859182a611 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:00:34.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Watchers + Jul 27 01:47:28.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Watchers + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Watchers + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "watch-3046" for this suite. 06/12/23 21:00:34.603 + STEP: Destroying namespace "emptydir-111" for this suite. 
07/27/23 01:47:28.588 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Discovery - should validate PreferredVersion for each APIGroup [Conformance] - test/e2e/apimachinery/discovery.go:122 -[BeforeEach] [sig-api-machinery] Discovery +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 +[BeforeEach] [sig-node] Pods set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:00:34.822 -Jun 12 21:00:34.822: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename discovery 06/12/23 21:00:34.825 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:00:35.147 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:00:35.303 -[BeforeEach] [sig-api-machinery] Discovery +STEP: Creating a kubernetes client 07/27/23 01:47:28.619 +Jul 27 01:47:28.619: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 01:47:28.62 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:28.661 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:28.676 +[BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] Discovery - test/e2e/apimachinery/discovery.go:43 -STEP: Setting up server cert 06/12/23 21:00:35.329 -[It] should validate PreferredVersion for each APIGroup [Conformance] - test/e2e/apimachinery/discovery.go:122 -Jun 12 21:00:38.024: INFO: Checking APIGroup: apiregistration.k8s.io -Jun 12 21:00:38.170: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 -Jun 12 21:00:38.170: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] -Jun 12 21:00:38.170: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 -Jun 12 21:00:38.170: INFO: Checking APIGroup: apps -Jun 12 21:00:38.194: INFO: PreferredVersion.GroupVersion: apps/v1 -Jun 12 21:00:38.194: INFO: Versions found [{apps/v1 v1}] -Jun 12 21:00:38.194: INFO: apps/v1 matches apps/v1 -Jun 12 21:00:38.194: INFO: Checking APIGroup: events.k8s.io -Jun 12 21:00:38.200: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 -Jun 12 21:00:38.200: INFO: Versions found [{events.k8s.io/v1 v1}] -Jun 12 21:00:38.200: INFO: events.k8s.io/v1 matches events.k8s.io/v1 -Jun 12 21:00:38.200: INFO: Checking APIGroup: authentication.k8s.io -Jun 12 21:00:38.205: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 -Jun 12 21:00:38.205: INFO: Versions found [{authentication.k8s.io/v1 v1}] -Jun 12 21:00:38.205: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 -Jun 12 21:00:38.205: INFO: Checking APIGroup: authorization.k8s.io -Jun 12 21:00:38.211: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 -Jun 12 21:00:38.211: INFO: Versions found [{authorization.k8s.io/v1 v1}] -Jun 12 21:00:38.211: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 -Jun 12 21:00:38.211: INFO: Checking APIGroup: autoscaling -Jun 12 21:00:38.234: INFO: PreferredVersion.GroupVersion: autoscaling/v2 -Jun 12 21:00:38.234: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1}] -Jun 12 21:00:38.234: INFO: autoscaling/v2 matches autoscaling/v2 -Jun 12 21:00:38.234: INFO: Checking APIGroup: batch -Jun 12 21:00:38.238: INFO: 
PreferredVersion.GroupVersion: batch/v1 -Jun 12 21:00:38.239: INFO: Versions found [{batch/v1 v1}] -Jun 12 21:00:38.239: INFO: batch/v1 matches batch/v1 -Jun 12 21:00:38.239: INFO: Checking APIGroup: certificates.k8s.io -Jun 12 21:00:38.243: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 -Jun 12 21:00:38.244: INFO: Versions found [{certificates.k8s.io/v1 v1}] -Jun 12 21:00:38.244: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 -Jun 12 21:00:38.244: INFO: Checking APIGroup: networking.k8s.io -Jun 12 21:00:38.248: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 -Jun 12 21:00:38.248: INFO: Versions found [{networking.k8s.io/v1 v1}] -Jun 12 21:00:38.248: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 -Jun 12 21:00:38.248: INFO: Checking APIGroup: policy -Jun 12 21:00:38.251: INFO: PreferredVersion.GroupVersion: policy/v1 -Jun 12 21:00:38.252: INFO: Versions found [{policy/v1 v1}] -Jun 12 21:00:38.252: INFO: policy/v1 matches policy/v1 -Jun 12 21:00:38.252: INFO: Checking APIGroup: rbac.authorization.k8s.io -Jun 12 21:00:38.256: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 -Jun 12 21:00:38.256: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] -Jun 12 21:00:38.256: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 -Jun 12 21:00:38.257: INFO: Checking APIGroup: storage.k8s.io -Jun 12 21:00:38.318: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 -Jun 12 21:00:38.318: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] -Jun 12 21:00:38.318: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 -Jun 12 21:00:38.318: INFO: Checking APIGroup: admissionregistration.k8s.io -Jun 12 21:00:38.325: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 -Jun 12 21:00:38.325: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] -Jun 12 21:00:38.325: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 -Jun 12 21:00:38.325: INFO: Checking APIGroup: apiextensions.k8s.io -Jun 12 21:00:38.329: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 -Jun 12 21:00:38.329: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] -Jun 12 21:00:38.330: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 -Jun 12 21:00:38.330: INFO: Checking APIGroup: scheduling.k8s.io -Jun 12 21:00:38.351: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 -Jun 12 21:00:38.351: INFO: Versions found [{scheduling.k8s.io/v1 v1}] -Jun 12 21:00:38.351: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 -Jun 12 21:00:38.351: INFO: Checking APIGroup: coordination.k8s.io -Jun 12 21:00:38.366: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 -Jun 12 21:00:38.366: INFO: Versions found [{coordination.k8s.io/v1 v1}] -Jun 12 21:00:38.366: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 -Jun 12 21:00:38.366: INFO: Checking APIGroup: node.k8s.io -Jun 12 21:00:38.376: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 -Jun 12 21:00:38.376: INFO: Versions found [{node.k8s.io/v1 v1}] -Jun 12 21:00:38.377: INFO: node.k8s.io/v1 matches node.k8s.io/v1 -Jun 12 21:00:38.377: INFO: Checking APIGroup: discovery.k8s.io -Jun 12 21:00:38.391: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 -Jun 12 21:00:38.391: INFO: Versions found [{discovery.k8s.io/v1 v1}] -Jun 12 21:00:38.391: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 -Jun 12 21:00:38.391: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io -Jun 12 21:00:38.400: INFO: 
PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta3 -Jun 12 21:00:38.400: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta3 v1beta3} {flowcontrol.apiserver.k8s.io/v1beta2 v1beta2}] -Jun 12 21:00:38.400: INFO: flowcontrol.apiserver.k8s.io/v1beta3 matches flowcontrol.apiserver.k8s.io/v1beta3 -Jun 12 21:00:38.400: INFO: Checking APIGroup: apps.openshift.io -Jun 12 21:00:38.408: INFO: PreferredVersion.GroupVersion: apps.openshift.io/v1 -Jun 12 21:00:38.408: INFO: Versions found [{apps.openshift.io/v1 v1}] -Jun 12 21:00:38.408: INFO: apps.openshift.io/v1 matches apps.openshift.io/v1 -Jun 12 21:00:38.408: INFO: Checking APIGroup: authorization.openshift.io -Jun 12 21:00:38.415: INFO: PreferredVersion.GroupVersion: authorization.openshift.io/v1 -Jun 12 21:00:38.415: INFO: Versions found [{authorization.openshift.io/v1 v1}] -Jun 12 21:00:38.415: INFO: authorization.openshift.io/v1 matches authorization.openshift.io/v1 -Jun 12 21:00:38.415: INFO: Checking APIGroup: build.openshift.io -Jun 12 21:00:38.424: INFO: PreferredVersion.GroupVersion: build.openshift.io/v1 -Jun 12 21:00:38.424: INFO: Versions found [{build.openshift.io/v1 v1}] -Jun 12 21:00:38.424: INFO: build.openshift.io/v1 matches build.openshift.io/v1 -Jun 12 21:00:38.424: INFO: Checking APIGroup: image.openshift.io -Jun 12 21:00:38.433: INFO: PreferredVersion.GroupVersion: image.openshift.io/v1 -Jun 12 21:00:38.433: INFO: Versions found [{image.openshift.io/v1 v1}] -Jun 12 21:00:38.433: INFO: image.openshift.io/v1 matches image.openshift.io/v1 -Jun 12 21:00:38.433: INFO: Checking APIGroup: oauth.openshift.io -Jun 12 21:00:38.442: INFO: PreferredVersion.GroupVersion: oauth.openshift.io/v1 -Jun 12 21:00:38.442: INFO: Versions found [{oauth.openshift.io/v1 v1}] -Jun 12 21:00:38.442: INFO: oauth.openshift.io/v1 matches oauth.openshift.io/v1 -Jun 12 21:00:38.442: INFO: Checking APIGroup: project.openshift.io -Jun 12 21:00:38.451: INFO: PreferredVersion.GroupVersion: project.openshift.io/v1 -Jun 12 21:00:38.451: INFO: Versions found [{project.openshift.io/v1 v1}] -Jun 12 21:00:38.451: INFO: project.openshift.io/v1 matches project.openshift.io/v1 -Jun 12 21:00:38.451: INFO: Checking APIGroup: quota.openshift.io -Jun 12 21:00:38.461: INFO: PreferredVersion.GroupVersion: quota.openshift.io/v1 -Jun 12 21:00:38.461: INFO: Versions found [{quota.openshift.io/v1 v1}] -Jun 12 21:00:38.461: INFO: quota.openshift.io/v1 matches quota.openshift.io/v1 -Jun 12 21:00:38.461: INFO: Checking APIGroup: route.openshift.io -Jun 12 21:00:38.471: INFO: PreferredVersion.GroupVersion: route.openshift.io/v1 -Jun 12 21:00:38.471: INFO: Versions found [{route.openshift.io/v1 v1}] -Jun 12 21:00:38.471: INFO: route.openshift.io/v1 matches route.openshift.io/v1 -Jun 12 21:00:38.471: INFO: Checking APIGroup: security.openshift.io -Jun 12 21:00:38.481: INFO: PreferredVersion.GroupVersion: security.openshift.io/v1 -Jun 12 21:00:38.481: INFO: Versions found [{security.openshift.io/v1 v1}] -Jun 12 21:00:38.481: INFO: security.openshift.io/v1 matches security.openshift.io/v1 -Jun 12 21:00:38.481: INFO: Checking APIGroup: template.openshift.io -Jun 12 21:00:38.488: INFO: PreferredVersion.GroupVersion: template.openshift.io/v1 -Jun 12 21:00:38.488: INFO: Versions found [{template.openshift.io/v1 v1}] -Jun 12 21:00:38.488: INFO: template.openshift.io/v1 matches template.openshift.io/v1 -Jun 12 21:00:38.488: INFO: Checking APIGroup: user.openshift.io -Jun 12 21:00:38.504: INFO: PreferredVersion.GroupVersion: user.openshift.io/v1 -Jun 12 21:00:38.504: INFO: 
Versions found [{user.openshift.io/v1 v1}] -Jun 12 21:00:38.504: INFO: user.openshift.io/v1 matches user.openshift.io/v1 -Jun 12 21:00:38.504: INFO: Checking APIGroup: packages.operators.coreos.com -Jun 12 21:00:38.521: INFO: PreferredVersion.GroupVersion: packages.operators.coreos.com/v1 -Jun 12 21:00:38.521: INFO: Versions found [{packages.operators.coreos.com/v1 v1}] -Jun 12 21:00:38.521: INFO: packages.operators.coreos.com/v1 matches packages.operators.coreos.com/v1 -Jun 12 21:00:38.521: INFO: Checking APIGroup: config.openshift.io -Jun 12 21:00:38.535: INFO: PreferredVersion.GroupVersion: config.openshift.io/v1 -Jun 12 21:00:38.535: INFO: Versions found [{config.openshift.io/v1 v1}] -Jun 12 21:00:38.535: INFO: config.openshift.io/v1 matches config.openshift.io/v1 -Jun 12 21:00:38.535: INFO: Checking APIGroup: operator.openshift.io -Jun 12 21:00:38.542: INFO: PreferredVersion.GroupVersion: operator.openshift.io/v1 -Jun 12 21:00:38.542: INFO: Versions found [{operator.openshift.io/v1 v1} {operator.openshift.io/v1alpha1 v1alpha1}] -Jun 12 21:00:38.542: INFO: operator.openshift.io/v1 matches operator.openshift.io/v1 -Jun 12 21:00:38.542: INFO: Checking APIGroup: apiserver.openshift.io -Jun 12 21:00:38.553: INFO: PreferredVersion.GroupVersion: apiserver.openshift.io/v1 -Jun 12 21:00:38.553: INFO: Versions found [{apiserver.openshift.io/v1 v1}] -Jun 12 21:00:38.553: INFO: apiserver.openshift.io/v1 matches apiserver.openshift.io/v1 -Jun 12 21:00:38.553: INFO: Checking APIGroup: cloudcredential.openshift.io -Jun 12 21:00:38.566: INFO: PreferredVersion.GroupVersion: cloudcredential.openshift.io/v1 -Jun 12 21:00:38.566: INFO: Versions found [{cloudcredential.openshift.io/v1 v1}] -Jun 12 21:00:38.566: INFO: cloudcredential.openshift.io/v1 matches cloudcredential.openshift.io/v1 -Jun 12 21:00:38.566: INFO: Checking APIGroup: console.openshift.io -Jun 12 21:00:38.577: INFO: PreferredVersion.GroupVersion: console.openshift.io/v1 -Jun 12 21:00:38.578: INFO: Versions found [{console.openshift.io/v1 v1} {console.openshift.io/v1alpha1 v1alpha1}] -Jun 12 21:00:38.578: INFO: console.openshift.io/v1 matches console.openshift.io/v1 -Jun 12 21:00:38.578: INFO: Checking APIGroup: crd.projectcalico.org -Jun 12 21:00:38.602: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 -Jun 12 21:00:38.602: INFO: Versions found [{crd.projectcalico.org/v1 v1}] -Jun 12 21:00:38.602: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 -Jun 12 21:00:38.602: INFO: Checking APIGroup: imageregistry.operator.openshift.io -Jun 12 21:00:38.614: INFO: PreferredVersion.GroupVersion: imageregistry.operator.openshift.io/v1 -Jun 12 21:00:38.614: INFO: Versions found [{imageregistry.operator.openshift.io/v1 v1}] -Jun 12 21:00:38.614: INFO: imageregistry.operator.openshift.io/v1 matches imageregistry.operator.openshift.io/v1 -Jun 12 21:00:38.614: INFO: Checking APIGroup: ingress.operator.openshift.io -Jun 12 21:00:38.621: INFO: PreferredVersion.GroupVersion: ingress.operator.openshift.io/v1 -Jun 12 21:00:38.622: INFO: Versions found [{ingress.operator.openshift.io/v1 v1}] -Jun 12 21:00:38.622: INFO: ingress.operator.openshift.io/v1 matches ingress.operator.openshift.io/v1 -Jun 12 21:00:38.622: INFO: Checking APIGroup: k8s.cni.cncf.io -Jun 12 21:00:38.628: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 -Jun 12 21:00:38.628: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] -Jun 12 21:00:38.628: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 -Jun 12 21:00:38.628: INFO: Checking APIGroup: 
machineconfiguration.openshift.io -Jun 12 21:00:38.632: INFO: PreferredVersion.GroupVersion: machineconfiguration.openshift.io/v1 -Jun 12 21:00:38.633: INFO: Versions found [{machineconfiguration.openshift.io/v1 v1}] -Jun 12 21:00:38.633: INFO: machineconfiguration.openshift.io/v1 matches machineconfiguration.openshift.io/v1 -Jun 12 21:00:38.633: INFO: Checking APIGroup: monitoring.coreos.com -Jun 12 21:00:38.643: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 -Jun 12 21:00:38.643: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1beta1 v1beta1} {monitoring.coreos.com/v1alpha1 v1alpha1}] -Jun 12 21:00:38.643: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 -Jun 12 21:00:38.643: INFO: Checking APIGroup: network.operator.openshift.io -Jun 12 21:00:38.651: INFO: PreferredVersion.GroupVersion: network.operator.openshift.io/v1 -Jun 12 21:00:38.651: INFO: Versions found [{network.operator.openshift.io/v1 v1}] -Jun 12 21:00:38.651: INFO: network.operator.openshift.io/v1 matches network.operator.openshift.io/v1 -Jun 12 21:00:38.651: INFO: Checking APIGroup: operator.tigera.io -Jun 12 21:00:38.660: INFO: PreferredVersion.GroupVersion: operator.tigera.io/v1 -Jun 12 21:00:38.660: INFO: Versions found [{operator.tigera.io/v1 v1}] -Jun 12 21:00:38.660: INFO: operator.tigera.io/v1 matches operator.tigera.io/v1 -Jun 12 21:00:38.660: INFO: Checking APIGroup: operators.coreos.com -Jun 12 21:00:38.703: INFO: PreferredVersion.GroupVersion: operators.coreos.com/v2 -Jun 12 21:00:38.772: INFO: Versions found [{operators.coreos.com/v2 v2} {operators.coreos.com/v1 v1} {operators.coreos.com/v1alpha2 v1alpha2} {operators.coreos.com/v1alpha1 v1alpha1}] -Jun 12 21:00:38.776: INFO: operators.coreos.com/v2 matches operators.coreos.com/v2 -Jun 12 21:00:38.776: INFO: Checking APIGroup: performance.openshift.io -Jun 12 21:00:38.805: INFO: PreferredVersion.GroupVersion: performance.openshift.io/v2 -Jun 12 21:00:38.805: INFO: Versions found [{performance.openshift.io/v2 v2} {performance.openshift.io/v1 v1} {performance.openshift.io/v1alpha1 v1alpha1}] -Jun 12 21:00:38.805: INFO: performance.openshift.io/v2 matches performance.openshift.io/v2 -Jun 12 21:00:38.805: INFO: Checking APIGroup: samples.operator.openshift.io -Jun 12 21:00:38.814: INFO: PreferredVersion.GroupVersion: samples.operator.openshift.io/v1 -Jun 12 21:00:38.814: INFO: Versions found [{samples.operator.openshift.io/v1 v1}] -Jun 12 21:00:38.814: INFO: samples.operator.openshift.io/v1 matches samples.operator.openshift.io/v1 -Jun 12 21:00:38.814: INFO: Checking APIGroup: security.internal.openshift.io -Jun 12 21:00:38.825: INFO: PreferredVersion.GroupVersion: security.internal.openshift.io/v1 -Jun 12 21:00:38.825: INFO: Versions found [{security.internal.openshift.io/v1 v1}] -Jun 12 21:00:38.825: INFO: security.internal.openshift.io/v1 matches security.internal.openshift.io/v1 -Jun 12 21:00:38.825: INFO: Checking APIGroup: snapshot.storage.k8s.io -Jun 12 21:00:38.831: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1 -Jun 12 21:00:38.831: INFO: Versions found [{snapshot.storage.k8s.io/v1 v1}] -Jun 12 21:00:38.831: INFO: snapshot.storage.k8s.io/v1 matches snapshot.storage.k8s.io/v1 -Jun 12 21:00:38.831: INFO: Checking APIGroup: tuned.openshift.io -Jun 12 21:00:38.838: INFO: PreferredVersion.GroupVersion: tuned.openshift.io/v1 -Jun 12 21:00:38.838: INFO: Versions found [{tuned.openshift.io/v1 v1}] -Jun 12 21:00:38.838: INFO: tuned.openshift.io/v1 matches tuned.openshift.io/v1 -Jun 12 
21:00:38.839: INFO: Checking APIGroup: controlplane.operator.openshift.io -Jun 12 21:00:38.845: INFO: PreferredVersion.GroupVersion: controlplane.operator.openshift.io/v1alpha1 -Jun 12 21:00:38.846: INFO: Versions found [{controlplane.operator.openshift.io/v1alpha1 v1alpha1}] -Jun 12 21:00:38.846: INFO: controlplane.operator.openshift.io/v1alpha1 matches controlplane.operator.openshift.io/v1alpha1 -Jun 12 21:00:38.846: INFO: Checking APIGroup: ibm.com -Jun 12 21:00:38.855: INFO: PreferredVersion.GroupVersion: ibm.com/v1alpha1 -Jun 12 21:00:38.855: INFO: Versions found [{ibm.com/v1alpha1 v1alpha1}] -Jun 12 21:00:38.855: INFO: ibm.com/v1alpha1 matches ibm.com/v1alpha1 -Jun 12 21:00:38.855: INFO: Checking APIGroup: migration.k8s.io -Jun 12 21:00:38.861: INFO: PreferredVersion.GroupVersion: migration.k8s.io/v1alpha1 -Jun 12 21:00:38.861: INFO: Versions found [{migration.k8s.io/v1alpha1 v1alpha1}] -Jun 12 21:00:38.862: INFO: migration.k8s.io/v1alpha1 matches migration.k8s.io/v1alpha1 -Jun 12 21:00:38.862: INFO: Checking APIGroup: whereabouts.cni.cncf.io -Jun 12 21:00:38.871: INFO: PreferredVersion.GroupVersion: whereabouts.cni.cncf.io/v1alpha1 -Jun 12 21:00:38.871: INFO: Versions found [{whereabouts.cni.cncf.io/v1alpha1 v1alpha1}] -Jun 12 21:00:38.871: INFO: whereabouts.cni.cncf.io/v1alpha1 matches whereabouts.cni.cncf.io/v1alpha1 -Jun 12 21:00:38.872: INFO: Checking APIGroup: helm.openshift.io -Jun 12 21:00:38.886: INFO: PreferredVersion.GroupVersion: helm.openshift.io/v1beta1 -Jun 12 21:00:38.887: INFO: Versions found [{helm.openshift.io/v1beta1 v1beta1}] -Jun 12 21:00:38.887: INFO: helm.openshift.io/v1beta1 matches helm.openshift.io/v1beta1 -Jun 12 21:00:38.887: INFO: Checking APIGroup: metrics.k8s.io -Jun 12 21:00:38.900: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 -Jun 12 21:00:38.900: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] -Jun 12 21:00:38.900: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 -[AfterEach] [sig-api-machinery] Discovery +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 +Jul 27 01:47:28.690: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: creating the pod 07/27/23 01:47:28.691 +STEP: submitting the pod to kubernetes 07/27/23 01:47:28.691 +Jul 27 01:47:28.718: INFO: Waiting up to 5m0s for pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac" in namespace "pods-9428" to be "running and ready" +Jul 27 01:47:28.733: INFO: Pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac": Phase="Pending", Reason="", readiness=false. Elapsed: 14.831933ms +Jul 27 01:47:28.733: INFO: The phase of Pod pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:47:30.754: INFO: Pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035794558s +Jul 27 01:47:30.754: INFO: The phase of Pod pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:47:32.755: INFO: Pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.036340186s +Jul 27 01:47:32.755: INFO: The phase of Pod pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:47:34.748: INFO: Pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac": Phase="Running", Reason="", readiness=true. Elapsed: 6.029254265s +Jul 27 01:47:34.748: INFO: The phase of Pod pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac is Running (Ready = true) +Jul 27 01:47:34.748: INFO: Pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac" satisfied condition "running and ready" +[AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 -Jun 12 21:00:38.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Discovery +Jul 27 01:47:34.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Discovery +[DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Discovery +[DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 -STEP: Destroying namespace "discovery-1827" for this suite. 06/12/23 21:00:38.922 +STEP: Destroying namespace "pods-9428" for this suite. 07/27/23 01:47:34.918 ------------------------------ -• [4.113 seconds] -[sig-api-machinery] Discovery -test/e2e/apimachinery/framework.go:23 - should validate PreferredVersion for each APIGroup [Conformance] - test/e2e/apimachinery/discovery.go:122 +• [SLOW TEST] [6.326 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Discovery + [BeforeEach] [sig-node] Pods set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:00:34.822 - Jun 12 21:00:34.822: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename discovery 06/12/23 21:00:34.825 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:00:35.147 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:00:35.303 - [BeforeEach] [sig-api-machinery] Discovery + STEP: Creating a kubernetes client 07/27/23 01:47:28.619 + Jul 27 01:47:28.619: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 01:47:28.62 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:28.661 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:28.676 + [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] Discovery - test/e2e/apimachinery/discovery.go:43 - STEP: Setting up server cert 06/12/23 21:00:35.329 - [It] should validate PreferredVersion for each APIGroup [Conformance] - test/e2e/apimachinery/discovery.go:122 - Jun 12 21:00:38.024: INFO: Checking APIGroup: apiregistration.k8s.io - Jun 12 21:00:38.170: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 - Jun 12 21:00:38.170: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] - Jun 12 21:00:38.170: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 - Jun 12 21:00:38.170: INFO: Checking APIGroup: apps - Jun 12 21:00:38.194: INFO: 
PreferredVersion.GroupVersion: apps/v1 - Jun 12 21:00:38.194: INFO: Versions found [{apps/v1 v1}] - Jun 12 21:00:38.194: INFO: apps/v1 matches apps/v1 - Jun 12 21:00:38.194: INFO: Checking APIGroup: events.k8s.io - Jun 12 21:00:38.200: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 - Jun 12 21:00:38.200: INFO: Versions found [{events.k8s.io/v1 v1}] - Jun 12 21:00:38.200: INFO: events.k8s.io/v1 matches events.k8s.io/v1 - Jun 12 21:00:38.200: INFO: Checking APIGroup: authentication.k8s.io - Jun 12 21:00:38.205: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 - Jun 12 21:00:38.205: INFO: Versions found [{authentication.k8s.io/v1 v1}] - Jun 12 21:00:38.205: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 - Jun 12 21:00:38.205: INFO: Checking APIGroup: authorization.k8s.io - Jun 12 21:00:38.211: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 - Jun 12 21:00:38.211: INFO: Versions found [{authorization.k8s.io/v1 v1}] - Jun 12 21:00:38.211: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 - Jun 12 21:00:38.211: INFO: Checking APIGroup: autoscaling - Jun 12 21:00:38.234: INFO: PreferredVersion.GroupVersion: autoscaling/v2 - Jun 12 21:00:38.234: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1}] - Jun 12 21:00:38.234: INFO: autoscaling/v2 matches autoscaling/v2 - Jun 12 21:00:38.234: INFO: Checking APIGroup: batch - Jun 12 21:00:38.238: INFO: PreferredVersion.GroupVersion: batch/v1 - Jun 12 21:00:38.239: INFO: Versions found [{batch/v1 v1}] - Jun 12 21:00:38.239: INFO: batch/v1 matches batch/v1 - Jun 12 21:00:38.239: INFO: Checking APIGroup: certificates.k8s.io - Jun 12 21:00:38.243: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 - Jun 12 21:00:38.244: INFO: Versions found [{certificates.k8s.io/v1 v1}] - Jun 12 21:00:38.244: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 - Jun 12 21:00:38.244: INFO: Checking APIGroup: networking.k8s.io - Jun 12 21:00:38.248: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 - Jun 12 21:00:38.248: INFO: Versions found [{networking.k8s.io/v1 v1}] - Jun 12 21:00:38.248: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 - Jun 12 21:00:38.248: INFO: Checking APIGroup: policy - Jun 12 21:00:38.251: INFO: PreferredVersion.GroupVersion: policy/v1 - Jun 12 21:00:38.252: INFO: Versions found [{policy/v1 v1}] - Jun 12 21:00:38.252: INFO: policy/v1 matches policy/v1 - Jun 12 21:00:38.252: INFO: Checking APIGroup: rbac.authorization.k8s.io - Jun 12 21:00:38.256: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 - Jun 12 21:00:38.256: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] - Jun 12 21:00:38.256: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 - Jun 12 21:00:38.257: INFO: Checking APIGroup: storage.k8s.io - Jun 12 21:00:38.318: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 - Jun 12 21:00:38.318: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] - Jun 12 21:00:38.318: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 - Jun 12 21:00:38.318: INFO: Checking APIGroup: admissionregistration.k8s.io - Jun 12 21:00:38.325: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 - Jun 12 21:00:38.325: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] - Jun 12 21:00:38.325: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 - Jun 12 21:00:38.325: INFO: Checking APIGroup: apiextensions.k8s.io - Jun 12 21:00:38.329: INFO: 
PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 - Jun 12 21:00:38.329: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] - Jun 12 21:00:38.330: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 - Jun 12 21:00:38.330: INFO: Checking APIGroup: scheduling.k8s.io - Jun 12 21:00:38.351: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 - Jun 12 21:00:38.351: INFO: Versions found [{scheduling.k8s.io/v1 v1}] - Jun 12 21:00:38.351: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 - Jun 12 21:00:38.351: INFO: Checking APIGroup: coordination.k8s.io - Jun 12 21:00:38.366: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 - Jun 12 21:00:38.366: INFO: Versions found [{coordination.k8s.io/v1 v1}] - Jun 12 21:00:38.366: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 - Jun 12 21:00:38.366: INFO: Checking APIGroup: node.k8s.io - Jun 12 21:00:38.376: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 - Jun 12 21:00:38.376: INFO: Versions found [{node.k8s.io/v1 v1}] - Jun 12 21:00:38.377: INFO: node.k8s.io/v1 matches node.k8s.io/v1 - Jun 12 21:00:38.377: INFO: Checking APIGroup: discovery.k8s.io - Jun 12 21:00:38.391: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 - Jun 12 21:00:38.391: INFO: Versions found [{discovery.k8s.io/v1 v1}] - Jun 12 21:00:38.391: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 - Jun 12 21:00:38.391: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io - Jun 12 21:00:38.400: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta3 - Jun 12 21:00:38.400: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta3 v1beta3} {flowcontrol.apiserver.k8s.io/v1beta2 v1beta2}] - Jun 12 21:00:38.400: INFO: flowcontrol.apiserver.k8s.io/v1beta3 matches flowcontrol.apiserver.k8s.io/v1beta3 - Jun 12 21:00:38.400: INFO: Checking APIGroup: apps.openshift.io - Jun 12 21:00:38.408: INFO: PreferredVersion.GroupVersion: apps.openshift.io/v1 - Jun 12 21:00:38.408: INFO: Versions found [{apps.openshift.io/v1 v1}] - Jun 12 21:00:38.408: INFO: apps.openshift.io/v1 matches apps.openshift.io/v1 - Jun 12 21:00:38.408: INFO: Checking APIGroup: authorization.openshift.io - Jun 12 21:00:38.415: INFO: PreferredVersion.GroupVersion: authorization.openshift.io/v1 - Jun 12 21:00:38.415: INFO: Versions found [{authorization.openshift.io/v1 v1}] - Jun 12 21:00:38.415: INFO: authorization.openshift.io/v1 matches authorization.openshift.io/v1 - Jun 12 21:00:38.415: INFO: Checking APIGroup: build.openshift.io - Jun 12 21:00:38.424: INFO: PreferredVersion.GroupVersion: build.openshift.io/v1 - Jun 12 21:00:38.424: INFO: Versions found [{build.openshift.io/v1 v1}] - Jun 12 21:00:38.424: INFO: build.openshift.io/v1 matches build.openshift.io/v1 - Jun 12 21:00:38.424: INFO: Checking APIGroup: image.openshift.io - Jun 12 21:00:38.433: INFO: PreferredVersion.GroupVersion: image.openshift.io/v1 - Jun 12 21:00:38.433: INFO: Versions found [{image.openshift.io/v1 v1}] - Jun 12 21:00:38.433: INFO: image.openshift.io/v1 matches image.openshift.io/v1 - Jun 12 21:00:38.433: INFO: Checking APIGroup: oauth.openshift.io - Jun 12 21:00:38.442: INFO: PreferredVersion.GroupVersion: oauth.openshift.io/v1 - Jun 12 21:00:38.442: INFO: Versions found [{oauth.openshift.io/v1 v1}] - Jun 12 21:00:38.442: INFO: oauth.openshift.io/v1 matches oauth.openshift.io/v1 - Jun 12 21:00:38.442: INFO: Checking APIGroup: project.openshift.io - Jun 12 21:00:38.451: INFO: PreferredVersion.GroupVersion: project.openshift.io/v1 - Jun 12 21:00:38.451: INFO: Versions found 
[{project.openshift.io/v1 v1}] - Jun 12 21:00:38.451: INFO: project.openshift.io/v1 matches project.openshift.io/v1 - Jun 12 21:00:38.451: INFO: Checking APIGroup: quota.openshift.io - Jun 12 21:00:38.461: INFO: PreferredVersion.GroupVersion: quota.openshift.io/v1 - Jun 12 21:00:38.461: INFO: Versions found [{quota.openshift.io/v1 v1}] - Jun 12 21:00:38.461: INFO: quota.openshift.io/v1 matches quota.openshift.io/v1 - Jun 12 21:00:38.461: INFO: Checking APIGroup: route.openshift.io - Jun 12 21:00:38.471: INFO: PreferredVersion.GroupVersion: route.openshift.io/v1 - Jun 12 21:00:38.471: INFO: Versions found [{route.openshift.io/v1 v1}] - Jun 12 21:00:38.471: INFO: route.openshift.io/v1 matches route.openshift.io/v1 - Jun 12 21:00:38.471: INFO: Checking APIGroup: security.openshift.io - Jun 12 21:00:38.481: INFO: PreferredVersion.GroupVersion: security.openshift.io/v1 - Jun 12 21:00:38.481: INFO: Versions found [{security.openshift.io/v1 v1}] - Jun 12 21:00:38.481: INFO: security.openshift.io/v1 matches security.openshift.io/v1 - Jun 12 21:00:38.481: INFO: Checking APIGroup: template.openshift.io - Jun 12 21:00:38.488: INFO: PreferredVersion.GroupVersion: template.openshift.io/v1 - Jun 12 21:00:38.488: INFO: Versions found [{template.openshift.io/v1 v1}] - Jun 12 21:00:38.488: INFO: template.openshift.io/v1 matches template.openshift.io/v1 - Jun 12 21:00:38.488: INFO: Checking APIGroup: user.openshift.io - Jun 12 21:00:38.504: INFO: PreferredVersion.GroupVersion: user.openshift.io/v1 - Jun 12 21:00:38.504: INFO: Versions found [{user.openshift.io/v1 v1}] - Jun 12 21:00:38.504: INFO: user.openshift.io/v1 matches user.openshift.io/v1 - Jun 12 21:00:38.504: INFO: Checking APIGroup: packages.operators.coreos.com - Jun 12 21:00:38.521: INFO: PreferredVersion.GroupVersion: packages.operators.coreos.com/v1 - Jun 12 21:00:38.521: INFO: Versions found [{packages.operators.coreos.com/v1 v1}] - Jun 12 21:00:38.521: INFO: packages.operators.coreos.com/v1 matches packages.operators.coreos.com/v1 - Jun 12 21:00:38.521: INFO: Checking APIGroup: config.openshift.io - Jun 12 21:00:38.535: INFO: PreferredVersion.GroupVersion: config.openshift.io/v1 - Jun 12 21:00:38.535: INFO: Versions found [{config.openshift.io/v1 v1}] - Jun 12 21:00:38.535: INFO: config.openshift.io/v1 matches config.openshift.io/v1 - Jun 12 21:00:38.535: INFO: Checking APIGroup: operator.openshift.io - Jun 12 21:00:38.542: INFO: PreferredVersion.GroupVersion: operator.openshift.io/v1 - Jun 12 21:00:38.542: INFO: Versions found [{operator.openshift.io/v1 v1} {operator.openshift.io/v1alpha1 v1alpha1}] - Jun 12 21:00:38.542: INFO: operator.openshift.io/v1 matches operator.openshift.io/v1 - Jun 12 21:00:38.542: INFO: Checking APIGroup: apiserver.openshift.io - Jun 12 21:00:38.553: INFO: PreferredVersion.GroupVersion: apiserver.openshift.io/v1 - Jun 12 21:00:38.553: INFO: Versions found [{apiserver.openshift.io/v1 v1}] - Jun 12 21:00:38.553: INFO: apiserver.openshift.io/v1 matches apiserver.openshift.io/v1 - Jun 12 21:00:38.553: INFO: Checking APIGroup: cloudcredential.openshift.io - Jun 12 21:00:38.566: INFO: PreferredVersion.GroupVersion: cloudcredential.openshift.io/v1 - Jun 12 21:00:38.566: INFO: Versions found [{cloudcredential.openshift.io/v1 v1}] - Jun 12 21:00:38.566: INFO: cloudcredential.openshift.io/v1 matches cloudcredential.openshift.io/v1 - Jun 12 21:00:38.566: INFO: Checking APIGroup: console.openshift.io - Jun 12 21:00:38.577: INFO: PreferredVersion.GroupVersion: console.openshift.io/v1 - Jun 12 21:00:38.578: INFO: Versions found 
[{console.openshift.io/v1 v1} {console.openshift.io/v1alpha1 v1alpha1}] - Jun 12 21:00:38.578: INFO: console.openshift.io/v1 matches console.openshift.io/v1 - Jun 12 21:00:38.578: INFO: Checking APIGroup: crd.projectcalico.org - Jun 12 21:00:38.602: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 - Jun 12 21:00:38.602: INFO: Versions found [{crd.projectcalico.org/v1 v1}] - Jun 12 21:00:38.602: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 - Jun 12 21:00:38.602: INFO: Checking APIGroup: imageregistry.operator.openshift.io - Jun 12 21:00:38.614: INFO: PreferredVersion.GroupVersion: imageregistry.operator.openshift.io/v1 - Jun 12 21:00:38.614: INFO: Versions found [{imageregistry.operator.openshift.io/v1 v1}] - Jun 12 21:00:38.614: INFO: imageregistry.operator.openshift.io/v1 matches imageregistry.operator.openshift.io/v1 - Jun 12 21:00:38.614: INFO: Checking APIGroup: ingress.operator.openshift.io - Jun 12 21:00:38.621: INFO: PreferredVersion.GroupVersion: ingress.operator.openshift.io/v1 - Jun 12 21:00:38.622: INFO: Versions found [{ingress.operator.openshift.io/v1 v1}] - Jun 12 21:00:38.622: INFO: ingress.operator.openshift.io/v1 matches ingress.operator.openshift.io/v1 - Jun 12 21:00:38.622: INFO: Checking APIGroup: k8s.cni.cncf.io - Jun 12 21:00:38.628: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 - Jun 12 21:00:38.628: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] - Jun 12 21:00:38.628: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 - Jun 12 21:00:38.628: INFO: Checking APIGroup: machineconfiguration.openshift.io - Jun 12 21:00:38.632: INFO: PreferredVersion.GroupVersion: machineconfiguration.openshift.io/v1 - Jun 12 21:00:38.633: INFO: Versions found [{machineconfiguration.openshift.io/v1 v1}] - Jun 12 21:00:38.633: INFO: machineconfiguration.openshift.io/v1 matches machineconfiguration.openshift.io/v1 - Jun 12 21:00:38.633: INFO: Checking APIGroup: monitoring.coreos.com - Jun 12 21:00:38.643: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 - Jun 12 21:00:38.643: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1beta1 v1beta1} {monitoring.coreos.com/v1alpha1 v1alpha1}] - Jun 12 21:00:38.643: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 - Jun 12 21:00:38.643: INFO: Checking APIGroup: network.operator.openshift.io - Jun 12 21:00:38.651: INFO: PreferredVersion.GroupVersion: network.operator.openshift.io/v1 - Jun 12 21:00:38.651: INFO: Versions found [{network.operator.openshift.io/v1 v1}] - Jun 12 21:00:38.651: INFO: network.operator.openshift.io/v1 matches network.operator.openshift.io/v1 - Jun 12 21:00:38.651: INFO: Checking APIGroup: operator.tigera.io - Jun 12 21:00:38.660: INFO: PreferredVersion.GroupVersion: operator.tigera.io/v1 - Jun 12 21:00:38.660: INFO: Versions found [{operator.tigera.io/v1 v1}] - Jun 12 21:00:38.660: INFO: operator.tigera.io/v1 matches operator.tigera.io/v1 - Jun 12 21:00:38.660: INFO: Checking APIGroup: operators.coreos.com - Jun 12 21:00:38.703: INFO: PreferredVersion.GroupVersion: operators.coreos.com/v2 - Jun 12 21:00:38.772: INFO: Versions found [{operators.coreos.com/v2 v2} {operators.coreos.com/v1 v1} {operators.coreos.com/v1alpha2 v1alpha2} {operators.coreos.com/v1alpha1 v1alpha1}] - Jun 12 21:00:38.776: INFO: operators.coreos.com/v2 matches operators.coreos.com/v2 - Jun 12 21:00:38.776: INFO: Checking APIGroup: performance.openshift.io - Jun 12 21:00:38.805: INFO: PreferredVersion.GroupVersion: performance.openshift.io/v2 - Jun 12 
21:00:38.805: INFO: Versions found [{performance.openshift.io/v2 v2} {performance.openshift.io/v1 v1} {performance.openshift.io/v1alpha1 v1alpha1}] - Jun 12 21:00:38.805: INFO: performance.openshift.io/v2 matches performance.openshift.io/v2 - Jun 12 21:00:38.805: INFO: Checking APIGroup: samples.operator.openshift.io - Jun 12 21:00:38.814: INFO: PreferredVersion.GroupVersion: samples.operator.openshift.io/v1 - Jun 12 21:00:38.814: INFO: Versions found [{samples.operator.openshift.io/v1 v1}] - Jun 12 21:00:38.814: INFO: samples.operator.openshift.io/v1 matches samples.operator.openshift.io/v1 - Jun 12 21:00:38.814: INFO: Checking APIGroup: security.internal.openshift.io - Jun 12 21:00:38.825: INFO: PreferredVersion.GroupVersion: security.internal.openshift.io/v1 - Jun 12 21:00:38.825: INFO: Versions found [{security.internal.openshift.io/v1 v1}] - Jun 12 21:00:38.825: INFO: security.internal.openshift.io/v1 matches security.internal.openshift.io/v1 - Jun 12 21:00:38.825: INFO: Checking APIGroup: snapshot.storage.k8s.io - Jun 12 21:00:38.831: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1 - Jun 12 21:00:38.831: INFO: Versions found [{snapshot.storage.k8s.io/v1 v1}] - Jun 12 21:00:38.831: INFO: snapshot.storage.k8s.io/v1 matches snapshot.storage.k8s.io/v1 - Jun 12 21:00:38.831: INFO: Checking APIGroup: tuned.openshift.io - Jun 12 21:00:38.838: INFO: PreferredVersion.GroupVersion: tuned.openshift.io/v1 - Jun 12 21:00:38.838: INFO: Versions found [{tuned.openshift.io/v1 v1}] - Jun 12 21:00:38.838: INFO: tuned.openshift.io/v1 matches tuned.openshift.io/v1 - Jun 12 21:00:38.839: INFO: Checking APIGroup: controlplane.operator.openshift.io - Jun 12 21:00:38.845: INFO: PreferredVersion.GroupVersion: controlplane.operator.openshift.io/v1alpha1 - Jun 12 21:00:38.846: INFO: Versions found [{controlplane.operator.openshift.io/v1alpha1 v1alpha1}] - Jun 12 21:00:38.846: INFO: controlplane.operator.openshift.io/v1alpha1 matches controlplane.operator.openshift.io/v1alpha1 - Jun 12 21:00:38.846: INFO: Checking APIGroup: ibm.com - Jun 12 21:00:38.855: INFO: PreferredVersion.GroupVersion: ibm.com/v1alpha1 - Jun 12 21:00:38.855: INFO: Versions found [{ibm.com/v1alpha1 v1alpha1}] - Jun 12 21:00:38.855: INFO: ibm.com/v1alpha1 matches ibm.com/v1alpha1 - Jun 12 21:00:38.855: INFO: Checking APIGroup: migration.k8s.io - Jun 12 21:00:38.861: INFO: PreferredVersion.GroupVersion: migration.k8s.io/v1alpha1 - Jun 12 21:00:38.861: INFO: Versions found [{migration.k8s.io/v1alpha1 v1alpha1}] - Jun 12 21:00:38.862: INFO: migration.k8s.io/v1alpha1 matches migration.k8s.io/v1alpha1 - Jun 12 21:00:38.862: INFO: Checking APIGroup: whereabouts.cni.cncf.io - Jun 12 21:00:38.871: INFO: PreferredVersion.GroupVersion: whereabouts.cni.cncf.io/v1alpha1 - Jun 12 21:00:38.871: INFO: Versions found [{whereabouts.cni.cncf.io/v1alpha1 v1alpha1}] - Jun 12 21:00:38.871: INFO: whereabouts.cni.cncf.io/v1alpha1 matches whereabouts.cni.cncf.io/v1alpha1 - Jun 12 21:00:38.872: INFO: Checking APIGroup: helm.openshift.io - Jun 12 21:00:38.886: INFO: PreferredVersion.GroupVersion: helm.openshift.io/v1beta1 - Jun 12 21:00:38.887: INFO: Versions found [{helm.openshift.io/v1beta1 v1beta1}] - Jun 12 21:00:38.887: INFO: helm.openshift.io/v1beta1 matches helm.openshift.io/v1beta1 - Jun 12 21:00:38.887: INFO: Checking APIGroup: metrics.k8s.io - Jun 12 21:00:38.900: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 - Jun 12 21:00:38.900: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] - Jun 12 21:00:38.900: INFO: 
metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 - [AfterEach] [sig-api-machinery] Discovery + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 + Jul 27 01:47:28.690: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: creating the pod 07/27/23 01:47:28.691 + STEP: submitting the pod to kubernetes 07/27/23 01:47:28.691 + Jul 27 01:47:28.718: INFO: Waiting up to 5m0s for pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac" in namespace "pods-9428" to be "running and ready" + Jul 27 01:47:28.733: INFO: Pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac": Phase="Pending", Reason="", readiness=false. Elapsed: 14.831933ms + Jul 27 01:47:28.733: INFO: The phase of Pod pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:47:30.754: INFO: Pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035794558s + Jul 27 01:47:30.754: INFO: The phase of Pod pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:47:32.755: INFO: Pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036340186s + Jul 27 01:47:32.755: INFO: The phase of Pod pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:47:34.748: INFO: Pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac": Phase="Running", Reason="", readiness=true. Elapsed: 6.029254265s + Jul 27 01:47:34.748: INFO: The phase of Pod pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac is Running (Ready = true) + Jul 27 01:47:34.748: INFO: Pod "pod-logs-websocket-a0878d00-5e06-47ee-8a90-dd1a4ad78bac" satisfied condition "running and ready" + [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 - Jun 12 21:00:38.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Discovery + Jul 27 01:47:34.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Discovery + [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Discovery + [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 - STEP: Destroying namespace "discovery-1827" for this suite. 06/12/23 21:00:38.922 + STEP: Destroying namespace "pods-9428" for this suite. 
07/27/23 01:47:34.918 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSSSSSSSS ------------------------------ -[sig-network] Services - should be able to change the type from ExternalName to NodePort [Conformance] - test/e2e/network/service.go:1477 -[BeforeEach] [sig-network] Services +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:175 +[BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:00:38.939 -Jun 12 21:00:38.939: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:00:38.945 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:00:39 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:00:39.019 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 01:47:34.953 +Jul 27 01:47:34.953: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 01:47:34.954 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:35.005 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:35.017 +[BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should be able to change the type from ExternalName to NodePort [Conformance] - test/e2e/network/service.go:1477 -STEP: creating a service externalname-service with the type=ExternalName in namespace services-973 06/12/23 21:00:39.037 -STEP: changing the ExternalName service to type=NodePort 06/12/23 21:00:39.075 -STEP: creating replication controller externalname-service in namespace services-973 06/12/23 21:00:39.183 -I0612 21:00:39.211185 23 runners.go:193] Created replication controller with name: externalname-service, namespace: services-973, replica count: 2 -I0612 21:00:42.275609 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:00:45.280127 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:00:48.285032 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:00:51.286045 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:00:54.287166 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:00:57.289081 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -Jun 12 21:00:57.289: INFO: Creating new exec pod -Jun 12 21:00:57.306: INFO: Waiting up to 5m0s for pod "execpod86gzp" in namespace "services-973" to be "running" -Jun 12 21:00:57.314: INFO: Pod "execpod86gzp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.29315ms -Jun 12 21:00:59.328: INFO: Pod "execpod86gzp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021928099s -Jun 12 21:01:01.330: INFO: Pod "execpod86gzp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023995435s -Jun 12 21:01:03.326: INFO: Pod "execpod86gzp": Phase="Running", Reason="", readiness=true. Elapsed: 6.020403393s -Jun 12 21:01:03.327: INFO: Pod "execpod86gzp" satisfied condition "running" -Jun 12 21:01:04.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-973 exec execpod86gzp -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' -Jun 12 21:01:04.828: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" -Jun 12 21:01:04.828: INFO: stdout: "" -Jun 12 21:01:04.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-973 exec execpod86gzp -- /bin/sh -x -c nc -v -z -w 2 172.21.69.220 80' -Jun 12 21:01:05.330: INFO: stderr: "+ nc -v -z -w 2 172.21.69.220 80\nConnection to 172.21.69.220 80 port [tcp/http] succeeded!\n" -Jun 12 21:01:05.330: INFO: stdout: "" -Jun 12 21:01:05.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-973 exec execpod86gzp -- /bin/sh -x -c nc -v -z -w 2 10.138.75.112 31630' -Jun 12 21:01:05.830: INFO: stderr: "+ nc -v -z -w 2 10.138.75.112 31630\nConnection to 10.138.75.112 31630 port [tcp/*] succeeded!\n" -Jun 12 21:01:05.830: INFO: stdout: "" -Jun 12 21:01:05.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-973 exec execpod86gzp -- /bin/sh -x -c nc -v -z -w 2 10.138.75.116 31630' -Jun 12 21:01:06.352: INFO: stderr: "+ nc -v -z -w 2 10.138.75.116 31630\nConnection to 10.138.75.116 31630 port [tcp/*] succeeded!\n" -Jun 12 21:01:06.352: INFO: stdout: "" -Jun 12 21:01:06.352: INFO: Cleaning up the ExternalName to NodePort test service -[AfterEach] [sig-network] Services +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:175 +Jul 27 01:47:35.040: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node +STEP: Creating configMap with name configmap-test-upd-fe9b3b8b-f8e2-451a-a34e-fc0cb2c3310e 07/27/23 01:47:35.041 +STEP: Creating the pod 07/27/23 01:47:35.06 +Jul 27 01:47:35.092: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ee1b9cd-77ba-44db-b58f-f73e4ea45e3c" in namespace "configmap-78" to be "running" +Jul 27 01:47:35.103: INFO: Pod "pod-configmaps-5ee1b9cd-77ba-44db-b58f-f73e4ea45e3c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.524067ms +Jul 27 01:47:37.116: INFO: Pod "pod-configmaps-5ee1b9cd-77ba-44db-b58f-f73e4ea45e3c": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.023683964s +Jul 27 01:47:37.116: INFO: Pod "pod-configmaps-5ee1b9cd-77ba-44db-b58f-f73e4ea45e3c" satisfied condition "running" +STEP: Waiting for pod with text data 07/27/23 01:47:37.116 +STEP: Waiting for pod with binary data 07/27/23 01:47:37.135 +[AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 21:01:06.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 01:47:37.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "services-973" for this suite. 06/12/23 21:01:06.513 +STEP: Destroying namespace "configmap-78" for this suite. 07/27/23 01:47:37.181 ------------------------------ -• [SLOW TEST] [27.610 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should be able to change the type from ExternalName to NodePort [Conformance] - test/e2e/network/service.go:1477 +• [2.251 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:175 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:00:38.939 - Jun 12 21:00:38.939: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:00:38.945 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:00:39 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:00:39.019 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 01:47:34.953 + Jul 27 01:47:34.953: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 01:47:34.954 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:35.005 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:35.017 + [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should be able to change the type from ExternalName to NodePort [Conformance] - test/e2e/network/service.go:1477 - STEP: creating a service externalname-service with the type=ExternalName in namespace services-973 06/12/23 21:00:39.037 - STEP: changing the ExternalName service to type=NodePort 06/12/23 21:00:39.075 - STEP: creating replication controller externalname-service in namespace services-973 06/12/23 21:00:39.183 - I0612 21:00:39.211185 23 runners.go:193] Created replication controller with name: externalname-service, namespace: services-973, replica count: 2 - I0612 21:00:42.275609 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:00:45.280127 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady - I0612 21:00:48.285032 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:00:51.286045 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:00:54.287166 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:00:57.289081 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - Jun 12 21:00:57.289: INFO: Creating new exec pod - Jun 12 21:00:57.306: INFO: Waiting up to 5m0s for pod "execpod86gzp" in namespace "services-973" to be "running" - Jun 12 21:00:57.314: INFO: Pod "execpod86gzp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.29315ms - Jun 12 21:00:59.328: INFO: Pod "execpod86gzp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021928099s - Jun 12 21:01:01.330: INFO: Pod "execpod86gzp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023995435s - Jun 12 21:01:03.326: INFO: Pod "execpod86gzp": Phase="Running", Reason="", readiness=true. Elapsed: 6.020403393s - Jun 12 21:01:03.327: INFO: Pod "execpod86gzp" satisfied condition "running" - Jun 12 21:01:04.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-973 exec execpod86gzp -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' - Jun 12 21:01:04.828: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" - Jun 12 21:01:04.828: INFO: stdout: "" - Jun 12 21:01:04.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-973 exec execpod86gzp -- /bin/sh -x -c nc -v -z -w 2 172.21.69.220 80' - Jun 12 21:01:05.330: INFO: stderr: "+ nc -v -z -w 2 172.21.69.220 80\nConnection to 172.21.69.220 80 port [tcp/http] succeeded!\n" - Jun 12 21:01:05.330: INFO: stdout: "" - Jun 12 21:01:05.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-973 exec execpod86gzp -- /bin/sh -x -c nc -v -z -w 2 10.138.75.112 31630' - Jun 12 21:01:05.830: INFO: stderr: "+ nc -v -z -w 2 10.138.75.112 31630\nConnection to 10.138.75.112 31630 port [tcp/*] succeeded!\n" - Jun 12 21:01:05.830: INFO: stdout: "" - Jun 12 21:01:05.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-973 exec execpod86gzp -- /bin/sh -x -c nc -v -z -w 2 10.138.75.116 31630' - Jun 12 21:01:06.352: INFO: stderr: "+ nc -v -z -w 2 10.138.75.116 31630\nConnection to 10.138.75.116 31630 port [tcp/*] succeeded!\n" - Jun 12 21:01:06.352: INFO: stdout: "" - Jun 12 21:01:06.352: INFO: Cleaning up the ExternalName to NodePort test service - [AfterEach] [sig-network] Services + [It] binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:175 + Jul 27 01:47:35.040: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node + STEP: Creating configMap with name configmap-test-upd-fe9b3b8b-f8e2-451a-a34e-fc0cb2c3310e 07/27/23 01:47:35.041 + STEP: Creating the pod 07/27/23 01:47:35.06 + Jul 27 01:47:35.092: INFO: Waiting up to 
5m0s for pod "pod-configmaps-5ee1b9cd-77ba-44db-b58f-f73e4ea45e3c" in namespace "configmap-78" to be "running" + Jul 27 01:47:35.103: INFO: Pod "pod-configmaps-5ee1b9cd-77ba-44db-b58f-f73e4ea45e3c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.524067ms + Jul 27 01:47:37.116: INFO: Pod "pod-configmaps-5ee1b9cd-77ba-44db-b58f-f73e4ea45e3c": Phase="Running", Reason="", readiness=false. Elapsed: 2.023683964s + Jul 27 01:47:37.116: INFO: Pod "pod-configmaps-5ee1b9cd-77ba-44db-b58f-f73e4ea45e3c" satisfied condition "running" + STEP: Waiting for pod with text data 07/27/23 01:47:37.116 + STEP: Waiting for pod with binary data 07/27/23 01:47:37.135 + [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 21:01:06.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 01:47:37.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "services-973" for this suite. 06/12/23 21:01:06.513 + STEP: Destroying namespace "configmap-78" for this suite. 07/27/23 01:47:37.181 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSSSS ------------------------------ -[sig-apps] ReplicationController - should serve a basic image on each replica with a public image [Conformance] - test/e2e/apps/rc.go:67 -[BeforeEach] [sig-apps] ReplicationController +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 +[BeforeEach] [sig-node] Security Context set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:01:06.553 -Jun 12 21:01:06.554: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replication-controller 06/12/23 21:01:06.555 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:01:06.677 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:01:06.703 -[BeforeEach] [sig-apps] ReplicationController +STEP: Creating a kubernetes client 07/27/23 01:47:37.204 +Jul 27 01:47:37.204: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename security-context-test 07/27/23 01:47:37.206 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:37.252 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:37.266 +[BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 -[It] should serve a basic image on each replica with a public image [Conformance] - test/e2e/apps/rc.go:67 -STEP: Creating replication controller my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996 06/12/23 21:01:06.72 -Jun 12 21:01:06.769: INFO: Pod name my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996: Found 1 pods out of 1 -Jun 12 21:01:06.769: INFO: Ensuring all pods for ReplicationController 
"my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996" are running -Jun 12 21:01:06.769: INFO: Waiting up to 5m0s for pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8" in namespace "replication-controller-298" to be "running" -Jun 12 21:01:06.799: INFO: Pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.199648ms -Jun 12 21:01:08.810: INFO: Pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04039152s -Jun 12 21:01:10.811: INFO: Pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8": Phase="Running", Reason="", readiness=true. Elapsed: 4.041177156s -Jun 12 21:01:10.811: INFO: Pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8" satisfied condition "running" -Jun 12 21:01:10.811: INFO: Pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8" is running (conditions: []) -Jun 12 21:01:10.811: INFO: Trying to dial the pod -Jun 12 21:01:15.855: INFO: Controller my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996: Got expected result from replica 1 [my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8]: "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8", 1 of 1 required successes so far -[AfterEach] [sig-apps] ReplicationController +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 +Jul 27 01:47:37.307: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb" in namespace "security-context-test-8977" to be "Succeeded or Failed" +Jul 27 01:47:37.316: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.016147ms +Jul 27 01:47:39.327: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019354218s +Jul 27 01:47:41.325: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017924691s +Jul 27 01:47:43.332: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024833357s +Jul 27 01:47:45.326: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019012907s +Jul 27 01:47:47.326: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018300692s +Jul 27 01:47:49.326: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.018912453s +Jul 27 01:47:51.328: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.020490599s +Jul 27 01:47:51.328: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 -Jun 12 21:01:15.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicationController +Jul 27 01:47:51.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 -STEP: Destroying namespace "replication-controller-298" for this suite. 06/12/23 21:01:15.891 +STEP: Destroying namespace "security-context-test-8977" for this suite. 07/27/23 01:47:51.36 ------------------------------ -• [SLOW TEST] [9.355 seconds] -[sig-apps] ReplicationController -test/e2e/apps/framework.go:23 - should serve a basic image on each replica with a public image [Conformance] - test/e2e/apps/rc.go:67 +• [SLOW TEST] [14.179 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + when creating containers with AllowPrivilegeEscalation + test/e2e/common/node/security_context.go:555 + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicationController + [BeforeEach] [sig-node] Security Context set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:01:06.553 - Jun 12 21:01:06.554: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replication-controller 06/12/23 21:01:06.555 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:01:06.677 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:01:06.703 - [BeforeEach] [sig-apps] ReplicationController + STEP: Creating a kubernetes client 07/27/23 01:47:37.204 + Jul 27 01:47:37.204: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename security-context-test 07/27/23 01:47:37.206 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:37.252 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:37.266 + [BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 - [It] should serve a basic image on each replica with a public image [Conformance] - test/e2e/apps/rc.go:67 - STEP: Creating replication controller my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996 06/12/23 21:01:06.72 - Jun 12 21:01:06.769: INFO: Pod name my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996: Found 1 pods out of 1 - Jun 12 21:01:06.769: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996" are running - Jun 12 21:01:06.769: INFO: Waiting up to 5m0s for pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8" in namespace "replication-controller-298" to be "running" - Jun 12 21:01:06.799: INFO: Pod 
"my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.199648ms - Jun 12 21:01:08.810: INFO: Pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04039152s - Jun 12 21:01:10.811: INFO: Pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8": Phase="Running", Reason="", readiness=true. Elapsed: 4.041177156s - Jun 12 21:01:10.811: INFO: Pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8" satisfied condition "running" - Jun 12 21:01:10.811: INFO: Pod "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8" is running (conditions: []) - Jun 12 21:01:10.811: INFO: Trying to dial the pod - Jun 12 21:01:15.855: INFO: Controller my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996: Got expected result from replica 1 [my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8]: "my-hostname-basic-21f4de9c-c442-4c9c-b933-c0909c8e6996-rxzl8", 1 of 1 required successes so far - [AfterEach] [sig-apps] ReplicationController + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 + Jul 27 01:47:37.307: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb" in namespace "security-context-test-8977" to be "Succeeded or Failed" + Jul 27 01:47:37.316: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.016147ms + Jul 27 01:47:39.327: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019354218s + Jul 27 01:47:41.325: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017924691s + Jul 27 01:47:43.332: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024833357s + Jul 27 01:47:45.326: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019012907s + Jul 27 01:47:47.326: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018300692s + Jul 27 01:47:49.326: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.018912453s + Jul 27 01:47:51.328: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.020490599s + Jul 27 01:47:51.328: INFO: Pod "alpine-nnp-false-c22faa8b-7e38-4711-9c9c-58c43681a6cb" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 - Jun 12 21:01:15.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicationController + Jul 27 01:47:51.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 - STEP: Destroying namespace "replication-controller-298" for this suite. 06/12/23 21:01:15.891 + STEP: Destroying namespace "security-context-test-8977" for this suite. 07/27/23 01:47:51.36 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSS ------------------------------ -[sig-storage] Subpath Atomic writer volumes - should support subpaths with downward pod [Conformance] - test/e2e/storage/subpath.go:92 -[BeforeEach] [sig-storage] Subpath +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 +[BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:01:15.912 -Jun 12 21:01:15.912: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename subpath 06/12/23 21:01:15.916 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:01:15.981 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:01:16.015 -[BeforeEach] [sig-storage] Subpath +STEP: Creating a kubernetes client 07/27/23 01:47:51.383 +Jul 27 01:47:51.383: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 01:47:51.384 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:51.474 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:51.483 +[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] Atomic writer volumes - test/e2e/storage/subpath.go:40 -STEP: Setting up data 06/12/23 21:01:16.023 -[It] should support subpaths with downward pod [Conformance] - test/e2e/storage/subpath.go:92 -STEP: Creating pod pod-subpath-test-downwardapi-224z 06/12/23 21:01:16.055 -STEP: Creating a pod to test atomic-volume-subpath 06/12/23 21:01:16.056 -Jun 12 21:01:16.089: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-224z" in namespace "subpath-5306" to be "Succeeded or Failed" -Jun 12 21:01:16.107: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Pending", Reason="", readiness=false. Elapsed: 17.590725ms -Jun 12 21:01:18.117: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027694739s -Jun 12 21:01:20.150: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 4.06075974s -Jun 12 21:01:22.117: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.027957506s -Jun 12 21:01:24.148: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 8.059281261s -Jun 12 21:01:26.116: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 10.027407444s -Jun 12 21:01:28.118: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 12.029109217s -Jun 12 21:01:30.121: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 14.031781512s -Jun 12 21:01:32.149: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 16.059645346s -Jun 12 21:01:34.118: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 18.028845247s -Jun 12 21:01:36.117: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 20.027830538s -Jun 12 21:01:38.121: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 22.031711225s -Jun 12 21:01:40.119: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=false. Elapsed: 24.030284937s -Jun 12 21:01:42.118: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.029156002s -STEP: Saw pod success 06/12/23 21:01:42.118 -Jun 12 21:01:42.119: INFO: Pod "pod-subpath-test-downwardapi-224z" satisfied condition "Succeeded or Failed" -Jun 12 21:01:42.129: INFO: Trying to get logs from node 10.138.75.112 pod pod-subpath-test-downwardapi-224z container test-container-subpath-downwardapi-224z: -STEP: delete the pod 06/12/23 21:01:42.194 -Jun 12 21:01:42.226: INFO: Waiting for pod pod-subpath-test-downwardapi-224z to disappear -Jun 12 21:01:42.236: INFO: Pod pod-subpath-test-downwardapi-224z no longer exists -STEP: Deleting pod pod-subpath-test-downwardapi-224z 06/12/23 21:01:42.236 -Jun 12 21:01:42.236: INFO: Deleting pod "pod-subpath-test-downwardapi-224z" in namespace "subpath-5306" -[AfterEach] [sig-storage] Subpath +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 +STEP: Creating the pod 07/27/23 01:47:51.493 +Jul 27 01:47:51.521: INFO: Waiting up to 5m0s for pod "annotationupdate345f9478-f545-488d-8135-854313a09e0c" in namespace "projected-3892" to be "running and ready" +Jul 27 01:47:51.530: INFO: Pod "annotationupdate345f9478-f545-488d-8135-854313a09e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.758995ms +Jul 27 01:47:51.530: INFO: The phase of Pod annotationupdate345f9478-f545-488d-8135-854313a09e0c is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:47:53.539: INFO: Pod "annotationupdate345f9478-f545-488d-8135-854313a09e0c": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.017786422s +Jul 27 01:47:53.539: INFO: The phase of Pod annotationupdate345f9478-f545-488d-8135-854313a09e0c is Running (Ready = true) +Jul 27 01:47:53.539: INFO: Pod "annotationupdate345f9478-f545-488d-8135-854313a09e0c" satisfied condition "running and ready" +Jul 27 01:47:54.113: INFO: Successfully updated pod "annotationupdate345f9478-f545-488d-8135-854313a09e0c" +[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 21:01:42.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Subpath +Jul 27 01:47:58.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Subpath +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Subpath +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "subpath-5306" for this suite. 06/12/23 21:01:42.265 +STEP: Destroying namespace "projected-3892" for this suite. 07/27/23 01:47:58.274 ------------------------------ -• [SLOW TEST] [26.367 seconds] -[sig-storage] Subpath -test/e2e/storage/utils/framework.go:23 - Atomic writer volumes - test/e2e/storage/subpath.go:36 - should support subpaths with downward pod [Conformance] - test/e2e/storage/subpath.go:92 +• [SLOW TEST] [6.917 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Subpath + [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:01:15.912 - Jun 12 21:01:15.912: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename subpath 06/12/23 21:01:15.916 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:01:15.981 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:01:16.015 - [BeforeEach] [sig-storage] Subpath + STEP: Creating a kubernetes client 07/27/23 01:47:51.383 + Jul 27 01:47:51.383: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 01:47:51.384 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:51.474 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:51.483 + [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] Atomic writer volumes - test/e2e/storage/subpath.go:40 - STEP: Setting up data 06/12/23 21:01:16.023 - [It] should support subpaths with downward pod [Conformance] - test/e2e/storage/subpath.go:92 - STEP: Creating pod pod-subpath-test-downwardapi-224z 06/12/23 21:01:16.055 - STEP: Creating a pod to test atomic-volume-subpath 06/12/23 21:01:16.056 - Jun 12 21:01:16.089: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-224z" in namespace "subpath-5306" to be "Succeeded or Failed" - Jun 12 21:01:16.107: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.590725ms - Jun 12 21:01:18.117: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027694739s - Jun 12 21:01:20.150: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 4.06075974s - Jun 12 21:01:22.117: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 6.027957506s - Jun 12 21:01:24.148: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 8.059281261s - Jun 12 21:01:26.116: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 10.027407444s - Jun 12 21:01:28.118: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 12.029109217s - Jun 12 21:01:30.121: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 14.031781512s - Jun 12 21:01:32.149: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 16.059645346s - Jun 12 21:01:34.118: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 18.028845247s - Jun 12 21:01:36.117: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 20.027830538s - Jun 12 21:01:38.121: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=true. Elapsed: 22.031711225s - Jun 12 21:01:40.119: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Running", Reason="", readiness=false. Elapsed: 24.030284937s - Jun 12 21:01:42.118: INFO: Pod "pod-subpath-test-downwardapi-224z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.029156002s - STEP: Saw pod success 06/12/23 21:01:42.118 - Jun 12 21:01:42.119: INFO: Pod "pod-subpath-test-downwardapi-224z" satisfied condition "Succeeded or Failed" - Jun 12 21:01:42.129: INFO: Trying to get logs from node 10.138.75.112 pod pod-subpath-test-downwardapi-224z container test-container-subpath-downwardapi-224z: - STEP: delete the pod 06/12/23 21:01:42.194 - Jun 12 21:01:42.226: INFO: Waiting for pod pod-subpath-test-downwardapi-224z to disappear - Jun 12 21:01:42.236: INFO: Pod pod-subpath-test-downwardapi-224z no longer exists - STEP: Deleting pod pod-subpath-test-downwardapi-224z 06/12/23 21:01:42.236 - Jun 12 21:01:42.236: INFO: Deleting pod "pod-subpath-test-downwardapi-224z" in namespace "subpath-5306" - [AfterEach] [sig-storage] Subpath + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 + STEP: Creating the pod 07/27/23 01:47:51.493 + Jul 27 01:47:51.521: INFO: Waiting up to 5m0s for pod "annotationupdate345f9478-f545-488d-8135-854313a09e0c" in namespace "projected-3892" to be "running and ready" + Jul 27 01:47:51.530: INFO: Pod "annotationupdate345f9478-f545-488d-8135-854313a09e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.758995ms + Jul 27 01:47:51.530: INFO: The phase of Pod annotationupdate345f9478-f545-488d-8135-854313a09e0c is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:47:53.539: INFO: Pod "annotationupdate345f9478-f545-488d-8135-854313a09e0c": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.017786422s + Jul 27 01:47:53.539: INFO: The phase of Pod annotationupdate345f9478-f545-488d-8135-854313a09e0c is Running (Ready = true) + Jul 27 01:47:53.539: INFO: Pod "annotationupdate345f9478-f545-488d-8135-854313a09e0c" satisfied condition "running and ready" + Jul 27 01:47:54.113: INFO: Successfully updated pod "annotationupdate345f9478-f545-488d-8135-854313a09e0c" + [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 21:01:42.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Subpath + Jul 27 01:47:58.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Subpath + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Subpath + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "subpath-5306" for this suite. 06/12/23 21:01:42.265 + STEP: Destroying namespace "projected-3892" for this suite. 07/27/23 01:47:58.274 << End Captured GinkgoWriter Output ------------------------------ -[sig-apps] CronJob - should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] - test/e2e/apps/cronjob.go:124 -[BeforeEach] [sig-apps] CronJob +SSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 +[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:01:42.279 -Jun 12 21:01:42.282: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename cronjob 06/12/23 21:01:42.284 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:01:42.359 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:01:42.374 -[BeforeEach] [sig-apps] CronJob +STEP: Creating a kubernetes client 07/27/23 01:47:58.3 +Jul 27 01:47:58.301: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename statefulset 07/27/23 01:47:58.301 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:58.35 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:58.359 +[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 -[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] - test/e2e/apps/cronjob.go:124 -STEP: Creating a ForbidConcurrent cronjob 06/12/23 21:01:42.389 -STEP: Ensuring a job is scheduled 06/12/23 21:01:42.407 -STEP: Ensuring exactly one is scheduled 06/12/23 21:02:00.417 -STEP: Ensuring exactly one running job exists by listing jobs explicitly 06/12/23 21:02:00.443 -STEP: Ensuring no more jobs are scheduled 06/12/23 21:02:00.452 -STEP: Removing cronjob 06/12/23 21:07:00.502 -[AfterEach] [sig-apps] CronJob +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-3399 07/27/23 01:47:58.374 +[It] Scaling should happen in predictable order and halt if any stateful 
pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 +STEP: Initializing watcher for selector baz=blah,foo=bar 07/27/23 01:47:58.398 +STEP: Creating stateful set ss in namespace statefulset-3399 07/27/23 01:47:58.408 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3399 07/27/23 01:47:58.448 +Jul 27 01:47:58.457: INFO: Found 0 stateful pods, waiting for 1 +Jul 27 01:48:08.467: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 07/27/23 01:48:08.467 +Jul 27 01:48:08.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jul 27 01:48:09.202: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jul 27 01:48:09.202: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jul 27 01:48:09.202: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jul 27 01:48:09.230: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jul 27 01:48:09.230: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 01:48:09.300: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999754s +Jul 27 01:48:10.310: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989599945s +Jul 27 01:48:11.319: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.980445207s +Jul 27 01:48:12.335: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971228479s +Jul 27 01:48:13.345: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955103112s +Jul 27 01:48:14.357: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.942822489s +Jul 27 01:48:15.368: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.932009731s +Jul 27 01:48:16.378: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.922286123s +Jul 27 01:48:17.393: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.911351497s +Jul 27 01:48:18.403: INFO: Verifying statefulset ss doesn't scale past 1 for another 895.865608ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3399 07/27/23 01:48:19.403 +Jul 27 01:48:19.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jul 27 01:48:19.664: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jul 27 01:48:19.664: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jul 27 01:48:19.664: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jul 27 01:48:19.678: INFO: Found 1 stateful pods, waiting for 3 +Jul 27 01:48:29.722: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 01:48:29.722: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 01:48:29.722: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled 
up in order 07/27/23 01:48:29.722 +STEP: Scale down will halt with unhealthy stateful pod 07/27/23 01:48:29.722 +Jul 27 01:48:29.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jul 27 01:48:30.096: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jul 27 01:48:30.096: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jul 27 01:48:30.096: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jul 27 01:48:30.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jul 27 01:48:30.388: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jul 27 01:48:30.388: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jul 27 01:48:30.388: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jul 27 01:48:30.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jul 27 01:48:30.659: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jul 27 01:48:30.659: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jul 27 01:48:30.659: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jul 27 01:48:30.659: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 01:48:30.672: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Jul 27 01:48:40.701: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jul 27 01:48:40.701: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jul 27 01:48:40.701: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jul 27 01:48:40.753: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999682s +Jul 27 01:48:41.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980327887s +Jul 27 01:48:42.774: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970350534s +Jul 27 01:48:43.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.959342514s +Jul 27 01:48:44.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.947799078s +Jul 27 01:48:45.808: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.935315854s +Jul 27 01:48:46.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.924407058s +Jul 27 01:48:47.842: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.901525246s +Jul 27 01:48:48.853: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.891134979s +Jul 27 01:48:49.868: INFO: Verifying statefulset ss doesn't scale past 3 for another 879.298429ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3399 07/27/23 01:48:50.869 +Jul 27 01:48:50.881: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jul 27 01:48:51.109: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jul 27 01:48:51.109: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jul 27 01:48:51.109: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jul 27 01:48:51.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jul 27 01:48:51.349: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jul 27 01:48:51.349: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jul 27 01:48:51.349: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jul 27 01:48:51.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jul 27 01:48:51.630: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jul 27 01:48:51.630: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jul 27 01:48:51.630: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jul 27 01:48:51.630: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order 07/27/23 01:49:01.683 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jul 27 01:49:01.683: INFO: Deleting all statefulset in ns statefulset-3399 +Jul 27 01:49:01.756: INFO: Scaling statefulset ss to 0 +Jul 27 01:49:01.829: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 01:49:01.851: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 -Jun 12 21:07:00.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] CronJob +Jul 27 01:49:01.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] CronJob +[DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] CronJob +[DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 -STEP: Destroying namespace "cronjob-4087" for this suite. 06/12/23 21:07:00.532 +STEP: Destroying namespace "statefulset-3399" for this suite. 
07/27/23 01:49:02.031 ------------------------------ -• [SLOW TEST] [318.268 seconds] -[sig-apps] CronJob +• [SLOW TEST] [63.758 seconds] +[sig-apps] StatefulSet test/e2e/apps/framework.go:23 - should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] - test/e2e/apps/cronjob.go:124 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] CronJob + [BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:01:42.279 - Jun 12 21:01:42.282: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename cronjob 06/12/23 21:01:42.284 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:01:42.359 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:01:42.374 - [BeforeEach] [sig-apps] CronJob + STEP: Creating a kubernetes client 07/27/23 01:47:58.3 + Jul 27 01:47:58.301: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename statefulset 07/27/23 01:47:58.301 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:47:58.35 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:47:58.359 + [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 - [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] - test/e2e/apps/cronjob.go:124 - STEP: Creating a ForbidConcurrent cronjob 06/12/23 21:01:42.389 - STEP: Ensuring a job is scheduled 06/12/23 21:01:42.407 - STEP: Ensuring exactly one is scheduled 06/12/23 21:02:00.417 - STEP: Ensuring exactly one running job exists by listing jobs explicitly 06/12/23 21:02:00.443 - STEP: Ensuring no more jobs are scheduled 06/12/23 21:02:00.452 - STEP: Removing cronjob 06/12/23 21:07:00.502 - [AfterEach] [sig-apps] CronJob + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-3399 07/27/23 01:47:58.374 + [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 + STEP: Initializing watcher for selector baz=blah,foo=bar 07/27/23 01:47:58.398 + STEP: Creating stateful set ss in namespace statefulset-3399 07/27/23 01:47:58.408 + STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3399 07/27/23 01:47:58.448 + Jul 27 01:47:58.457: INFO: Found 0 stateful pods, waiting for 1 + Jul 27 01:48:08.467: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 07/27/23 01:48:08.467 + Jul 27 01:48:08.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jul 27 01:48:09.202: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jul 27 01:48:09.202: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jul 27 01:48:09.202: INFO: stdout of mv 
-v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jul 27 01:48:09.230: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Jul 27 01:48:09.230: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 01:48:09.300: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999754s + Jul 27 01:48:10.310: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989599945s + Jul 27 01:48:11.319: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.980445207s + Jul 27 01:48:12.335: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971228479s + Jul 27 01:48:13.345: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955103112s + Jul 27 01:48:14.357: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.942822489s + Jul 27 01:48:15.368: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.932009731s + Jul 27 01:48:16.378: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.922286123s + Jul 27 01:48:17.393: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.911351497s + Jul 27 01:48:18.403: INFO: Verifying statefulset ss doesn't scale past 1 for another 895.865608ms + STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3399 07/27/23 01:48:19.403 + Jul 27 01:48:19.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jul 27 01:48:19.664: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jul 27 01:48:19.664: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jul 27 01:48:19.664: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jul 27 01:48:19.678: INFO: Found 1 stateful pods, waiting for 3 + Jul 27 01:48:29.722: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 01:48:29.722: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 01:48:29.722: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Verifying that stateful set ss was scaled up in order 07/27/23 01:48:29.722 + STEP: Scale down will halt with unhealthy stateful pod 07/27/23 01:48:29.722 + Jul 27 01:48:29.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jul 27 01:48:30.096: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jul 27 01:48:30.096: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jul 27 01:48:30.096: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jul 27 01:48:30.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jul 27 01:48:30.388: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jul 27 01:48:30.388: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jul 27 01:48:30.388: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jul 27 01:48:30.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jul 27 01:48:30.659: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jul 27 01:48:30.659: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jul 27 01:48:30.659: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jul 27 01:48:30.659: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 01:48:30.672: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 + Jul 27 01:48:40.701: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Jul 27 01:48:40.701: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false + Jul 27 01:48:40.701: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false + Jul 27 01:48:40.753: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999682s + Jul 27 01:48:41.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980327887s + Jul 27 01:48:42.774: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970350534s + Jul 27 01:48:43.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.959342514s + Jul 27 01:48:44.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.947799078s + Jul 27 01:48:45.808: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.935315854s + Jul 27 01:48:46.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.924407058s + Jul 27 01:48:47.842: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.901525246s + Jul 27 01:48:48.853: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.891134979s + Jul 27 01:48:49.868: INFO: Verifying statefulset ss doesn't scale past 3 for another 879.298429ms + STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3399 07/27/23 01:48:50.869 + Jul 27 01:48:50.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jul 27 01:48:51.109: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jul 27 01:48:51.109: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jul 27 01:48:51.109: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jul 27 01:48:51.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jul 27 01:48:51.349: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jul 27 01:48:51.349: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jul 27 01:48:51.349: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on 
ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jul 27 01:48:51.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3399 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jul 27 01:48:51.630: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jul 27 01:48:51.630: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jul 27 01:48:51.630: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jul 27 01:48:51.630: INFO: Scaling statefulset ss to 0 + STEP: Verifying that stateful set ss was scaled down in reverse order 07/27/23 01:49:01.683 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jul 27 01:49:01.683: INFO: Deleting all statefulset in ns statefulset-3399 + Jul 27 01:49:01.756: INFO: Scaling statefulset ss to 0 + Jul 27 01:49:01.829: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 01:49:01.851: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 - Jun 12 21:07:00.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] CronJob + Jul 27 01:49:01.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] CronJob + [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] CronJob + [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 - STEP: Destroying namespace "cronjob-4087" for this suite. 06/12/23 21:07:00.532 + STEP: Destroying namespace "statefulset-3399" for this suite. 
07/27/23 01:49:02.031 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSSSSS ------------------------------ -[sig-apps] ReplicaSet - should validate Replicaset Status endpoints [Conformance] - test/e2e/apps/replica_set.go:176 -[BeforeEach] [sig-apps] ReplicaSet +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 +[BeforeEach] [sig-node] Kubelet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:07:00.548 -Jun 12 21:07:00.548: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replicaset 06/12/23 21:07:00.549 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:00.628 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:00.637 -[BeforeEach] [sig-apps] ReplicaSet +STEP: Creating a kubernetes client 07/27/23 01:49:02.06 +Jul 27 01:49:02.060: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubelet-test 07/27/23 01:49:02.06 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:49:02.209 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:49:02.259 +[BeforeEach] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:31 -[It] should validate Replicaset Status endpoints [Conformance] - test/e2e/apps/replica_set.go:176 -STEP: Create a Replicaset 06/12/23 21:07:00.709 -STEP: Verify that the required pods have come up. 06/12/23 21:07:00.749 -Jun 12 21:07:00.763: INFO: Pod name sample-pod: Found 1 pods out of 1 -STEP: ensuring each pod is running 06/12/23 21:07:00.763 -Jun 12 21:07:00.764: INFO: Waiting up to 5m0s for pod "test-rs-4mwrt" in namespace "replicaset-869" to be "running" -Jun 12 21:07:00.775: INFO: Pod "test-rs-4mwrt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.608633ms -Jun 12 21:07:02.789: INFO: Pod "test-rs-4mwrt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024958681s -Jun 12 21:07:04.786: INFO: Pod "test-rs-4mwrt": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.022517391s -Jun 12 21:07:04.786: INFO: Pod "test-rs-4mwrt" satisfied condition "running" -STEP: Getting /status 06/12/23 21:07:04.786 -Jun 12 21:07:04.799: INFO: Replicaset test-rs has Conditions: [] -STEP: updating the Replicaset Status 06/12/23 21:07:04.799 -Jun 12 21:07:04.825: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} -STEP: watching for the ReplicaSet status to be updated 06/12/23 21:07:04.825 -Jun 12 21:07:04.831: INFO: Observed &ReplicaSet event: ADDED -Jun 12 21:07:04.832: INFO: Observed &ReplicaSet event: MODIFIED -Jun 12 21:07:04.832: INFO: Observed &ReplicaSet event: MODIFIED -Jun 12 21:07:04.833: INFO: Observed &ReplicaSet event: MODIFIED -Jun 12 21:07:04.833: INFO: Found replicaset test-rs in namespace replicaset-869 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] -Jun 12 21:07:04.833: INFO: Replicaset test-rs has an updated status -STEP: patching the Replicaset Status 06/12/23 21:07:04.833 -Jun 12 21:07:04.833: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} -Jun 12 21:07:04.852: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} -STEP: watching for the Replicaset status to be patched 06/12/23 21:07:04.852 -Jun 12 21:07:04.858: INFO: Observed &ReplicaSet event: ADDED -Jun 12 21:07:04.859: INFO: Observed &ReplicaSet event: MODIFIED -Jun 12 21:07:04.859: INFO: Observed &ReplicaSet event: MODIFIED -Jun 12 21:07:04.860: INFO: Observed &ReplicaSet event: MODIFIED -Jun 12 21:07:04.861: INFO: Observed replicaset test-rs in namespace replicaset-869 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} -Jun 12 21:07:04.861: INFO: Observed &ReplicaSet event: MODIFIED -Jun 12 21:07:04.861: INFO: Found replicaset test-rs in namespace replicaset-869 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } -Jun 12 21:07:04.861: INFO: Replicaset test-rs has a patched status -[AfterEach] [sig-apps] ReplicaSet +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 +Jul 27 01:49:02.345: INFO: Waiting up to 5m0s for pod "busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40" in namespace "kubelet-test-4470" to be "running and ready" +Jul 27 01:49:02.366: INFO: Pod "busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40": Phase="Pending", Reason="", readiness=false. Elapsed: 21.126178ms +Jul 27 01:49:02.366: INFO: The phase of Pod busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:49:04.375: INFO: Pod "busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.029904754s +Jul 27 01:49:04.375: INFO: The phase of Pod busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40 is Running (Ready = true) +Jul 27 01:49:04.375: INFO: Pod "busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40" satisfied condition "running and ready" +[AfterEach] [sig-node] Kubelet test/e2e/framework/node/init/init.go:32 -Jun 12 21:07:04.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicaSet +Jul 27 01:49:04.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-node] Kubelet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-node] Kubelet tear down framework | framework.go:193 -STEP: Destroying namespace "replicaset-869" for this suite. 06/12/23 21:07:04.878 +STEP: Destroying namespace "kubelet-test-4470" for this suite. 07/27/23 01:49:04.425 ------------------------------ -• [4.349 seconds] -[sig-apps] ReplicaSet -test/e2e/apps/framework.go:23 - should validate Replicaset Status endpoints [Conformance] - test/e2e/apps/replica_set.go:176 +• [2.386 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command in a pod + test/e2e/common/node/kubelet.go:44 + should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicaSet + [BeforeEach] [sig-node] Kubelet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:07:00.548 - Jun 12 21:07:00.548: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replicaset 06/12/23 21:07:00.549 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:00.628 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:00.637 - [BeforeEach] [sig-apps] ReplicaSet + STEP: Creating a kubernetes client 07/27/23 01:49:02.06 + Jul 27 01:49:02.060: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubelet-test 07/27/23 01:49:02.06 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:49:02.209 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:49:02.259 + [BeforeEach] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:31 - [It] should validate Replicaset Status endpoints [Conformance] - test/e2e/apps/replica_set.go:176 - STEP: Create a Replicaset 06/12/23 21:07:00.709 - STEP: Verify that the required pods have come up. 06/12/23 21:07:00.749 - Jun 12 21:07:00.763: INFO: Pod name sample-pod: Found 1 pods out of 1 - STEP: ensuring each pod is running 06/12/23 21:07:00.763 - Jun 12 21:07:00.764: INFO: Waiting up to 5m0s for pod "test-rs-4mwrt" in namespace "replicaset-869" to be "running" - Jun 12 21:07:00.775: INFO: Pod "test-rs-4mwrt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.608633ms - Jun 12 21:07:02.789: INFO: Pod "test-rs-4mwrt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024958681s - Jun 12 21:07:04.786: INFO: Pod "test-rs-4mwrt": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.022517391s - Jun 12 21:07:04.786: INFO: Pod "test-rs-4mwrt" satisfied condition "running" - STEP: Getting /status 06/12/23 21:07:04.786 - Jun 12 21:07:04.799: INFO: Replicaset test-rs has Conditions: [] - STEP: updating the Replicaset Status 06/12/23 21:07:04.799 - Jun 12 21:07:04.825: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} - STEP: watching for the ReplicaSet status to be updated 06/12/23 21:07:04.825 - Jun 12 21:07:04.831: INFO: Observed &ReplicaSet event: ADDED - Jun 12 21:07:04.832: INFO: Observed &ReplicaSet event: MODIFIED - Jun 12 21:07:04.832: INFO: Observed &ReplicaSet event: MODIFIED - Jun 12 21:07:04.833: INFO: Observed &ReplicaSet event: MODIFIED - Jun 12 21:07:04.833: INFO: Found replicaset test-rs in namespace replicaset-869 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] - Jun 12 21:07:04.833: INFO: Replicaset test-rs has an updated status - STEP: patching the Replicaset Status 06/12/23 21:07:04.833 - Jun 12 21:07:04.833: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} - Jun 12 21:07:04.852: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} - STEP: watching for the Replicaset status to be patched 06/12/23 21:07:04.852 - Jun 12 21:07:04.858: INFO: Observed &ReplicaSet event: ADDED - Jun 12 21:07:04.859: INFO: Observed &ReplicaSet event: MODIFIED - Jun 12 21:07:04.859: INFO: Observed &ReplicaSet event: MODIFIED - Jun 12 21:07:04.860: INFO: Observed &ReplicaSet event: MODIFIED - Jun 12 21:07:04.861: INFO: Observed replicaset test-rs in namespace replicaset-869 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} - Jun 12 21:07:04.861: INFO: Observed &ReplicaSet event: MODIFIED - Jun 12 21:07:04.861: INFO: Found replicaset test-rs in namespace replicaset-869 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } - Jun 12 21:07:04.861: INFO: Replicaset test-rs has a patched status - [AfterEach] [sig-apps] ReplicaSet - test/e2e/framework/node/init/init.go:32 - Jun 12 21:07:04.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 + Jul 27 01:49:02.345: INFO: Waiting up to 5m0s for pod "busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40" in namespace "kubelet-test-4470" to be "running and ready" + Jul 27 01:49:02.366: INFO: Pod "busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40": Phase="Pending", Reason="", readiness=false. Elapsed: 21.126178ms + Jul 27 01:49:02.366: INFO: The phase of Pod busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:49:04.375: INFO: Pod "busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.029904754s + Jul 27 01:49:04.375: INFO: The phase of Pod busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40 is Running (Ready = true) + Jul 27 01:49:04.375: INFO: Pod "busybox-scheduling-cb8a19f6-765a-4197-8536-3f62b6726f40" satisfied condition "running and ready" + [AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 + Jul 27 01:49:04.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-node] Kubelet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-node] Kubelet tear down framework | framework.go:193 - STEP: Destroying namespace "replicaset-869" for this suite. 06/12/23 21:07:04.878 + STEP: Destroying namespace "kubelet-test-4470" for this suite. 07/27/23 01:49:04.425 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Projected secret - should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:78 -[BeforeEach] [sig-storage] Projected secret +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:07:04.9 -Jun 12 21:07:04.900: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:07:04.902 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:04.942 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:04.964 -[BeforeEach] [sig-storage] Projected secret +STEP: Creating a kubernetes client 07/27/23 01:49:04.449 +Jul 27 01:49:04.449: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename taint-multiple-pods 07/27/23 01:49:04.45 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:49:04.49 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:49:04.499 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:78 -STEP: Creating projection with secret that has name projected-secret-test-map-09a09dea-31ec-46f3-880a-5359cc29ee2f 06/12/23 21:07:04.979 -STEP: Creating a pod to test consume secrets 06/12/23 21:07:05 -Jun 12 21:07:05.046: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e" in namespace "projected-2031" to be "Succeeded or Failed" -Jun 12 21:07:05.058: INFO: Pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.144482ms -Jun 12 21:07:07.067: INFO: Pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02161516s -Jun 12 21:07:09.073: INFO: Pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.026838862s -Jun 12 21:07:11.073: INFO: Pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027444534s -STEP: Saw pod success 06/12/23 21:07:11.073 -Jun 12 21:07:11.074: INFO: Pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e" satisfied condition "Succeeded or Failed" -Jun 12 21:07:11.083: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e container projected-secret-volume-test: -STEP: delete the pod 06/12/23 21:07:11.142 -Jun 12 21:07:11.184: INFO: Waiting for pod pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e to disappear -Jun 12 21:07:11.197: INFO: Pod pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e no longer exists -[AfterEach] [sig-storage] Projected secret +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/node/taints.go:383 +Jul 27 01:49:04.508: INFO: Waiting up to 1m0s for all nodes to be ready +Jul 27 01:50:04.688: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 +Jul 27 01:50:04.713: INFO: Starting informer... +STEP: Starting pods... 07/27/23 01:50:04.713 +Jul 27 01:50:04.965: INFO: Pod1 is running on 10.245.128.19. Tainting Node +Jul 27 01:50:05.192: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-1749" to be "running" +Jul 27 01:50:05.202: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.06708ms +Jul 27 01:50:07.234: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 2.041390995s +Jul 27 01:50:07.234: INFO: Pod "taint-eviction-b1" satisfied condition "running" +Jul 27 01:50:07.234: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-1749" to be "running" +Jul 27 01:50:07.248: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 13.900149ms +Jul 27 01:50:07.248: INFO: Pod "taint-eviction-b2" satisfied condition "running" +Jul 27 01:50:07.248: INFO: Pod2 is running on 10.245.128.19. Tainting Node +STEP: Trying to apply a taint on the Node 07/27/23 01:50:07.248 +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 07/27/23 01:50:07.309 +STEP: Waiting for Pod1 and Pod2 to be deleted 07/27/23 01:50:07.364 +Jul 27 01:50:13.244: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Jul 27 01:50:33.316: INFO: Noticed Pod "taint-eviction-b2" gets evicted. 
+STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 07/27/23 01:50:33.351 +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 21:07:11.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected secret +Jul 27 01:50:33.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "projected-2031" for this suite. 06/12/23 21:07:11.215 +STEP: Destroying namespace "taint-multiple-pods-1749" for this suite. 07/27/23 01:50:33.37 ------------------------------ -• [SLOW TEST] [6.336 seconds] -[sig-storage] Projected secret -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:78 +• [SLOW TEST] [88.946 seconds] +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] +test/e2e/node/framework.go:23 + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected secret + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:07:04.9 - Jun 12 21:07:04.900: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:07:04.902 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:04.942 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:04.964 - [BeforeEach] [sig-storage] Projected secret + STEP: Creating a kubernetes client 07/27/23 01:49:04.449 + Jul 27 01:49:04.449: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename taint-multiple-pods 07/27/23 01:49:04.45 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:49:04.49 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:49:04.499 + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:78 - STEP: Creating projection with secret that has name projected-secret-test-map-09a09dea-31ec-46f3-880a-5359cc29ee2f 06/12/23 21:07:04.979 - STEP: Creating a pod to test consume secrets 06/12/23 21:07:05 - Jun 12 21:07:05.046: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e" in namespace "projected-2031" to be "Succeeded or Failed" - Jun 12 21:07:05.058: INFO: Pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.144482ms - Jun 12 21:07:07.067: INFO: Pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02161516s - Jun 12 21:07:09.073: INFO: Pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026838862s - Jun 12 21:07:11.073: INFO: Pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027444534s - STEP: Saw pod success 06/12/23 21:07:11.073 - Jun 12 21:07:11.074: INFO: Pod "pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e" satisfied condition "Succeeded or Failed" - Jun 12 21:07:11.083: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e container projected-secret-volume-test: - STEP: delete the pod 06/12/23 21:07:11.142 - Jun 12 21:07:11.184: INFO: Waiting for pod pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e to disappear - Jun 12 21:07:11.197: INFO: Pod pod-projected-secrets-a9a9e4db-270d-4abd-8e19-dfefbb78425e no longer exists - [AfterEach] [sig-storage] Projected secret + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/node/taints.go:383 + Jul 27 01:49:04.508: INFO: Waiting up to 1m0s for all nodes to be ready + Jul 27 01:50:04.688: INFO: Waiting for terminating namespaces to be deleted... + [It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 + Jul 27 01:50:04.713: INFO: Starting informer... + STEP: Starting pods... 07/27/23 01:50:04.713 + Jul 27 01:50:04.965: INFO: Pod1 is running on 10.245.128.19. Tainting Node + Jul 27 01:50:05.192: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-1749" to be "running" + Jul 27 01:50:05.202: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.06708ms + Jul 27 01:50:07.234: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 2.041390995s + Jul 27 01:50:07.234: INFO: Pod "taint-eviction-b1" satisfied condition "running" + Jul 27 01:50:07.234: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-1749" to be "running" + Jul 27 01:50:07.248: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 13.900149ms + Jul 27 01:50:07.248: INFO: Pod "taint-eviction-b2" satisfied condition "running" + Jul 27 01:50:07.248: INFO: Pod2 is running on 10.245.128.19. Tainting Node + STEP: Trying to apply a taint on the Node 07/27/23 01:50:07.248 + STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 07/27/23 01:50:07.309 + STEP: Waiting for Pod1 and Pod2 to be deleted 07/27/23 01:50:07.364 + Jul 27 01:50:13.244: INFO: Noticed Pod "taint-eviction-b1" gets evicted. + Jul 27 01:50:33.316: INFO: Noticed Pod "taint-eviction-b2" gets evicted. 
+ STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 07/27/23 01:50:33.351 + [AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 21:07:11.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected secret + Jul 27 01:50:33.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "projected-2031" for this suite. 06/12/23 21:07:11.215 + STEP: Destroying namespace "taint-multiple-pods-1749" for this suite. 07/27/23 01:50:33.37 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Container Runtime blackbox test when starting a container that exits - should run with the expected status [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:52 -[BeforeEach] [sig-node] Container Runtime +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:127 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:07:11.238 -Jun 12 21:07:11.239: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-runtime 06/12/23 21:07:11.241 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:11.29 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:11.299 -[BeforeEach] [sig-node] Container Runtime +STEP: Creating a kubernetes client 07/27/23 01:50:33.398 +Jul 27 01:50:33.398: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 01:50:33.398 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:33.449 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:33.459 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[It] should run with the expected status [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:52 -STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 06/12/23 21:07:11.343 -STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' 06/12/23 21:07:29.736 -STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 06/12/23 21:07:29.749 -STEP: Container 'terminate-cmd-rpa': should get the expected 'State' 06/12/23 21:07:29.771 -STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] 06/12/23 21:07:29.771 -STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 06/12/23 21:07:29.825 -STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' 06/12/23 21:07:33.886 -STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' 
condition 06/12/23 21:07:35.915 -STEP: Container 'terminate-cmd-rpof': should get the expected 'State' 06/12/23 21:07:35.931 -STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 06/12/23 21:07:35.932 -STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 06/12/23 21:07:35.988 -STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 06/12/23 21:07:37.011 -STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 06/12/23 21:07:42.079 -STEP: Container 'terminate-cmd-rpn': should get the expected 'State' 06/12/23 21:07:42.097 -STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 06/12/23 21:07:42.097 -[AfterEach] [sig-node] Container Runtime +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:127 +STEP: Creating a pod to test emptydir 0644 on tmpfs 07/27/23 01:50:33.47 +Jul 27 01:50:33.506: INFO: Waiting up to 5m0s for pod "pod-be9a4117-2de3-4955-9c35-aa5818342ebc" in namespace "emptydir-6737" to be "Succeeded or Failed" +Jul 27 01:50:33.515: INFO: Pod "pod-be9a4117-2de3-4955-9c35-aa5818342ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.713902ms +Jul 27 01:50:35.524: INFO: Pod "pod-be9a4117-2de3-4955-9c35-aa5818342ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018596789s +Jul 27 01:50:37.527: INFO: Pod "pod-be9a4117-2de3-4955-9c35-aa5818342ebc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021765737s +STEP: Saw pod success 07/27/23 01:50:37.527 +Jul 27 01:50:37.528: INFO: Pod "pod-be9a4117-2de3-4955-9c35-aa5818342ebc" satisfied condition "Succeeded or Failed" +Jul 27 01:50:37.537: INFO: Trying to get logs from node 10.245.128.19 pod pod-be9a4117-2de3-4955-9c35-aa5818342ebc container test-container: +STEP: delete the pod 07/27/23 01:50:37.58 +Jul 27 01:50:37.611: INFO: Waiting for pod pod-be9a4117-2de3-4955-9c35-aa5818342ebc to disappear +Jul 27 01:50:37.621: INFO: Pod pod-be9a4117-2de3-4955-9c35-aa5818342ebc no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:07:42.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Container Runtime +Jul 27 01:50:37.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Container Runtime +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Container Runtime +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "container-runtime-7578" for this suite. 06/12/23 21:07:42.181 +STEP: Destroying namespace "emptydir-6737" for this suite. 
07/27/23 01:50:37.633 ------------------------------ -• [SLOW TEST] [30.957 seconds] -[sig-node] Container Runtime -test/e2e/common/node/framework.go:23 - blackbox test - test/e2e/common/node/runtime.go:44 - when starting a container that exits - test/e2e/common/node/runtime.go:45 - should run with the expected status [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:52 +• [4.258 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:127 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Container Runtime + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:07:11.238 - Jun 12 21:07:11.239: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-runtime 06/12/23 21:07:11.241 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:11.29 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:11.299 - [BeforeEach] [sig-node] Container Runtime + STEP: Creating a kubernetes client 07/27/23 01:50:33.398 + Jul 27 01:50:33.398: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 01:50:33.398 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:33.449 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:33.459 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [It] should run with the expected status [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:52 - STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 06/12/23 21:07:11.343 - STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' 06/12/23 21:07:29.736 - STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 06/12/23 21:07:29.749 - STEP: Container 'terminate-cmd-rpa': should get the expected 'State' 06/12/23 21:07:29.771 - STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] 06/12/23 21:07:29.771 - STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 06/12/23 21:07:29.825 - STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' 06/12/23 21:07:33.886 - STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition 06/12/23 21:07:35.915 - STEP: Container 'terminate-cmd-rpof': should get the expected 'State' 06/12/23 21:07:35.931 - STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 06/12/23 21:07:35.932 - STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 06/12/23 21:07:35.988 - STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 06/12/23 21:07:37.011 - STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 06/12/23 21:07:42.079 - STEP: Container 'terminate-cmd-rpn': should get the expected 'State' 06/12/23 21:07:42.097 - STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 06/12/23 21:07:42.097 - [AfterEach] [sig-node] Container Runtime + [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:127 + STEP: Creating a pod to test emptydir 0644 on tmpfs 
07/27/23 01:50:33.47 + Jul 27 01:50:33.506: INFO: Waiting up to 5m0s for pod "pod-be9a4117-2de3-4955-9c35-aa5818342ebc" in namespace "emptydir-6737" to be "Succeeded or Failed" + Jul 27 01:50:33.515: INFO: Pod "pod-be9a4117-2de3-4955-9c35-aa5818342ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.713902ms + Jul 27 01:50:35.524: INFO: Pod "pod-be9a4117-2de3-4955-9c35-aa5818342ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018596789s + Jul 27 01:50:37.527: INFO: Pod "pod-be9a4117-2de3-4955-9c35-aa5818342ebc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021765737s + STEP: Saw pod success 07/27/23 01:50:37.527 + Jul 27 01:50:37.528: INFO: Pod "pod-be9a4117-2de3-4955-9c35-aa5818342ebc" satisfied condition "Succeeded or Failed" + Jul 27 01:50:37.537: INFO: Trying to get logs from node 10.245.128.19 pod pod-be9a4117-2de3-4955-9c35-aa5818342ebc container test-container: + STEP: delete the pod 07/27/23 01:50:37.58 + Jul 27 01:50:37.611: INFO: Waiting for pod pod-be9a4117-2de3-4955-9c35-aa5818342ebc to disappear + Jul 27 01:50:37.621: INFO: Pod pod-be9a4117-2de3-4955-9c35-aa5818342ebc no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:07:42.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Container Runtime + Jul 27 01:50:37.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Container Runtime + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Container Runtime + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "container-runtime-7578" for this suite. 06/12/23 21:07:42.181 + STEP: Destroying namespace "emptydir-6737" for this suite. 
07/27/23 01:50:37.633 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSS +SSSSSSSS ------------------------------ -[sig-scheduling] SchedulerPredicates [Serial] - validates that NodeSelector is respected if not matching [Conformance] - test/e2e/scheduling/predicates.go:443 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 +[BeforeEach] [sig-api-machinery] Watchers set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:07:42.203 -Jun 12 21:07:42.203: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sched-pred 06/12/23 21:07:42.205 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:42.243 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:42.254 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] +STEP: Creating a kubernetes client 07/27/23 01:50:37.656 +Jul 27 01:50:37.657: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename watch 07/27/23 01:50:37.657 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:37.737 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:37.747 +[BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:97 -Jun 12 21:07:42.267: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready -Jun 12 21:07:42.308: INFO: Waiting for terminating namespaces to be deleted... 
-Jun 12 21:07:42.328: INFO: -Logging pods the apiserver thinks is on node 10.138.75.112 before test -Jun 12 21:07:42.394: INFO: calico-node-b9sdb from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.395: INFO: Container calico-node ready: true, restart count 0 -Jun 12 21:07:42.395: INFO: calico-typha-74d94b74f5-dc6td from calico-system started at 2023-06-12 17:53:09 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.395: INFO: Container calico-typha ready: true, restart count 0 -Jun 12 21:07:42.395: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.395: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 -Jun 12 21:07:42.395: INFO: ibm-keepalived-watcher-5hc6v from kube-system started at 2023-06-12 17:40:13 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.395: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 21:07:42.395: INFO: ibm-master-proxy-static-10.138.75.112 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.395: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 21:07:42.395: INFO: Container pause ready: true, restart count 0 -Jun 12 21:07:42.396: INFO: ibmcloud-block-storage-driver-5zqmj from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.396: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 21:07:42.396: INFO: tuned-phslc from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.396: INFO: Container tuned ready: true, restart count 0 -Jun 12 21:07:42.396: INFO: csi-snapshot-controller-7f8879b9ff-p456r from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.396: INFO: Container snapshot-controller ready: true, restart count 0 -Jun 12 21:07:42.396: INFO: csi-snapshot-webhook-7bd9594b6d-bp5dr from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.396: INFO: Container webhook ready: true, restart count 0 -Jun 12 21:07:42.396: INFO: console-5bf97c7949-w5sn5 from openshift-console started at 2023-06-12 18:01:02 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.396: INFO: Container console ready: true, restart count 0 -Jun 12 21:07:42.396: INFO: downloads-8b57f44bb-55ss5 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.396: INFO: Container download-server ready: true, restart count 0 -Jun 12 21:07:42.396: INFO: dns-default-hpnqj from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.397: INFO: Container dns ready: true, restart count 0 -Jun 12 21:07:42.397: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.397: INFO: node-resolver-5st6j from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.397: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 21:07:42.397: INFO: image-registry-6c79bcf5c4-p7ss4 from openshift-image-registry started at 2023-06-12 18:00:30 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.397: INFO: 
Container registry ready: true, restart count 0 -Jun 12 21:07:42.397: INFO: node-ca-qm7sb from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.397: INFO: Container node-ca ready: true, restart count 0 -Jun 12 21:07:42.397: INFO: ingress-canary-5qpcw from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.397: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 21:07:42.397: INFO: router-default-7d454f944c-62qgz from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.397: INFO: Container router ready: true, restart count 0 -Jun 12 21:07:42.397: INFO: openshift-kube-proxy-b9xs9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.397: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 21:07:42.398: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.398: INFO: migrator-cfb6c8f7c-vx2tr from openshift-kube-storage-version-migrator started at 2023-06-12 17:55:28 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.398: INFO: Container migrator ready: true, restart count 0 -Jun 12 21:07:42.398: INFO: community-operators-fm8cx from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.398: INFO: Container registry-server ready: true, restart count 0 -Jun 12 21:07:42.398: INFO: redhat-operators-pr47d from openshift-marketplace started at 2023-06-12 19:05:36 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.398: INFO: Container registry-server ready: true, restart count 0 -Jun 12 21:07:42.398: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-06-12 18:01:06 +0000 UTC (6 container statuses recorded) -Jun 12 21:07:42.398: INFO: Container alertmanager ready: true, restart count 1 -Jun 12 21:07:42.398: INFO: Container alertmanager-proxy ready: true, restart count 0 -Jun 12 21:07:42.398: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 21:07:42.398: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.398: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 -Jun 12 21:07:42.398: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 21:07:42.399: INFO: kube-state-metrics-6ccfb58dc4-rgnnh from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) -Jun 12 21:07:42.399: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 -Jun 12 21:07:42.399: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 -Jun 12 21:07:42.399: INFO: Container kube-state-metrics ready: true, restart count 0 -Jun 12 21:07:42.399: INFO: node-exporter-r799t from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.399: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.399: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 21:07:42.399: INFO: prometheus-adapter-7c58c77c58-xfd55 from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.399: INFO: Container prometheus-adapter ready: true, restart count 0 -Jun 12 21:07:42.399: INFO: prometheus-k8s-0 from openshift-monitoring started at 2023-06-12 18:01:32 +0000 UTC (6 container statuses 
recorded) -Jun 12 21:07:42.399: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: Container prometheus ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: Container prometheus-proxy ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: Container thanos-sidecar ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: prometheus-operator-admission-webhook-5d679565bb-66wnf from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.400: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: thanos-querier-6497df7b9-djrsc from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) -Jun 12 21:07:42.400: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: Container oauth-proxy ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 21:07:42.400: INFO: Container thanos-query ready: true, restart count 0 -Jun 12 21:07:42.401: INFO: multus-additional-cni-plugins-zpr6c from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.401: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 21:07:42.401: INFO: multus-q452d from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.401: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 21:07:42.401: INFO: network-metrics-daemon-vx56x from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.401: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.401: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 21:07:42.401: INFO: network-check-target-lfvfw from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.401: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 21:07:42.401: INFO: network-operator-5498bf7dc6-xv8r2 from openshift-network-operator started at 2023-06-12 17:47:21 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.401: INFO: Container network-operator ready: true, restart count 1 -Jun 12 21:07:42.401: INFO: collect-profiles-28110060-nx85j from openshift-operator-lifecycle-manager started at 2023-06-12 21:00:00 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.401: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 21:07:42.401: INFO: packageserver-7f8bd8c95b-fgfhz from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.402: INFO: Container packageserver ready: true, restart count 0 -Jun 12 21:07:42.402: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.402: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 
21:07:42.402: INFO: Container systemd-logs ready: true, restart count 0 -Jun 12 21:07:42.402: INFO: -Logging pods the apiserver thinks is on node 10.138.75.116 before test -Jun 12 21:07:42.463: INFO: calico-kube-controllers-58944988fc-kv6pq from calico-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.487: INFO: Container calico-kube-controllers ready: true, restart count 0 -Jun 12 21:07:42.487: INFO: calico-node-nhd4m from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.487: INFO: Container calico-node ready: true, restart count 0 -Jun 12 21:07:42.487: INFO: ibm-file-plugin-5f8cc7b66-hc7b9 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.487: INFO: Container ibm-file-plugin-container ready: true, restart count 0 -Jun 12 21:07:42.487: INFO: ibm-keepalived-watcher-zp24l from kube-system started at 2023-06-12 17:40:01 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.487: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 21:07:42.487: INFO: ibm-master-proxy-static-10.138.75.116 from kube-system started at 2023-06-12 17:39:58 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.487: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 21:07:42.488: INFO: Container pause ready: true, restart count 0 -Jun 12 21:07:42.488: INFO: ibm-storage-watcher-f4db746b4-mlm76 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.488: INFO: Container ibm-storage-watcher-container ready: true, restart count 0 -Jun 12 21:07:42.488: INFO: ibmcloud-block-storage-driver-4wh25 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.488: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 21:07:42.488: INFO: ibmcloud-block-storage-plugin-5f85bc9665-2ltn5 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.488: INFO: Container ibmcloud-block-storage-plugin-container ready: true, restart count 0 -Jun 12 21:07:42.488: INFO: vpn-7bc564c55c-htxd6 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.488: INFO: Container vpn ready: true, restart count 0 -Jun 12 21:07:42.488: INFO: cluster-node-tuning-operator-5f6cff5c99-z22gd from openshift-cluster-node-tuning-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.488: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 -Jun 12 21:07:42.489: INFO: tuned-44pqh from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.489: INFO: Container tuned ready: true, restart count 0 -Jun 12 21:07:42.489: INFO: cluster-samples-operator-597884bb5d-bv9cn from openshift-cluster-samples-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.489: INFO: Container cluster-samples-operator ready: true, restart count 0 -Jun 12 21:07:42.489: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 -Jun 12 21:07:42.489: INFO: cluster-storage-operator-75bb97486-7xrgf from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.489: INFO: Container 
cluster-storage-operator ready: true, restart count 1 -Jun 12 21:07:42.489: INFO: csi-snapshot-controller-operator-69df8b995f-flpdz from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.489: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 -Jun 12 21:07:42.489: INFO: console-operator-747447cc44-5hk9p from openshift-console-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.490: INFO: Container console-operator ready: true, restart count 1 -Jun 12 21:07:42.490: INFO: Container conversion-webhook-server ready: true, restart count 2 -Jun 12 21:07:42.490: INFO: console-5bf97c7949-22prk from openshift-console started at 2023-06-12 18:01:30 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.490: INFO: Container console ready: true, restart count 0 -Jun 12 21:07:42.490: INFO: dns-operator-65c495d75-cd4fc from openshift-dns-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.490: INFO: Container dns-operator ready: true, restart count 0 -Jun 12 21:07:42.493: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.493: INFO: dns-default-cw4pt from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.493: INFO: Container dns ready: true, restart count 0 -Jun 12 21:07:42.493: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.494: INFO: node-resolver-8mss5 from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.494: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 21:07:42.494: INFO: cluster-image-registry-operator-f9c46b94f-swtmm from openshift-image-registry started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.494: INFO: Container cluster-image-registry-operator ready: true, restart count 0 -Jun 12 21:07:42.494: INFO: node-ca-5cs7d from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.494: INFO: Container node-ca ready: true, restart count 0 -Jun 12 21:07:42.495: INFO: registry-pvc-permissions-j28ls from openshift-image-registry started at 2023-06-12 18:00:38 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.495: INFO: Container pvc-permissions ready: false, restart count 0 -Jun 12 21:07:42.495: INFO: ingress-canary-9xbwx from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.495: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 21:07:42.495: INFO: ingress-operator-57d9f78b9c-59cl8 from openshift-ingress-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.495: INFO: Container ingress-operator ready: true, restart count 0 -Jun 12 21:07:42.496: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.496: INFO: insights-operator-7dfcfbc664-j8swm from openshift-insights started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.496: INFO: Container insights-operator ready: true, restart count 1 -Jun 12 21:07:42.496: INFO: openshift-kube-proxy-5hl4f from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.496: INFO: Container kube-proxy ready: true, restart count 0 -Jun 
12 21:07:42.504: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.505: INFO: kube-storage-version-migrator-operator-689b97b878-cqw2l from openshift-kube-storage-version-migrator-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.505: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 -Jun 12 21:07:42.505: INFO: marketplace-operator-769ddf547d-mm52g from openshift-marketplace started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.505: INFO: Container marketplace-operator ready: true, restart count 0 -Jun 12 21:07:42.505: INFO: cluster-monitoring-operator-7df766d4db-cnq44 from openshift-monitoring started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.505: INFO: Container cluster-monitoring-operator ready: true, restart count 0 -Jun 12 21:07:42.505: INFO: node-exporter-s9sgk from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.505: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.506: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 21:07:42.506: INFO: multus-additional-cni-plugins-rsr27 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.506: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 21:07:42.506: INFO: multus-admission-controller-5894dd7875-bfbwp from openshift-multus started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.506: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.506: INFO: Container multus-admission-controller ready: true, restart count 0 -Jun 12 21:07:42.506: INFO: multus-ln9rr from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.506: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 21:07:42.506: INFO: network-metrics-daemon-75s49 from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.506: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.506: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 21:07:42.507: INFO: network-check-source-7f6b75fdb6-8882l from openshift-network-diagnostics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.507: INFO: Container check-endpoints ready: true, restart count 0 -Jun 12 21:07:42.507: INFO: network-check-target-kjfll from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.507: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 21:07:42.507: INFO: catalog-operator-874999f59-jggx9 from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.507: INFO: Container catalog-operator ready: true, restart count 0 -Jun 12 21:07:42.507: INFO: collect-profiles-28110030-fzbkf from openshift-operator-lifecycle-manager started at 2023-06-12 20:30:00 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.507: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 21:07:42.507: INFO: collect-profiles-28110045-fcbk8 from openshift-operator-lifecycle-manager started at 2023-06-12 20:45:00 +0000 UTC 
(1 container statuses recorded) -Jun 12 21:07:42.507: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 21:07:42.507: INFO: olm-operator-bdbf4b468-8vj6q from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.508: INFO: Container olm-operator ready: true, restart count 0 -Jun 12 21:07:42.508: INFO: package-server-manager-5b897cb946-pz59r from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.508: INFO: Container package-server-manager ready: true, restart count 0 -Jun 12 21:07:42.508: INFO: packageserver-7f8bd8c95b-2zntg from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.508: INFO: Container packageserver ready: true, restart count 0 -Jun 12 21:07:42.508: INFO: metrics-78c5579cb7-nlfqq from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.508: INFO: Container metrics ready: true, restart count 3 -Jun 12 21:07:42.508: INFO: push-gateway-85f6799b47-cgtdt from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.508: INFO: Container push-gateway ready: true, restart count 0 -Jun 12 21:07:42.508: INFO: service-ca-operator-86d6dcd567-8jc2t from openshift-service-ca-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.508: INFO: Container service-ca-operator ready: true, restart count 1 -Jun 12 21:07:42.508: INFO: service-ca-7c79786568-vhxsl from openshift-service-ca started at 2023-06-12 17:55:23 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.509: INFO: Container service-ca-controller ready: true, restart count 0 -Jun 12 21:07:42.509: INFO: sonobuoy-e2e-job-9876719f3d1644bf from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.509: INFO: Container e2e ready: true, restart count 0 -Jun 12 21:07:42.509: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 21:07:42.509: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-nbw64 from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.509: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 21:07:42.509: INFO: Container systemd-logs ready: true, restart count 0 -Jun 12 21:07:42.509: INFO: tigera-operator-5b48cf996b-z7p6p from tigera-operator started at 2023-06-12 17:40:11 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.509: INFO: Container tigera-operator ready: true, restart count 7 -Jun 12 21:07:42.509: INFO: -Logging pods the apiserver thinks is on node 10.138.75.70 before test -Jun 12 21:07:42.588: INFO: calico-node-v822j from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.589: INFO: Container calico-node ready: true, restart count 0 -Jun 12 21:07:42.589: INFO: calico-typha-74d94b74f5-db4zz from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.589: INFO: Container calico-typha ready: true, restart count 0 -Jun 12 21:07:42.589: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-9m2wx from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.589: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart 
count 0 -Jun 12 21:07:42.589: INFO: ibm-keepalived-watcher-nl9l9 from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.589: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 21:07:42.590: INFO: ibm-master-proxy-static-10.138.75.70 from kube-system started at 2023-06-12 17:40:17 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.590: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 21:07:42.590: INFO: Container pause ready: true, restart count 0 -Jun 12 21:07:42.590: INFO: ibmcloud-block-storage-driver-jl8fq from kube-system started at 2023-06-12 17:40:28 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.590: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 21:07:42.590: INFO: tuned-dmlsr from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.590: INFO: Container tuned ready: true, restart count 0 -Jun 12 21:07:42.590: INFO: csi-snapshot-controller-7f8879b9ff-lhkmp from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.591: INFO: Container snapshot-controller ready: true, restart count 0 -Jun 12 21:07:42.591: INFO: csi-snapshot-webhook-7bd9594b6d-9f476 from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.591: INFO: Container webhook ready: true, restart count 0 -Jun 12 21:07:42.591: INFO: downloads-8b57f44bb-f7r76 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.591: INFO: Container download-server ready: true, restart count 0 -Jun 12 21:07:42.591: INFO: dns-default-5d2sp from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.591: INFO: Container dns ready: true, restart count 0 -Jun 12 21:07:42.592: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.592: INFO: node-resolver-lf2bx from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.592: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 21:07:42.592: INFO: node-ca-mwjbd from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.592: INFO: Container node-ca ready: true, restart count 0 -Jun 12 21:07:42.592: INFO: ingress-canary-xwc5b from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.592: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 21:07:42.593: INFO: router-default-7d454f944c-s862z from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.593: INFO: Container router ready: true, restart count 0 -Jun 12 21:07:42.593: INFO: openshift-kube-proxy-rckf9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.593: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 21:07:42.593: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.593: INFO: certified-operators-9jhxm from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.594: INFO: Container registry-server ready: true, 
restart count 0 -Jun 12 21:07:42.595: INFO: redhat-marketplace-n9tcn from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.595: INFO: Container registry-server ready: true, restart count 0 -Jun 12 21:07:42.595: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-06-12 18:01:41 +0000 UTC (6 container statuses recorded) -Jun 12 21:07:42.595: INFO: Container alertmanager ready: true, restart count 1 -Jun 12 21:07:42.595: INFO: Container alertmanager-proxy ready: true, restart count 0 -Jun 12 21:07:42.595: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 21:07:42.595: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.596: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 -Jun 12 21:07:42.596: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 21:07:42.596: INFO: node-exporter-5vgf6 from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.596: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.596: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 21:07:42.596: INFO: openshift-state-metrics-7d7f8b4cf8-6kdhb from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) -Jun 12 21:07:42.596: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 -Jun 12 21:07:42.596: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 -Jun 12 21:07:42.596: INFO: Container openshift-state-metrics ready: true, restart count 0 -Jun 12 21:07:42.596: INFO: prometheus-adapter-7c58c77c58-2j47k from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.596: INFO: Container prometheus-adapter ready: true, restart count 0 -Jun 12 21:07:42.597: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-06-12 18:01:12 +0000 UTC (6 container statuses recorded) -Jun 12 21:07:42.597: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 21:07:42.597: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.597: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 -Jun 12 21:07:42.597: INFO: Container prometheus ready: true, restart count 0 -Jun 12 21:07:42.597: INFO: Container prometheus-proxy ready: true, restart count 0 -Jun 12 21:07:42.597: INFO: Container thanos-sidecar ready: true, restart count 0 -Jun 12 21:07:42.597: INFO: prometheus-operator-5d978dbf9c-zvq6g from openshift-monitoring started at 2023-06-12 17:59:19 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.597: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.597: INFO: Container prometheus-operator ready: true, restart count 0 -Jun 12 21:07:42.597: INFO: prometheus-operator-admission-webhook-5d679565bb-sj42p from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.597: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 -Jun 12 21:07:42.598: INFO: telemeter-client-55c7b57d84-vh47h from openshift-monitoring started at 2023-06-12 17:59:37 +0000 UTC (3 container statuses recorded) -Jun 12 21:07:42.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.598: INFO: Container reload ready: true, restart count 0 -Jun 12 21:07:42.598: INFO: Container telemeter-client ready: true, 
restart count 0 -Jun 12 21:07:42.598: INFO: thanos-querier-6497df7b9-pg2z9 from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) -Jun 12 21:07:42.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.598: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 -Jun 12 21:07:42.598: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 -Jun 12 21:07:42.598: INFO: Container oauth-proxy ready: true, restart count 0 -Jun 12 21:07:42.598: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 21:07:42.598: INFO: Container thanos-query ready: true, restart count 0 -Jun 12 21:07:42.599: INFO: multus-26bfs from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.599: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 21:07:42.599: INFO: multus-additional-cni-plugins-9vls6 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.599: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 21:07:42.599: INFO: multus-admission-controller-5894dd7875-xldt9 from openshift-multus started at 2023-06-12 17:58:44 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.599: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.599: INFO: Container multus-admission-controller ready: true, restart count 0 -Jun 12 21:07:42.599: INFO: network-metrics-daemon-g9zzs from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.599: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:07:42.599: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 21:07:42.599: INFO: network-check-target-l622r from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.600: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 21:07:42.600: INFO: sonobuoy from sonobuoy started at 2023-06-12 20:38:54 +0000 UTC (1 container statuses recorded) -Jun 12 21:07:42.600: INFO: Container kube-sonobuoy ready: true, restart count 0 -Jun 12 21:07:42.600: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-4dn8s from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) -Jun 12 21:07:42.600: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 21:07:42.600: INFO: Container systemd-logs ready: true, restart count 0 -[It] validates that NodeSelector is respected if not matching [Conformance] - test/e2e/scheduling/predicates.go:443 -STEP: Trying to schedule Pod with nonempty NodeSelector. 06/12/23 21:07:42.6 -STEP: Considering event: -Type = [Warning], Name = [restricted-pod.1768057582e93169], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 
06/12/23 21:07:42.728 -[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 +STEP: creating a watch on configmaps 07/27/23 01:50:37.757 +STEP: creating a new configmap 07/27/23 01:50:37.761 +STEP: modifying the configmap once 07/27/23 01:50:37.781 +STEP: closing the watch once it receives two notifications 07/27/23 01:50:37.82 +Jul 27 01:50:37.821: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1532 ce20e077-1c50-49a3-be3b-7184afc5586e 79069 0 2023-07-27 01:50:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-07-27 01:50:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 01:50:37.821: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1532 ce20e077-1c50-49a3-be3b-7184afc5586e 79075 0 2023-07-27 01:50:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-07-27 01:50:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed 07/27/23 01:50:37.821 +STEP: creating a new watch on configmaps from the last resource version observed by the first watch 07/27/23 01:50:37.863 +STEP: deleting the configmap 07/27/23 01:50:37.868 +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed 07/27/23 01:50:37.889 +Jul 27 01:50:37.889: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1532 ce20e077-1c50-49a3-be3b-7184afc5586e 79080 0 2023-07-27 01:50:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-07-27 01:50:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 01:50:37.890: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1532 ce20e077-1c50-49a3-be3b-7184afc5586e 79084 0 2023-07-27 01:50:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-07-27 01:50:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers test/e2e/framework/node/init/init.go:32 -Jun 12 21:07:43.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:88 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +Jul 27 01:50:37.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +[DeferCleanup (Each)] [sig-api-machinery] Watchers dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +[DeferCleanup (Each)] 
[sig-api-machinery] Watchers tear down framework | framework.go:193 -STEP: Destroying namespace "sched-pred-6768" for this suite. 06/12/23 21:07:43.744 +STEP: Destroying namespace "watch-1532" for this suite. 07/27/23 01:50:37.901 ------------------------------ -• [1.554 seconds] -[sig-scheduling] SchedulerPredicates [Serial] -test/e2e/scheduling/framework.go:40 - validates that NodeSelector is respected if not matching [Conformance] - test/e2e/scheduling/predicates.go:443 +• [0.268 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + [BeforeEach] [sig-api-machinery] Watchers set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:07:42.203 - Jun 12 21:07:42.203: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sched-pred 06/12/23 21:07:42.205 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:42.243 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:42.254 - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + STEP: Creating a kubernetes client 07/27/23 01:50:37.656 + Jul 27 01:50:37.657: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename watch 07/27/23 01:50:37.657 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:37.737 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:37.747 + [BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:97 - Jun 12 21:07:42.267: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready - Jun 12 21:07:42.308: INFO: Waiting for terminating namespaces to be deleted... 
- Jun 12 21:07:42.328: INFO: - Logging pods the apiserver thinks is on node 10.138.75.112 before test - Jun 12 21:07:42.394: INFO: calico-node-b9sdb from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.395: INFO: Container calico-node ready: true, restart count 0 - Jun 12 21:07:42.395: INFO: calico-typha-74d94b74f5-dc6td from calico-system started at 2023-06-12 17:53:09 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.395: INFO: Container calico-typha ready: true, restart count 0 - Jun 12 21:07:42.395: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.395: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 - Jun 12 21:07:42.395: INFO: ibm-keepalived-watcher-5hc6v from kube-system started at 2023-06-12 17:40:13 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.395: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 21:07:42.395: INFO: ibm-master-proxy-static-10.138.75.112 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.395: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 21:07:42.395: INFO: Container pause ready: true, restart count 0 - Jun 12 21:07:42.396: INFO: ibmcloud-block-storage-driver-5zqmj from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.396: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 21:07:42.396: INFO: tuned-phslc from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.396: INFO: Container tuned ready: true, restart count 0 - Jun 12 21:07:42.396: INFO: csi-snapshot-controller-7f8879b9ff-p456r from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.396: INFO: Container snapshot-controller ready: true, restart count 0 - Jun 12 21:07:42.396: INFO: csi-snapshot-webhook-7bd9594b6d-bp5dr from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.396: INFO: Container webhook ready: true, restart count 0 - Jun 12 21:07:42.396: INFO: console-5bf97c7949-w5sn5 from openshift-console started at 2023-06-12 18:01:02 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.396: INFO: Container console ready: true, restart count 0 - Jun 12 21:07:42.396: INFO: downloads-8b57f44bb-55ss5 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.396: INFO: Container download-server ready: true, restart count 0 - Jun 12 21:07:42.396: INFO: dns-default-hpnqj from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.397: INFO: Container dns ready: true, restart count 0 - Jun 12 21:07:42.397: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.397: INFO: node-resolver-5st6j from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.397: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 21:07:42.397: INFO: image-registry-6c79bcf5c4-p7ss4 from openshift-image-registry started at 2023-06-12 18:00:30 +0000 UTC (1 container statuses recorded) 
- Jun 12 21:07:42.397: INFO: Container registry ready: true, restart count 0 - Jun 12 21:07:42.397: INFO: node-ca-qm7sb from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.397: INFO: Container node-ca ready: true, restart count 0 - Jun 12 21:07:42.397: INFO: ingress-canary-5qpcw from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.397: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 21:07:42.397: INFO: router-default-7d454f944c-62qgz from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.397: INFO: Container router ready: true, restart count 0 - Jun 12 21:07:42.397: INFO: openshift-kube-proxy-b9xs9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.397: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 21:07:42.398: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.398: INFO: migrator-cfb6c8f7c-vx2tr from openshift-kube-storage-version-migrator started at 2023-06-12 17:55:28 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.398: INFO: Container migrator ready: true, restart count 0 - Jun 12 21:07:42.398: INFO: community-operators-fm8cx from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.398: INFO: Container registry-server ready: true, restart count 0 - Jun 12 21:07:42.398: INFO: redhat-operators-pr47d from openshift-marketplace started at 2023-06-12 19:05:36 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.398: INFO: Container registry-server ready: true, restart count 0 - Jun 12 21:07:42.398: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-06-12 18:01:06 +0000 UTC (6 container statuses recorded) - Jun 12 21:07:42.398: INFO: Container alertmanager ready: true, restart count 1 - Jun 12 21:07:42.398: INFO: Container alertmanager-proxy ready: true, restart count 0 - Jun 12 21:07:42.398: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 21:07:42.398: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.398: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 - Jun 12 21:07:42.398: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 21:07:42.399: INFO: kube-state-metrics-6ccfb58dc4-rgnnh from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) - Jun 12 21:07:42.399: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 - Jun 12 21:07:42.399: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 - Jun 12 21:07:42.399: INFO: Container kube-state-metrics ready: true, restart count 0 - Jun 12 21:07:42.399: INFO: node-exporter-r799t from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.399: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.399: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 21:07:42.399: INFO: prometheus-adapter-7c58c77c58-xfd55 from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.399: INFO: Container prometheus-adapter ready: true, restart count 0 - Jun 12 21:07:42.399: INFO: prometheus-k8s-0 from openshift-monitoring started 
at 2023-06-12 18:01:32 +0000 UTC (6 container statuses recorded) - Jun 12 21:07:42.399: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: Container prometheus ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: Container prometheus-proxy ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: Container thanos-sidecar ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: prometheus-operator-admission-webhook-5d679565bb-66wnf from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.400: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: thanos-querier-6497df7b9-djrsc from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) - Jun 12 21:07:42.400: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: Container oauth-proxy ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 21:07:42.400: INFO: Container thanos-query ready: true, restart count 0 - Jun 12 21:07:42.401: INFO: multus-additional-cni-plugins-zpr6c from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.401: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 21:07:42.401: INFO: multus-q452d from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.401: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 21:07:42.401: INFO: network-metrics-daemon-vx56x from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.401: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.401: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 21:07:42.401: INFO: network-check-target-lfvfw from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.401: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 21:07:42.401: INFO: network-operator-5498bf7dc6-xv8r2 from openshift-network-operator started at 2023-06-12 17:47:21 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.401: INFO: Container network-operator ready: true, restart count 1 - Jun 12 21:07:42.401: INFO: collect-profiles-28110060-nx85j from openshift-operator-lifecycle-manager started at 2023-06-12 21:00:00 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.401: INFO: Container collect-profiles ready: false, restart count 0 - Jun 12 21:07:42.401: INFO: packageserver-7f8bd8c95b-fgfhz from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.402: INFO: Container packageserver ready: true, restart count 0 - Jun 12 21:07:42.402: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) - Jun 12 
21:07:42.402: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 21:07:42.402: INFO: Container systemd-logs ready: true, restart count 0 - Jun 12 21:07:42.402: INFO: - Logging pods the apiserver thinks is on node 10.138.75.116 before test - Jun 12 21:07:42.463: INFO: calico-kube-controllers-58944988fc-kv6pq from calico-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.487: INFO: Container calico-kube-controllers ready: true, restart count 0 - Jun 12 21:07:42.487: INFO: calico-node-nhd4m from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.487: INFO: Container calico-node ready: true, restart count 0 - Jun 12 21:07:42.487: INFO: ibm-file-plugin-5f8cc7b66-hc7b9 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.487: INFO: Container ibm-file-plugin-container ready: true, restart count 0 - Jun 12 21:07:42.487: INFO: ibm-keepalived-watcher-zp24l from kube-system started at 2023-06-12 17:40:01 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.487: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 21:07:42.487: INFO: ibm-master-proxy-static-10.138.75.116 from kube-system started at 2023-06-12 17:39:58 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.487: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 21:07:42.488: INFO: Container pause ready: true, restart count 0 - Jun 12 21:07:42.488: INFO: ibm-storage-watcher-f4db746b4-mlm76 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.488: INFO: Container ibm-storage-watcher-container ready: true, restart count 0 - Jun 12 21:07:42.488: INFO: ibmcloud-block-storage-driver-4wh25 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.488: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 21:07:42.488: INFO: ibmcloud-block-storage-plugin-5f85bc9665-2ltn5 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.488: INFO: Container ibmcloud-block-storage-plugin-container ready: true, restart count 0 - Jun 12 21:07:42.488: INFO: vpn-7bc564c55c-htxd6 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.488: INFO: Container vpn ready: true, restart count 0 - Jun 12 21:07:42.488: INFO: cluster-node-tuning-operator-5f6cff5c99-z22gd from openshift-cluster-node-tuning-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.488: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 - Jun 12 21:07:42.489: INFO: tuned-44pqh from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.489: INFO: Container tuned ready: true, restart count 0 - Jun 12 21:07:42.489: INFO: cluster-samples-operator-597884bb5d-bv9cn from openshift-cluster-samples-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.489: INFO: Container cluster-samples-operator ready: true, restart count 0 - Jun 12 21:07:42.489: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 - Jun 12 21:07:42.489: INFO: cluster-storage-operator-75bb97486-7xrgf from openshift-cluster-storage-operator started at 
2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.489: INFO: Container cluster-storage-operator ready: true, restart count 1 - Jun 12 21:07:42.489: INFO: csi-snapshot-controller-operator-69df8b995f-flpdz from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.489: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 - Jun 12 21:07:42.489: INFO: console-operator-747447cc44-5hk9p from openshift-console-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.490: INFO: Container console-operator ready: true, restart count 1 - Jun 12 21:07:42.490: INFO: Container conversion-webhook-server ready: true, restart count 2 - Jun 12 21:07:42.490: INFO: console-5bf97c7949-22prk from openshift-console started at 2023-06-12 18:01:30 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.490: INFO: Container console ready: true, restart count 0 - Jun 12 21:07:42.490: INFO: dns-operator-65c495d75-cd4fc from openshift-dns-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.490: INFO: Container dns-operator ready: true, restart count 0 - Jun 12 21:07:42.493: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.493: INFO: dns-default-cw4pt from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.493: INFO: Container dns ready: true, restart count 0 - Jun 12 21:07:42.493: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.494: INFO: node-resolver-8mss5 from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.494: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 21:07:42.494: INFO: cluster-image-registry-operator-f9c46b94f-swtmm from openshift-image-registry started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.494: INFO: Container cluster-image-registry-operator ready: true, restart count 0 - Jun 12 21:07:42.494: INFO: node-ca-5cs7d from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.494: INFO: Container node-ca ready: true, restart count 0 - Jun 12 21:07:42.495: INFO: registry-pvc-permissions-j28ls from openshift-image-registry started at 2023-06-12 18:00:38 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.495: INFO: Container pvc-permissions ready: false, restart count 0 - Jun 12 21:07:42.495: INFO: ingress-canary-9xbwx from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.495: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 21:07:42.495: INFO: ingress-operator-57d9f78b9c-59cl8 from openshift-ingress-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.495: INFO: Container ingress-operator ready: true, restart count 0 - Jun 12 21:07:42.496: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.496: INFO: insights-operator-7dfcfbc664-j8swm from openshift-insights started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.496: INFO: Container insights-operator ready: true, restart count 1 - Jun 12 21:07:42.496: INFO: openshift-kube-proxy-5hl4f from openshift-kube-proxy started at 2023-06-12 
17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.496: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 21:07:42.504: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.505: INFO: kube-storage-version-migrator-operator-689b97b878-cqw2l from openshift-kube-storage-version-migrator-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.505: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 - Jun 12 21:07:42.505: INFO: marketplace-operator-769ddf547d-mm52g from openshift-marketplace started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.505: INFO: Container marketplace-operator ready: true, restart count 0 - Jun 12 21:07:42.505: INFO: cluster-monitoring-operator-7df766d4db-cnq44 from openshift-monitoring started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.505: INFO: Container cluster-monitoring-operator ready: true, restart count 0 - Jun 12 21:07:42.505: INFO: node-exporter-s9sgk from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.505: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.506: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 21:07:42.506: INFO: multus-additional-cni-plugins-rsr27 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.506: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 21:07:42.506: INFO: multus-admission-controller-5894dd7875-bfbwp from openshift-multus started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.506: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.506: INFO: Container multus-admission-controller ready: true, restart count 0 - Jun 12 21:07:42.506: INFO: multus-ln9rr from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.506: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 21:07:42.506: INFO: network-metrics-daemon-75s49 from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.506: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.506: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 21:07:42.507: INFO: network-check-source-7f6b75fdb6-8882l from openshift-network-diagnostics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.507: INFO: Container check-endpoints ready: true, restart count 0 - Jun 12 21:07:42.507: INFO: network-check-target-kjfll from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.507: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 21:07:42.507: INFO: catalog-operator-874999f59-jggx9 from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.507: INFO: Container catalog-operator ready: true, restart count 0 - Jun 12 21:07:42.507: INFO: collect-profiles-28110030-fzbkf from openshift-operator-lifecycle-manager started at 2023-06-12 20:30:00 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.507: INFO: Container collect-profiles ready: 
false, restart count 0 - Jun 12 21:07:42.507: INFO: collect-profiles-28110045-fcbk8 from openshift-operator-lifecycle-manager started at 2023-06-12 20:45:00 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.507: INFO: Container collect-profiles ready: false, restart count 0 - Jun 12 21:07:42.507: INFO: olm-operator-bdbf4b468-8vj6q from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.508: INFO: Container olm-operator ready: true, restart count 0 - Jun 12 21:07:42.508: INFO: package-server-manager-5b897cb946-pz59r from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.508: INFO: Container package-server-manager ready: true, restart count 0 - Jun 12 21:07:42.508: INFO: packageserver-7f8bd8c95b-2zntg from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.508: INFO: Container packageserver ready: true, restart count 0 - Jun 12 21:07:42.508: INFO: metrics-78c5579cb7-nlfqq from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.508: INFO: Container metrics ready: true, restart count 3 - Jun 12 21:07:42.508: INFO: push-gateway-85f6799b47-cgtdt from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.508: INFO: Container push-gateway ready: true, restart count 0 - Jun 12 21:07:42.508: INFO: service-ca-operator-86d6dcd567-8jc2t from openshift-service-ca-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.508: INFO: Container service-ca-operator ready: true, restart count 1 - Jun 12 21:07:42.508: INFO: service-ca-7c79786568-vhxsl from openshift-service-ca started at 2023-06-12 17:55:23 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.509: INFO: Container service-ca-controller ready: true, restart count 0 - Jun 12 21:07:42.509: INFO: sonobuoy-e2e-job-9876719f3d1644bf from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.509: INFO: Container e2e ready: true, restart count 0 - Jun 12 21:07:42.509: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 21:07:42.509: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-nbw64 from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.509: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 21:07:42.509: INFO: Container systemd-logs ready: true, restart count 0 - Jun 12 21:07:42.509: INFO: tigera-operator-5b48cf996b-z7p6p from tigera-operator started at 2023-06-12 17:40:11 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.509: INFO: Container tigera-operator ready: true, restart count 7 - Jun 12 21:07:42.509: INFO: - Logging pods the apiserver thinks is on node 10.138.75.70 before test - Jun 12 21:07:42.588: INFO: calico-node-v822j from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.589: INFO: Container calico-node ready: true, restart count 0 - Jun 12 21:07:42.589: INFO: calico-typha-74d94b74f5-db4zz from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.589: INFO: Container calico-typha ready: true, restart count 0 - Jun 12 21:07:42.589: INFO: 
ibm-cloud-provider-ip-168-1-198-197-75947fc545-9m2wx from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.589: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 - Jun 12 21:07:42.589: INFO: ibm-keepalived-watcher-nl9l9 from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.589: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 21:07:42.590: INFO: ibm-master-proxy-static-10.138.75.70 from kube-system started at 2023-06-12 17:40:17 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.590: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 21:07:42.590: INFO: Container pause ready: true, restart count 0 - Jun 12 21:07:42.590: INFO: ibmcloud-block-storage-driver-jl8fq from kube-system started at 2023-06-12 17:40:28 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.590: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 21:07:42.590: INFO: tuned-dmlsr from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.590: INFO: Container tuned ready: true, restart count 0 - Jun 12 21:07:42.590: INFO: csi-snapshot-controller-7f8879b9ff-lhkmp from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.591: INFO: Container snapshot-controller ready: true, restart count 0 - Jun 12 21:07:42.591: INFO: csi-snapshot-webhook-7bd9594b6d-9f476 from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.591: INFO: Container webhook ready: true, restart count 0 - Jun 12 21:07:42.591: INFO: downloads-8b57f44bb-f7r76 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.591: INFO: Container download-server ready: true, restart count 0 - Jun 12 21:07:42.591: INFO: dns-default-5d2sp from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.591: INFO: Container dns ready: true, restart count 0 - Jun 12 21:07:42.592: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.592: INFO: node-resolver-lf2bx from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.592: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 21:07:42.592: INFO: node-ca-mwjbd from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.592: INFO: Container node-ca ready: true, restart count 0 - Jun 12 21:07:42.592: INFO: ingress-canary-xwc5b from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.592: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 21:07:42.593: INFO: router-default-7d454f944c-s862z from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.593: INFO: Container router ready: true, restart count 0 - Jun 12 21:07:42.593: INFO: openshift-kube-proxy-rckf9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.593: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 21:07:42.593: INFO: Container 
kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.593: INFO: certified-operators-9jhxm from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.594: INFO: Container registry-server ready: true, restart count 0 - Jun 12 21:07:42.595: INFO: redhat-marketplace-n9tcn from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.595: INFO: Container registry-server ready: true, restart count 0 - Jun 12 21:07:42.595: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-06-12 18:01:41 +0000 UTC (6 container statuses recorded) - Jun 12 21:07:42.595: INFO: Container alertmanager ready: true, restart count 1 - Jun 12 21:07:42.595: INFO: Container alertmanager-proxy ready: true, restart count 0 - Jun 12 21:07:42.595: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 21:07:42.595: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.596: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 - Jun 12 21:07:42.596: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 21:07:42.596: INFO: node-exporter-5vgf6 from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.596: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.596: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 21:07:42.596: INFO: openshift-state-metrics-7d7f8b4cf8-6kdhb from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) - Jun 12 21:07:42.596: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 - Jun 12 21:07:42.596: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 - Jun 12 21:07:42.596: INFO: Container openshift-state-metrics ready: true, restart count 0 - Jun 12 21:07:42.596: INFO: prometheus-adapter-7c58c77c58-2j47k from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.596: INFO: Container prometheus-adapter ready: true, restart count 0 - Jun 12 21:07:42.597: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-06-12 18:01:12 +0000 UTC (6 container statuses recorded) - Jun 12 21:07:42.597: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 21:07:42.597: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.597: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 - Jun 12 21:07:42.597: INFO: Container prometheus ready: true, restart count 0 - Jun 12 21:07:42.597: INFO: Container prometheus-proxy ready: true, restart count 0 - Jun 12 21:07:42.597: INFO: Container thanos-sidecar ready: true, restart count 0 - Jun 12 21:07:42.597: INFO: prometheus-operator-5d978dbf9c-zvq6g from openshift-monitoring started at 2023-06-12 17:59:19 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.597: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.597: INFO: Container prometheus-operator ready: true, restart count 0 - Jun 12 21:07:42.597: INFO: prometheus-operator-admission-webhook-5d679565bb-sj42p from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.597: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 - Jun 12 21:07:42.598: INFO: telemeter-client-55c7b57d84-vh47h from openshift-monitoring 
started at 2023-06-12 17:59:37 +0000 UTC (3 container statuses recorded) - Jun 12 21:07:42.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.598: INFO: Container reload ready: true, restart count 0 - Jun 12 21:07:42.598: INFO: Container telemeter-client ready: true, restart count 0 - Jun 12 21:07:42.598: INFO: thanos-querier-6497df7b9-pg2z9 from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) - Jun 12 21:07:42.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.598: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 - Jun 12 21:07:42.598: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 - Jun 12 21:07:42.598: INFO: Container oauth-proxy ready: true, restart count 0 - Jun 12 21:07:42.598: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 21:07:42.598: INFO: Container thanos-query ready: true, restart count 0 - Jun 12 21:07:42.599: INFO: multus-26bfs from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.599: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 21:07:42.599: INFO: multus-additional-cni-plugins-9vls6 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.599: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 21:07:42.599: INFO: multus-admission-controller-5894dd7875-xldt9 from openshift-multus started at 2023-06-12 17:58:44 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.599: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.599: INFO: Container multus-admission-controller ready: true, restart count 0 - Jun 12 21:07:42.599: INFO: network-metrics-daemon-g9zzs from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.599: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:07:42.599: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 21:07:42.599: INFO: network-check-target-l622r from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.600: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 21:07:42.600: INFO: sonobuoy from sonobuoy started at 2023-06-12 20:38:54 +0000 UTC (1 container statuses recorded) - Jun 12 21:07:42.600: INFO: Container kube-sonobuoy ready: true, restart count 0 - Jun 12 21:07:42.600: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-4dn8s from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) - Jun 12 21:07:42.600: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 21:07:42.600: INFO: Container systemd-logs ready: true, restart count 0 - [It] validates that NodeSelector is respected if not matching [Conformance] - test/e2e/scheduling/predicates.go:443 - STEP: Trying to schedule Pod with nonempty NodeSelector. 06/12/23 21:07:42.6 - STEP: Considering event: - Type = [Warning], Name = [restricted-pod.1768057582e93169], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 
06/12/23 21:07:42.728 - [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 + STEP: creating a watch on configmaps 07/27/23 01:50:37.757 + STEP: creating a new configmap 07/27/23 01:50:37.761 + STEP: modifying the configmap once 07/27/23 01:50:37.781 + STEP: closing the watch once it receives two notifications 07/27/23 01:50:37.82 + Jul 27 01:50:37.821: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1532 ce20e077-1c50-49a3-be3b-7184afc5586e 79069 0 2023-07-27 01:50:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-07-27 01:50:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 01:50:37.821: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1532 ce20e077-1c50-49a3-be3b-7184afc5586e 79075 0 2023-07-27 01:50:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-07-27 01:50:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying the configmap a second time, while the watch is closed 07/27/23 01:50:37.821 + STEP: creating a new watch on configmaps from the last resource version observed by the first watch 07/27/23 01:50:37.863 + STEP: deleting the configmap 07/27/23 01:50:37.868 + STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed 07/27/23 01:50:37.889 + Jul 27 01:50:37.889: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1532 ce20e077-1c50-49a3-be3b-7184afc5586e 79080 0 2023-07-27 01:50:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-07-27 01:50:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 01:50:37.890: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1532 ce20e077-1c50-49a3-be3b-7184afc5586e 79084 0 2023-07-27 01:50:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-07-27 01:50:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers test/e2e/framework/node/init/init.go:32 - Jun 12 21:07:43.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:88 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + Jul 27 01:50:37.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + [DeferCleanup (Each)] [sig-api-machinery] Watchers dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] 
+ [DeferCleanup (Each)] [sig-api-machinery] Watchers tear down framework | framework.go:193 - STEP: Destroying namespace "sched-pred-6768" for this suite. 06/12/23 21:07:43.744 + STEP: Destroying namespace "watch-1532" for this suite. 07/27/23 01:50:37.901 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should deny crd creation [Conformance] - test/e2e/apimachinery/webhook.go:308 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-network] Services + should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:07:43.763 -Jun 12 21:07:43.763: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 21:07:43.765 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:43.808 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:43.832 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:50:37.926 +Jul 27 01:50:37.926: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 01:50:37.927 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:37.976 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:37.986 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 21:07:43.887 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:07:44.877 -STEP: Deploying the webhook pod 06/12/23 21:07:44.912 -STEP: Wait for the deployment to be ready 06/12/23 21:07:44.937 -Jun 12 21:07:44.975: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 21:07:47.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 7, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 7, 45, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 7, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 7, 44, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 21:07:49.048 -STEP: Verifying the service has paired with the endpoint 06/12/23 21:07:49.085 -Jun 12 21:07:50.087: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should deny crd creation [Conformance] - test/e2e/apimachinery/webhook.go:308 -STEP: Registering the crd webhook via the 
AdmissionRegistration API 06/12/23 21:07:50.095 -STEP: Creating a custom resource definition that should be denied by the webhook 06/12/23 21:07:50.153 -Jun 12 21:07:50.153: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 +STEP: fetching services 07/27/23 01:50:37.996 +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:07:50.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 01:50:38.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-1403" for this suite. 06/12/23 21:07:50.344 -STEP: Destroying namespace "webhook-1403-markers" for this suite. 06/12/23 21:07:50.365 +STEP: Destroying namespace "services-7131" for this suite. 07/27/23 01:50:38.039 ------------------------------ -• [SLOW TEST] [6.621 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - should deny crd creation [Conformance] - test/e2e/apimachinery/webhook.go:308 +• [0.172 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:07:43.763 - Jun 12 21:07:43.763: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 21:07:43.765 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:43.808 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:43.832 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:50:37.926 + Jul 27 01:50:37.926: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 01:50:37.927 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:37.976 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:37.986 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 21:07:43.887 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:07:44.877 - STEP: Deploying the webhook pod 06/12/23 21:07:44.912 - 
STEP: Wait for the deployment to be ready 06/12/23 21:07:44.937 - Jun 12 21:07:44.975: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 21:07:47.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 7, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 7, 45, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 7, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 7, 44, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 21:07:49.048 - STEP: Verifying the service has paired with the endpoint 06/12/23 21:07:49.085 - Jun 12 21:07:50.087: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should deny crd creation [Conformance] - test/e2e/apimachinery/webhook.go:308 - STEP: Registering the crd webhook via the AdmissionRegistration API 06/12/23 21:07:50.095 - STEP: Creating a custom resource definition that should be denied by the webhook 06/12/23 21:07:50.153 - Jun 12 21:07:50.153: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 + STEP: fetching services 07/27/23 01:50:37.996 + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:07:50.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 01:50:38.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-1403" for this suite. 06/12/23 21:07:50.344 - STEP: Destroying namespace "webhook-1403-markers" for this suite. 06/12/23 21:07:50.365 + STEP: Destroying namespace "services-7131" for this suite. 
07/27/23 01:50:38.039 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Sysctls [LinuxOnly] [NodeConformance] - should support sysctls [MinimumKubeletVersion:1.21] [Conformance] - test/e2e/common/node/sysctl.go:77 -[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:37 -[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:426 +[BeforeEach] [sig-apps] Job set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:07:50.386 -Jun 12 21:07:50.386: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sysctl 06/12/23 21:07:50.388 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:50.455 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:50.47 -[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] +STEP: Creating a kubernetes client 07/27/23 01:50:38.099 +Jul 27 01:50:38.099: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename job 07/27/23 01:50:38.1 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:38.172 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:38.182 +[BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:67 -[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] - test/e2e/common/node/sysctl.go:77 -STEP: Creating a pod with the kernel.shm_rmid_forced sysctl 06/12/23 21:07:50.484 -STEP: Watching for error events or started pod 06/12/23 21:07:50.536 -STEP: Waiting for pod completion 06/12/23 21:07:54.546 -Jun 12 21:07:54.546: INFO: Waiting up to 3m0s for pod "sysctl-7663b72e-00f9-48a7-8bf7-be9fdbb2752d" in namespace "sysctl-4014" to be "completed" -Jun 12 21:07:54.555: INFO: Pod "sysctl-7663b72e-00f9-48a7-8bf7-be9fdbb2752d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.090066ms -Jun 12 21:07:56.567: INFO: Pod "sysctl-7663b72e-00f9-48a7-8bf7-be9fdbb2752d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.021661139s -Jun 12 21:07:56.568: INFO: Pod "sysctl-7663b72e-00f9-48a7-8bf7-be9fdbb2752d" satisfied condition "completed" -STEP: Checking that the pod succeeded 06/12/23 21:07:56.595 -STEP: Getting logs from the pod 06/12/23 21:07:56.596 -STEP: Checking that the sysctl is actually updated 06/12/23 21:07:56.618 -[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:426 +STEP: Creating a job 07/27/23 01:50:38.191 +W0727 01:50:38.211057 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Ensuring job reaches completions 07/27/23 01:50:38.211 +[AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 -Jun 12 21:07:56.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] +Jul 27 01:50:52.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] +[DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] +[DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 -STEP: Destroying namespace "sysctl-4014" for this suite. 06/12/23 21:07:56.633 +STEP: Destroying namespace "job-942" for this suite. 
07/27/23 01:50:52.241 ------------------------------ -• [SLOW TEST] [6.261 seconds] -[sig-node] Sysctls [LinuxOnly] [NodeConformance] -test/e2e/common/node/framework.go:23 - should support sysctls [MinimumKubeletVersion:1.21] [Conformance] - test/e2e/common/node/sysctl.go:77 +• [SLOW TEST] [14.165 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:426 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:37 - [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + [BeforeEach] [sig-apps] Job set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:07:50.386 - Jun 12 21:07:50.386: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sysctl 06/12/23 21:07:50.388 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:50.455 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:50.47 - [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + STEP: Creating a kubernetes client 07/27/23 01:50:38.099 + Jul 27 01:50:38.099: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename job 07/27/23 01:50:38.1 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:38.172 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:38.182 + [BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:67 - [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] - test/e2e/common/node/sysctl.go:77 - STEP: Creating a pod with the kernel.shm_rmid_forced sysctl 06/12/23 21:07:50.484 - STEP: Watching for error events or started pod 06/12/23 21:07:50.536 - STEP: Waiting for pod completion 06/12/23 21:07:54.546 - Jun 12 21:07:54.546: INFO: Waiting up to 3m0s for pod "sysctl-7663b72e-00f9-48a7-8bf7-be9fdbb2752d" in namespace "sysctl-4014" to be "completed" - Jun 12 21:07:54.555: INFO: Pod "sysctl-7663b72e-00f9-48a7-8bf7-be9fdbb2752d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.090066ms - Jun 12 21:07:56.567: INFO: Pod "sysctl-7663b72e-00f9-48a7-8bf7-be9fdbb2752d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.021661139s - Jun 12 21:07:56.568: INFO: Pod "sysctl-7663b72e-00f9-48a7-8bf7-be9fdbb2752d" satisfied condition "completed" - STEP: Checking that the pod succeeded 06/12/23 21:07:56.595 - STEP: Getting logs from the pod 06/12/23 21:07:56.596 - STEP: Checking that the sysctl is actually updated 06/12/23 21:07:56.618 - [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:426 + STEP: Creating a job 07/27/23 01:50:38.191 + W0727 01:50:38.211057 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Ensuring job reaches completions 07/27/23 01:50:38.211 + [AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 - Jun 12 21:07:56.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + Jul 27 01:50:52.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + [DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + [DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 - STEP: Destroying namespace "sysctl-4014" for this suite. 06/12/23 21:07:56.633 + STEP: Destroying namespace "job-942" for this suite. 
07/27/23 01:50:52.241 << End Captured GinkgoWriter Output ------------------------------ -[sig-instrumentation] Events - should manage the lifecycle of an event [Conformance] - test/e2e/instrumentation/core_events.go:57 -[BeforeEach] [sig-instrumentation] Events +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 +[BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:07:56.649 -Jun 12 21:07:56.649: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename events 06/12/23 21:07:56.664 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:56.712 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:56.729 -[BeforeEach] [sig-instrumentation] Events +STEP: Creating a kubernetes client 07/27/23 01:50:52.265 +Jul 27 01:50:52.265: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 01:50:52.266 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:52.318 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:52.328 +[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 -[It] should manage the lifecycle of an event [Conformance] - test/e2e/instrumentation/core_events.go:57 -STEP: creating a test event 06/12/23 21:07:56.739 -STEP: listing all events in all namespaces 06/12/23 21:07:56.755 -STEP: patching the test event 06/12/23 21:07:56.814 -STEP: fetching the test event 06/12/23 21:07:56.838 -STEP: updating the test event 06/12/23 21:07:56.848 -STEP: getting the test event 06/12/23 21:07:56.878 -STEP: deleting the test event 06/12/23 21:07:56.889 -STEP: listing all events in all namespaces 06/12/23 21:07:56.913 -[AfterEach] [sig-instrumentation] Events +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 +STEP: Creating a pod to test downward API volume plugin 07/27/23 01:50:52.339 +Jul 27 01:50:52.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2" in namespace "downward-api-1069" to be "Succeeded or Failed" +Jul 27 01:50:52.397: INFO: Pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.291428ms +Jul 27 01:50:54.407: INFO: Pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2": Phase="Running", Reason="", readiness=true. Elapsed: 2.024005951s +Jul 27 01:50:56.407: INFO: Pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2": Phase="Running", Reason="", readiness=false. Elapsed: 4.024332565s +Jul 27 01:50:58.411: INFO: Pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028206284s +STEP: Saw pod success 07/27/23 01:50:58.411 +Jul 27 01:50:58.411: INFO: Pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2" satisfied condition "Succeeded or Failed" +Jul 27 01:50:58.421: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2 container client-container: +STEP: delete the pod 07/27/23 01:50:58.441 +Jul 27 01:50:58.466: INFO: Waiting for pod downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2 to disappear +Jul 27 01:50:58.474: INFO: Pod downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2 no longer exists +[AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 21:07:56.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-instrumentation] Events +Jul 27 01:50:58.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-instrumentation] Events +[DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-instrumentation] Events +[DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "events-4883" for this suite. 06/12/23 21:07:56.98 +STEP: Destroying namespace "downward-api-1069" for this suite. 07/27/23 01:50:58.521 ------------------------------ -• [0.347 seconds] -[sig-instrumentation] Events -test/e2e/instrumentation/common/framework.go:23 - should manage the lifecycle of an event [Conformance] - test/e2e/instrumentation/core_events.go:57 +• [SLOW TEST] [6.280 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-instrumentation] Events + [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:07:56.649 - Jun 12 21:07:56.649: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename events 06/12/23 21:07:56.664 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:56.712 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:56.729 - [BeforeEach] [sig-instrumentation] Events + STEP: Creating a kubernetes client 07/27/23 01:50:52.265 + Jul 27 01:50:52.265: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 01:50:52.266 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:52.318 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:52.328 + [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 - [It] should manage the lifecycle of an event [Conformance] - test/e2e/instrumentation/core_events.go:57 - STEP: creating a test event 06/12/23 21:07:56.739 - STEP: listing all events in all namespaces 06/12/23 21:07:56.755 - STEP: patching the test event 06/12/23 21:07:56.814 - STEP: fetching the test event 06/12/23 21:07:56.838 - STEP: updating the test event 06/12/23 21:07:56.848 - STEP: getting the test event 06/12/23 21:07:56.878 - STEP: deleting the test event 06/12/23 21:07:56.889 - STEP: 
listing all events in all namespaces 06/12/23 21:07:56.913 - [AfterEach] [sig-instrumentation] Events + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 + STEP: Creating a pod to test downward API volume plugin 07/27/23 01:50:52.339 + Jul 27 01:50:52.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2" in namespace "downward-api-1069" to be "Succeeded or Failed" + Jul 27 01:50:52.397: INFO: Pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.291428ms + Jul 27 01:50:54.407: INFO: Pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2": Phase="Running", Reason="", readiness=true. Elapsed: 2.024005951s + Jul 27 01:50:56.407: INFO: Pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2": Phase="Running", Reason="", readiness=false. Elapsed: 4.024332565s + Jul 27 01:50:58.411: INFO: Pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028206284s + STEP: Saw pod success 07/27/23 01:50:58.411 + Jul 27 01:50:58.411: INFO: Pod "downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2" satisfied condition "Succeeded or Failed" + Jul 27 01:50:58.421: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2 container client-container: + STEP: delete the pod 07/27/23 01:50:58.441 + Jul 27 01:50:58.466: INFO: Waiting for pod downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2 to disappear + Jul 27 01:50:58.474: INFO: Pod downwardapi-volume-4a310970-5908-4cd5-9167-6d3c7b5908c2 no longer exists + [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 - Jun 12 21:07:56.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-instrumentation] Events + Jul 27 01:50:58.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-instrumentation] Events + [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-instrumentation] Events + [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 - STEP: Destroying namespace "events-4883" for this suite. 06/12/23 21:07:56.98 + STEP: Destroying namespace "downward-api-1069" for this suite. 
07/27/23 01:50:58.521 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] EmptyDir volumes - volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:157 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:124 +[BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:07:57.002 -Jun 12 21:07:57.002: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 21:07:57.007 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:57.054 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:57.07 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 01:50:58.555 +Jul 27 01:50:58.555: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 01:50:58.556 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:58.605 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:58.616 +[BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:157 -STEP: Creating a pod to test emptydir volume type on node default medium 06/12/23 21:07:57.082 -Jun 12 21:07:57.114: INFO: Waiting up to 5m0s for pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed" in namespace "emptydir-9092" to be "Succeeded or Failed" -Jun 12 21:07:57.131: INFO: Pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed": Phase="Pending", Reason="", readiness=false. Elapsed: 17.22265ms -Jun 12 21:07:59.151: INFO: Pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036687358s -Jun 12 21:08:01.163: INFO: Pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048714453s -Jun 12 21:08:03.156: INFO: Pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041832953s -STEP: Saw pod success 06/12/23 21:08:03.156 -Jun 12 21:08:03.157: INFO: Pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed" satisfied condition "Succeeded or Failed" -Jun 12 21:08:03.166: INFO: Trying to get logs from node 10.138.75.70 pod pod-484c1c3c-aefa-4531-95df-da556a0b95ed container test-container: -STEP: delete the pod 06/12/23 21:08:03.191 -Jun 12 21:08:03.225: INFO: Waiting for pod pod-484c1c3c-aefa-4531-95df-da556a0b95ed to disappear -Jun 12 21:08:03.234: INFO: Pod pod-484c1c3c-aefa-4531-95df-da556a0b95ed no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:124 +Jul 27 01:50:58.637: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node +STEP: Creating configMap with name configmap-test-upd-f8845aad-7a66-4a5a-b05a-749514ebb7f3 07/27/23 01:50:58.637 +STEP: Creating the pod 07/27/23 01:50:58.671 +Jul 27 01:50:58.698: INFO: Waiting up to 5m0s for pod "pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb" in namespace "configmap-2849" to be "running and ready" +Jul 27 01:50:58.711: INFO: Pod "pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.428901ms +Jul 27 01:50:58.711: INFO: The phase of Pod pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:51:00.720: INFO: Pod "pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb": Phase="Running", Reason="", readiness=true. Elapsed: 2.022167784s +Jul 27 01:51:00.720: INFO: The phase of Pod pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb is Running (Ready = true) +Jul 27 01:51:00.720: INFO: Pod "pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb" satisfied condition "running and ready" +STEP: Updating configmap configmap-test-upd-f8845aad-7a66-4a5a-b05a-749514ebb7f3 07/27/23 01:51:00.764 +STEP: waiting to observe update in volume 07/27/23 01:51:00.79 +[AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 21:08:03.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 01:51:02.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-9092" for this suite. 06/12/23 21:08:03.252 +STEP: Destroying namespace "configmap-2849" for this suite. 
07/27/23 01:51:02.873 ------------------------------ -• [SLOW TEST] [6.274 seconds] -[sig-storage] EmptyDir volumes +• [4.357 seconds] +[sig-storage] ConfigMap test/e2e/common/storage/framework.go:23 - volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:157 + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:124 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:07:57.002 - Jun 12 21:07:57.002: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 21:07:57.007 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:07:57.054 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:07:57.07 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 01:50:58.555 + Jul 27 01:50:58.555: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 01:50:58.556 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:50:58.605 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:50:58.616 + [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:157 - STEP: Creating a pod to test emptydir volume type on node default medium 06/12/23 21:07:57.082 - Jun 12 21:07:57.114: INFO: Waiting up to 5m0s for pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed" in namespace "emptydir-9092" to be "Succeeded or Failed" - Jun 12 21:07:57.131: INFO: Pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed": Phase="Pending", Reason="", readiness=false. Elapsed: 17.22265ms - Jun 12 21:07:59.151: INFO: Pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036687358s - Jun 12 21:08:01.163: INFO: Pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048714453s - Jun 12 21:08:03.156: INFO: Pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041832953s - STEP: Saw pod success 06/12/23 21:08:03.156 - Jun 12 21:08:03.157: INFO: Pod "pod-484c1c3c-aefa-4531-95df-da556a0b95ed" satisfied condition "Succeeded or Failed" - Jun 12 21:08:03.166: INFO: Trying to get logs from node 10.138.75.70 pod pod-484c1c3c-aefa-4531-95df-da556a0b95ed container test-container: - STEP: delete the pod 06/12/23 21:08:03.191 - Jun 12 21:08:03.225: INFO: Waiting for pod pod-484c1c3c-aefa-4531-95df-da556a0b95ed to disappear - Jun 12 21:08:03.234: INFO: Pod pod-484c1c3c-aefa-4531-95df-da556a0b95ed no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:124 + Jul 27 01:50:58.637: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node + STEP: Creating configMap with name configmap-test-upd-f8845aad-7a66-4a5a-b05a-749514ebb7f3 07/27/23 01:50:58.637 + STEP: Creating the pod 07/27/23 01:50:58.671 + Jul 27 01:50:58.698: INFO: Waiting up to 5m0s for pod "pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb" in namespace "configmap-2849" to be "running and ready" + Jul 27 01:50:58.711: INFO: Pod "pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.428901ms + Jul 27 01:50:58.711: INFO: The phase of Pod pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:51:00.720: INFO: Pod "pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb": Phase="Running", Reason="", readiness=true. Elapsed: 2.022167784s + Jul 27 01:51:00.720: INFO: The phase of Pod pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb is Running (Ready = true) + Jul 27 01:51:00.720: INFO: Pod "pod-configmaps-f07b03d3-3e2d-488c-b1ed-fdc3dc9c47fb" satisfied condition "running and ready" + STEP: Updating configmap configmap-test-upd-f8845aad-7a66-4a5a-b05a-749514ebb7f3 07/27/23 01:51:00.764 + STEP: waiting to observe update in volume 07/27/23 01:51:00.79 + [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 21:08:03.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 01:51:02.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-9092" for this suite. 06/12/23 21:08:03.252 + STEP: Destroying namespace "configmap-2849" for this suite. 
07/27/23 01:51:02.873 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] - custom resource defaulting for requests and from storage works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:269 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[sig-apps] Job + should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 +[BeforeEach] [sig-apps] Job set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:08:03.285 -Jun 12 21:08:03.285: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename custom-resource-definition 06/12/23 21:08:03.286 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:03.329 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:03.339 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:51:02.913 +Jul 27 01:51:02.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename job 07/27/23 01:51:02.914 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:02.971 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:02.983 +[BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 -[It] custom resource defaulting for requests and from storage works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:269 -Jun 12 21:08:03.351: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[It] should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 +STEP: Creating a suspended job 07/27/23 01:51:03.007 +STEP: Patching the Job 07/27/23 01:51:03.025 +STEP: Watching for Job to be patched 07/27/23 01:51:03.08 +Jul 27 01:51:03.104: INFO: Event ADDED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking:] +Jul 27 01:51:03.104: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking:] +Jul 27 01:51:03.104: INFO: Event MODIFIED found for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking:] +STEP: Updating the job 07/27/23 01:51:03.104 +STEP: Watching for Job to be updated 07/27/23 01:51:03.137 +Jul 27 01:51:03.142: INFO: Event MODIFIED found for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jul 27 01:51:03.142: INFO: Found Job annotations: map[string]string{"batch.kubernetes.io/job-tracking":"", "updated":"true"} +STEP: Listing all Jobs with LabelSelector 07/27/23 01:51:03.142 +Jul 27 01:51:03.158: INFO: Job: e2e-jcwn2 as labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] +STEP: Waiting for job to complete 07/27/23 01:51:03.158 +STEP: Delete a job collection with a labelselector 07/27/23 01:51:11.171 +STEP: 
Watching for Job to be deleted 07/27/23 01:51:11.247 +Jul 27 01:51:11.252: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jul 27 01:51:11.252: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jul 27 01:51:11.252: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jul 27 01:51:11.252: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jul 27 01:51:11.252: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jul 27 01:51:11.252: INFO: Event DELETED found for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +STEP: Relist jobs to confirm deletion 07/27/23 01:51:11.252 +[AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 -Jun 12 21:08:06.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +Jul 27 01:51:11.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 -STEP: Destroying namespace "custom-resource-definition-7427" for this suite. 06/12/23 21:08:06.684 +STEP: Destroying namespace "job-7228" for this suite. 
07/27/23 01:51:11.277 ------------------------------ -• [3.415 seconds] -[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - custom resource defaulting for requests and from storage works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:269 +• [SLOW TEST] [8.388 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [BeforeEach] [sig-apps] Job set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:08:03.285 - Jun 12 21:08:03.285: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename custom-resource-definition 06/12/23 21:08:03.286 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:03.329 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:03.339 - [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:51:02.913 + Jul 27 01:51:02.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename job 07/27/23 01:51:02.914 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:02.971 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:02.983 + [BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 - [It] custom resource defaulting for requests and from storage works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:269 - Jun 12 21:08:03.351: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [It] should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 + STEP: Creating a suspended job 07/27/23 01:51:03.007 + STEP: Patching the Job 07/27/23 01:51:03.025 + STEP: Watching for Job to be patched 07/27/23 01:51:03.08 + Jul 27 01:51:03.104: INFO: Event ADDED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking:] + Jul 27 01:51:03.104: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking:] + Jul 27 01:51:03.104: INFO: Event MODIFIED found for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking:] + STEP: Updating the job 07/27/23 01:51:03.104 + STEP: Watching for Job to be updated 07/27/23 01:51:03.137 + Jul 27 01:51:03.142: INFO: Event MODIFIED found for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jul 27 01:51:03.142: INFO: Found Job annotations: map[string]string{"batch.kubernetes.io/job-tracking":"", "updated":"true"} + STEP: Listing all Jobs with LabelSelector 07/27/23 01:51:03.142 + Jul 27 01:51:03.158: INFO: Job: e2e-jcwn2 as labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] + STEP: Waiting for job to complete 07/27/23 01:51:03.158 + STEP: Delete a job collection with a labelselector 07/27/23 
01:51:11.171 + STEP: Watching for Job to be deleted 07/27/23 01:51:11.247 + Jul 27 01:51:11.252: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jul 27 01:51:11.252: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jul 27 01:51:11.252: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jul 27 01:51:11.252: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jul 27 01:51:11.252: INFO: Event MODIFIED observed for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jul 27 01:51:11.252: INFO: Event DELETED found for Job e2e-jcwn2 in namespace job-7228 with labels: map[e2e-jcwn2:patched e2e-job-label:e2e-jcwn2] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + STEP: Relist jobs to confirm deletion 07/27/23 01:51:11.252 + [AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 - Jun 12 21:08:06.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + Jul 27 01:51:11.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 - STEP: Destroying namespace "custom-resource-definition-7427" for this suite. 06/12/23 21:08:06.684 + STEP: Destroying namespace "job-7228" for this suite. 
07/27/23 01:51:11.277 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSS ------------------------------ -[sig-network] Networking Granular Checks: Pods - should function for intra-pod communication: udp [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:93 -[BeforeEach] [sig-network] Networking +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 +[BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:08:06.713 -Jun 12 21:08:06.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pod-network-test 06/12/23 21:08:06.716 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:06.789 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:06.798 -[BeforeEach] [sig-network] Networking +STEP: Creating a kubernetes client 07/27/23 01:51:11.301 +Jul 27 01:51:11.301: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 01:51:11.302 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:11.346 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:11.355 +[BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:93 -STEP: Performing setup for networking test in namespace pod-network-test-1977 06/12/23 21:08:06.81 -STEP: creating a selector 06/12/23 21:08:06.81 -STEP: Creating the service pods in kubernetes 06/12/23 21:08:06.81 -Jun 12 21:08:06.811: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable -Jun 12 21:08:06.894: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-1977" to be "running and ready" -Jun 12 21:08:06.911: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.219732ms -Jun 12 21:08:06.911: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:08:08.922: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028248728s -Jun 12 21:08:08.922: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:08:10.921: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.027299865s -Jun 12 21:08:10.922: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:08:12.922: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.028175579s -Jun 12 21:08:12.922: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:08:14.922: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.027865475s -Jun 12 21:08:14.922: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:08:16.950: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.055651275s -Jun 12 21:08:16.950: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:08:18.951: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.056751436s -Jun 12 21:08:18.951: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:08:20.922: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.027853601s -Jun 12 21:08:20.922: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:08:22.934: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.040433336s -Jun 12 21:08:22.934: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:08:24.995: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.100978172s -Jun 12 21:08:24.995: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:08:26.923: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.028983205s -Jun 12 21:08:26.923: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:08:28.923: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.028954818s -Jun 12 21:08:28.923: INFO: The phase of Pod netserver-0 is Running (Ready = true) -Jun 12 21:08:28.923: INFO: Pod "netserver-0" satisfied condition "running and ready" -Jun 12 21:08:28.932: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-1977" to be "running and ready" -Jun 12 21:08:28.940: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.400539ms -Jun 12 21:08:28.940: INFO: The phase of Pod netserver-1 is Running (Ready = true) -Jun 12 21:08:28.940: INFO: Pod "netserver-1" satisfied condition "running and ready" -Jun 12 21:08:28.950: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-1977" to be "running and ready" -Jun 12 21:08:28.959: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 9.21251ms -Jun 12 21:08:28.959: INFO: The phase of Pod netserver-2 is Running (Ready = true) -Jun 12 21:08:28.959: INFO: Pod "netserver-2" satisfied condition "running and ready" -STEP: Creating test pods 06/12/23 21:08:28.969 -Jun 12 21:08:28.986: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-1977" to be "running" -Jun 12 21:08:28.996: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 9.874678ms -Jun 12 21:08:31.014: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027766409s -Jun 12 21:08:33.009: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.02276188s -Jun 12 21:08:33.009: INFO: Pod "test-container-pod" satisfied condition "running" -Jun 12 21:08:33.018: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 -Jun 12 21:08:33.018: INFO: Breadth first check of 172.30.161.122 on host 10.138.75.112... 
-Jun 12 21:08:33.030: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.30:9080/dial?request=hostname&protocol=udp&host=172.30.161.122&port=8081&tries=1'] Namespace:pod-network-test-1977 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:08:33.030: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:08:33.031: INFO: ExecWithOptions: Clientset creation -Jun 12 21:08:33.031: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1977/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.30%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.30.161.122%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) -Jun 12 21:08:33.371: INFO: Waiting for responses: map[] -Jun 12 21:08:33.371: INFO: reached 172.30.161.122 after 0/1 tries -Jun 12 21:08:33.371: INFO: Breadth first check of 172.30.185.132 on host 10.138.75.116... -Jun 12 21:08:33.411: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.30:9080/dial?request=hostname&protocol=udp&host=172.30.185.132&port=8081&tries=1'] Namespace:pod-network-test-1977 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:08:33.411: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:08:33.412: INFO: ExecWithOptions: Clientset creation -Jun 12 21:08:33.413: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1977/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.30%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.30.185.132%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) -Jun 12 21:08:33.599: INFO: Waiting for responses: map[] -Jun 12 21:08:33.599: INFO: reached 172.30.185.132 after 0/1 tries -Jun 12 21:08:33.599: INFO: Breadth first check of 172.30.224.13 on host 10.138.75.70... -Jun 12 21:08:33.622: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.30:9080/dial?request=hostname&protocol=udp&host=172.30.224.13&port=8081&tries=1'] Namespace:pod-network-test-1977 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:08:33.622: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:08:33.623: INFO: ExecWithOptions: Clientset creation -Jun 12 21:08:33.623: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1977/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.30%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.30.224.13%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) -Jun 12 21:08:33.789: INFO: Waiting for responses: map[] -Jun 12 21:08:33.789: INFO: reached 172.30.224.13 after 0/1 tries -Jun 12 21:08:33.789: INFO: Going to retry 0 out of 3 pods.... 
-[AfterEach] [sig-network] Networking +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 +STEP: Creating configMap with name configmap-test-volume-9bef21e6-e411-4ea9-9de7-ea08685b75d7 07/27/23 01:51:11.366 +STEP: Creating a pod to test consume configMaps 07/27/23 01:51:11.395 +Jul 27 01:51:11.422: INFO: Waiting up to 5m0s for pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883" in namespace "configmap-9402" to be "Succeeded or Failed" +Jul 27 01:51:11.441: INFO: Pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883": Phase="Pending", Reason="", readiness=false. Elapsed: 19.586253ms +Jul 27 01:51:13.454: INFO: Pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031840512s +Jul 27 01:51:15.450: INFO: Pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028593881s +Jul 27 01:51:17.457: INFO: Pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034884638s +STEP: Saw pod success 07/27/23 01:51:17.457 +Jul 27 01:51:17.457: INFO: Pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883" satisfied condition "Succeeded or Failed" +Jul 27 01:51:17.465: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883 container agnhost-container: +STEP: delete the pod 07/27/23 01:51:17.484 +Jul 27 01:51:17.505: INFO: Waiting for pod pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883 to disappear +Jul 27 01:51:17.517: INFO: Pod pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883 no longer exists +[AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 21:08:33.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Networking +Jul 27 01:51:17.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Networking +[DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Networking +[DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "pod-network-test-1977" for this suite. 06/12/23 21:08:33.803 +STEP: Destroying namespace "configmap-9402" for this suite. 
07/27/23 01:51:17.533 ------------------------------ -• [SLOW TEST] [27.102 seconds] -[sig-network] Networking -test/e2e/common/network/framework.go:23 - Granular Checks: Pods - test/e2e/common/network/networking.go:32 - should function for intra-pod communication: udp [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:93 +• [SLOW TEST] [6.258 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Networking + [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:08:06.713 - Jun 12 21:08:06.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pod-network-test 06/12/23 21:08:06.716 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:06.789 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:06.798 - [BeforeEach] [sig-network] Networking + STEP: Creating a kubernetes client 07/27/23 01:51:11.301 + Jul 27 01:51:11.301: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 01:51:11.302 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:11.346 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:11.355 + [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:93 - STEP: Performing setup for networking test in namespace pod-network-test-1977 06/12/23 21:08:06.81 - STEP: creating a selector 06/12/23 21:08:06.81 - STEP: Creating the service pods in kubernetes 06/12/23 21:08:06.81 - Jun 12 21:08:06.811: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable - Jun 12 21:08:06.894: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-1977" to be "running and ready" - Jun 12 21:08:06.911: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.219732ms - Jun 12 21:08:06.911: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:08:08.922: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028248728s - Jun 12 21:08:08.922: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:08:10.921: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.027299865s - Jun 12 21:08:10.922: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:08:12.922: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.028175579s - Jun 12 21:08:12.922: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:08:14.922: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.027865475s - Jun 12 21:08:14.922: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:08:16.950: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.055651275s - Jun 12 21:08:16.950: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:08:18.951: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.056751436s - Jun 12 21:08:18.951: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:08:20.922: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.027853601s - Jun 12 21:08:20.922: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:08:22.934: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.040433336s - Jun 12 21:08:22.934: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:08:24.995: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.100978172s - Jun 12 21:08:24.995: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:08:26.923: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.028983205s - Jun 12 21:08:26.923: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:08:28.923: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.028954818s - Jun 12 21:08:28.923: INFO: The phase of Pod netserver-0 is Running (Ready = true) - Jun 12 21:08:28.923: INFO: Pod "netserver-0" satisfied condition "running and ready" - Jun 12 21:08:28.932: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-1977" to be "running and ready" - Jun 12 21:08:28.940: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.400539ms - Jun 12 21:08:28.940: INFO: The phase of Pod netserver-1 is Running (Ready = true) - Jun 12 21:08:28.940: INFO: Pod "netserver-1" satisfied condition "running and ready" - Jun 12 21:08:28.950: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-1977" to be "running and ready" - Jun 12 21:08:28.959: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 9.21251ms - Jun 12 21:08:28.959: INFO: The phase of Pod netserver-2 is Running (Ready = true) - Jun 12 21:08:28.959: INFO: Pod "netserver-2" satisfied condition "running and ready" - STEP: Creating test pods 06/12/23 21:08:28.969 - Jun 12 21:08:28.986: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-1977" to be "running" - Jun 12 21:08:28.996: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 9.874678ms - Jun 12 21:08:31.014: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027766409s - Jun 12 21:08:33.009: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.02276188s - Jun 12 21:08:33.009: INFO: Pod "test-container-pod" satisfied condition "running" - Jun 12 21:08:33.018: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 - Jun 12 21:08:33.018: INFO: Breadth first check of 172.30.161.122 on host 10.138.75.112... 
- Jun 12 21:08:33.030: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.30:9080/dial?request=hostname&protocol=udp&host=172.30.161.122&port=8081&tries=1'] Namespace:pod-network-test-1977 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:08:33.030: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:08:33.031: INFO: ExecWithOptions: Clientset creation - Jun 12 21:08:33.031: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1977/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.30%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.30.161.122%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) - Jun 12 21:08:33.371: INFO: Waiting for responses: map[] - Jun 12 21:08:33.371: INFO: reached 172.30.161.122 after 0/1 tries - Jun 12 21:08:33.371: INFO: Breadth first check of 172.30.185.132 on host 10.138.75.116... - Jun 12 21:08:33.411: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.30:9080/dial?request=hostname&protocol=udp&host=172.30.185.132&port=8081&tries=1'] Namespace:pod-network-test-1977 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:08:33.411: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:08:33.412: INFO: ExecWithOptions: Clientset creation - Jun 12 21:08:33.413: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1977/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.30%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.30.185.132%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) - Jun 12 21:08:33.599: INFO: Waiting for responses: map[] - Jun 12 21:08:33.599: INFO: reached 172.30.185.132 after 0/1 tries - Jun 12 21:08:33.599: INFO: Breadth first check of 172.30.224.13 on host 10.138.75.70... - Jun 12 21:08:33.622: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.30:9080/dial?request=hostname&protocol=udp&host=172.30.224.13&port=8081&tries=1'] Namespace:pod-network-test-1977 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:08:33.622: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:08:33.623: INFO: ExecWithOptions: Clientset creation - Jun 12 21:08:33.623: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1977/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.30%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.30.224.13%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) - Jun 12 21:08:33.789: INFO: Waiting for responses: map[] - Jun 12 21:08:33.789: INFO: reached 172.30.224.13 after 0/1 tries - Jun 12 21:08:33.789: INFO: Going to retry 0 out of 3 pods.... 
- [AfterEach] [sig-network] Networking + [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 + STEP: Creating configMap with name configmap-test-volume-9bef21e6-e411-4ea9-9de7-ea08685b75d7 07/27/23 01:51:11.366 + STEP: Creating a pod to test consume configMaps 07/27/23 01:51:11.395 + Jul 27 01:51:11.422: INFO: Waiting up to 5m0s for pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883" in namespace "configmap-9402" to be "Succeeded or Failed" + Jul 27 01:51:11.441: INFO: Pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883": Phase="Pending", Reason="", readiness=false. Elapsed: 19.586253ms + Jul 27 01:51:13.454: INFO: Pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031840512s + Jul 27 01:51:15.450: INFO: Pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028593881s + Jul 27 01:51:17.457: INFO: Pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034884638s + STEP: Saw pod success 07/27/23 01:51:17.457 + Jul 27 01:51:17.457: INFO: Pod "pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883" satisfied condition "Succeeded or Failed" + Jul 27 01:51:17.465: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883 container agnhost-container: + STEP: delete the pod 07/27/23 01:51:17.484 + Jul 27 01:51:17.505: INFO: Waiting for pod pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883 to disappear + Jul 27 01:51:17.517: INFO: Pod pod-configmaps-982c94be-dc9a-44f0-86ce-1709690e2883 no longer exists + [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 21:08:33.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Networking + Jul 27 01:51:17.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Networking + [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Networking + [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "pod-network-test-1977" for this suite. 06/12/23 21:08:33.803 + STEP: Destroying namespace "configmap-9402" for this suite. 
07/27/23 01:51:17.533 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] ControllerRevision [Serial] - should manage the lifecycle of a ControllerRevision [Conformance] - test/e2e/apps/controller_revision.go:124 -[BeforeEach] [sig-apps] ControllerRevision [Serial] +[sig-node] PodTemplates + should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 +[BeforeEach] [sig-node] PodTemplates set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:08:33.819 -Jun 12 21:08:33.820: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename controllerrevisions 06/12/23 21:08:33.825 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:33.872 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:33.882 -[BeforeEach] [sig-apps] ControllerRevision [Serial] +STEP: Creating a kubernetes client 07/27/23 01:51:17.561 +Jul 27 01:51:17.562: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename podtemplate 07/27/23 01:51:17.563 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:17.606 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:17.616 +[BeforeEach] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] ControllerRevision [Serial] - test/e2e/apps/controller_revision.go:93 -[It] should manage the lifecycle of a ControllerRevision [Conformance] - test/e2e/apps/controller_revision.go:124 -STEP: Creating DaemonSet "e2e-8jl6b-daemon-set" 06/12/23 21:08:33.985 -STEP: Check that daemon pods launch on every node of the cluster. 
06/12/23 21:08:34.003 -Jun 12 21:08:34.025: INFO: Number of nodes with available pods controlled by daemonset e2e-8jl6b-daemon-set: 0 -Jun 12 21:08:34.026: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:08:35.053: INFO: Number of nodes with available pods controlled by daemonset e2e-8jl6b-daemon-set: 0 -Jun 12 21:08:35.053: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:08:36.094: INFO: Number of nodes with available pods controlled by daemonset e2e-8jl6b-daemon-set: 0 -Jun 12 21:08:36.094: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:08:37.049: INFO: Number of nodes with available pods controlled by daemonset e2e-8jl6b-daemon-set: 3 -Jun 12 21:08:37.049: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset e2e-8jl6b-daemon-set -STEP: Confirm DaemonSet "e2e-8jl6b-daemon-set" successfully created with "daemonset-name=e2e-8jl6b-daemon-set" label 06/12/23 21:08:37.06 -STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-8jl6b-daemon-set" 06/12/23 21:08:37.079 -Jun 12 21:08:37.098: INFO: Located ControllerRevision: "e2e-8jl6b-daemon-set-75965fc679" -STEP: Patching ControllerRevision "e2e-8jl6b-daemon-set-75965fc679" 06/12/23 21:08:37.109 -Jun 12 21:08:37.127: INFO: e2e-8jl6b-daemon-set-75965fc679 has been patched -STEP: Create a new ControllerRevision 06/12/23 21:08:37.127 -Jun 12 21:08:37.143: INFO: Created ControllerRevision: e2e-8jl6b-daemon-set-7467879c59 -STEP: Confirm that there are two ControllerRevisions 06/12/23 21:08:37.143 -Jun 12 21:08:37.143: INFO: Requesting list of ControllerRevisions to confirm quantity -Jun 12 21:08:37.155: INFO: Found 2 ControllerRevisions -STEP: Deleting ControllerRevision "e2e-8jl6b-daemon-set-75965fc679" 06/12/23 21:08:37.155 -STEP: Confirm that there is only one ControllerRevision 06/12/23 21:08:37.177 -Jun 12 21:08:37.177: INFO: Requesting list of ControllerRevisions to confirm quantity -Jun 12 21:08:37.188: INFO: Found 1 ControllerRevisions -STEP: Updating ControllerRevision "e2e-8jl6b-daemon-set-7467879c59" 06/12/23 21:08:37.199 -Jun 12 21:08:37.228: INFO: e2e-8jl6b-daemon-set-7467879c59 has been updated -STEP: Generate another ControllerRevision by patching the Daemonset 06/12/23 21:08:37.228 -W0612 21:08:37.251497 23 warnings.go:70] unknown field "updateStrategy" -STEP: Confirm that there are two ControllerRevisions 06/12/23 21:08:37.251 -Jun 12 21:08:37.252: INFO: Requesting list of ControllerRevisions to confirm quantity -Jun 12 21:08:38.264: INFO: Requesting list of ControllerRevisions to confirm quantity -Jun 12 21:08:38.277: INFO: Found 2 ControllerRevisions -STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-8jl6b-daemon-set-7467879c59=updated" 06/12/23 21:08:38.277 -STEP: Confirm that there is only one ControllerRevision 06/12/23 21:08:38.304 -Jun 12 21:08:38.304: INFO: Requesting list of ControllerRevisions to confirm quantity -Jun 12 21:08:38.316: INFO: Found 1 ControllerRevisions -Jun 12 21:08:38.330: INFO: ControllerRevision "e2e-8jl6b-daemon-set-6b6849bd78" has revision 3 -[AfterEach] [sig-apps] ControllerRevision [Serial] - test/e2e/apps/controller_revision.go:58 -STEP: Deleting DaemonSet "e2e-8jl6b-daemon-set" 06/12/23 21:08:38.339 -STEP: deleting DaemonSet.extensions e2e-8jl6b-daemon-set in namespace controllerrevisions-6294, will wait for the garbage collector to delete the pods 06/12/23 21:08:38.339 -Jun 12 21:08:38.413: INFO: Deleting DaemonSet.extensions e2e-8jl6b-daemon-set took: 
15.732258ms -Jun 12 21:08:38.514: INFO: Terminating DaemonSet.extensions e2e-8jl6b-daemon-set pods took: 100.953536ms -Jun 12 21:08:41.525: INFO: Number of nodes with available pods controlled by daemonset e2e-8jl6b-daemon-set: 0 -Jun 12 21:08:41.525: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-8jl6b-daemon-set -Jun 12 21:08:41.536: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"91549"},"items":null} - -Jun 12 21:08:41.545: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"91549"},"items":null} +[It] should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 +STEP: Create a pod template 07/27/23 01:51:17.627 +W0727 01:51:18.648664 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "e2e-test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "e2e-test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "e2e-test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "e2e-test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Replace a pod template 07/27/23 01:51:18.648 +Jul 27 01:51:18.678: INFO: Found updated podtemplate annotation: "true" -[AfterEach] [sig-apps] ControllerRevision [Serial] +[AfterEach] [sig-node] PodTemplates test/e2e/framework/node/init/init.go:32 -Jun 12 21:08:41.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] +Jul 27 01:51:18.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] +[DeferCleanup (Each)] [sig-node] PodTemplates dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] +[DeferCleanup (Each)] [sig-node] PodTemplates tear down framework | framework.go:193 -STEP: Destroying namespace "controllerrevisions-6294" for this suite. 06/12/23 21:08:41.617 +STEP: Destroying namespace "podtemplate-2834" for this suite. 
07/27/23 01:51:18.691 ------------------------------ -• [SLOW TEST] [7.815 seconds] -[sig-apps] ControllerRevision [Serial] -test/e2e/apps/framework.go:23 - should manage the lifecycle of a ControllerRevision [Conformance] - test/e2e/apps/controller_revision.go:124 +• [1.153 seconds] +[sig-node] PodTemplates +test/e2e/common/node/framework.go:23 + should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ControllerRevision [Serial] + [BeforeEach] [sig-node] PodTemplates set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:08:33.819 - Jun 12 21:08:33.820: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename controllerrevisions 06/12/23 21:08:33.825 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:33.872 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:33.882 - [BeforeEach] [sig-apps] ControllerRevision [Serial] + STEP: Creating a kubernetes client 07/27/23 01:51:17.561 + Jul 27 01:51:17.562: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename podtemplate 07/27/23 01:51:17.563 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:17.606 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:17.616 + [BeforeEach] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] ControllerRevision [Serial] - test/e2e/apps/controller_revision.go:93 - [It] should manage the lifecycle of a ControllerRevision [Conformance] - test/e2e/apps/controller_revision.go:124 - STEP: Creating DaemonSet "e2e-8jl6b-daemon-set" 06/12/23 21:08:33.985 - STEP: Check that daemon pods launch on every node of the cluster. 
06/12/23 21:08:34.003 - Jun 12 21:08:34.025: INFO: Number of nodes with available pods controlled by daemonset e2e-8jl6b-daemon-set: 0 - Jun 12 21:08:34.026: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:08:35.053: INFO: Number of nodes with available pods controlled by daemonset e2e-8jl6b-daemon-set: 0 - Jun 12 21:08:35.053: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:08:36.094: INFO: Number of nodes with available pods controlled by daemonset e2e-8jl6b-daemon-set: 0 - Jun 12 21:08:36.094: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:08:37.049: INFO: Number of nodes with available pods controlled by daemonset e2e-8jl6b-daemon-set: 3 - Jun 12 21:08:37.049: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset e2e-8jl6b-daemon-set - STEP: Confirm DaemonSet "e2e-8jl6b-daemon-set" successfully created with "daemonset-name=e2e-8jl6b-daemon-set" label 06/12/23 21:08:37.06 - STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-8jl6b-daemon-set" 06/12/23 21:08:37.079 - Jun 12 21:08:37.098: INFO: Located ControllerRevision: "e2e-8jl6b-daemon-set-75965fc679" - STEP: Patching ControllerRevision "e2e-8jl6b-daemon-set-75965fc679" 06/12/23 21:08:37.109 - Jun 12 21:08:37.127: INFO: e2e-8jl6b-daemon-set-75965fc679 has been patched - STEP: Create a new ControllerRevision 06/12/23 21:08:37.127 - Jun 12 21:08:37.143: INFO: Created ControllerRevision: e2e-8jl6b-daemon-set-7467879c59 - STEP: Confirm that there are two ControllerRevisions 06/12/23 21:08:37.143 - Jun 12 21:08:37.143: INFO: Requesting list of ControllerRevisions to confirm quantity - Jun 12 21:08:37.155: INFO: Found 2 ControllerRevisions - STEP: Deleting ControllerRevision "e2e-8jl6b-daemon-set-75965fc679" 06/12/23 21:08:37.155 - STEP: Confirm that there is only one ControllerRevision 06/12/23 21:08:37.177 - Jun 12 21:08:37.177: INFO: Requesting list of ControllerRevisions to confirm quantity - Jun 12 21:08:37.188: INFO: Found 1 ControllerRevisions - STEP: Updating ControllerRevision "e2e-8jl6b-daemon-set-7467879c59" 06/12/23 21:08:37.199 - Jun 12 21:08:37.228: INFO: e2e-8jl6b-daemon-set-7467879c59 has been updated - STEP: Generate another ControllerRevision by patching the Daemonset 06/12/23 21:08:37.228 - W0612 21:08:37.251497 23 warnings.go:70] unknown field "updateStrategy" - STEP: Confirm that there are two ControllerRevisions 06/12/23 21:08:37.251 - Jun 12 21:08:37.252: INFO: Requesting list of ControllerRevisions to confirm quantity - Jun 12 21:08:38.264: INFO: Requesting list of ControllerRevisions to confirm quantity - Jun 12 21:08:38.277: INFO: Found 2 ControllerRevisions - STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-8jl6b-daemon-set-7467879c59=updated" 06/12/23 21:08:38.277 - STEP: Confirm that there is only one ControllerRevision 06/12/23 21:08:38.304 - Jun 12 21:08:38.304: INFO: Requesting list of ControllerRevisions to confirm quantity - Jun 12 21:08:38.316: INFO: Found 1 ControllerRevisions - Jun 12 21:08:38.330: INFO: ControllerRevision "e2e-8jl6b-daemon-set-6b6849bd78" has revision 3 - [AfterEach] [sig-apps] ControllerRevision [Serial] - test/e2e/apps/controller_revision.go:58 - STEP: Deleting DaemonSet "e2e-8jl6b-daemon-set" 06/12/23 21:08:38.339 - STEP: deleting DaemonSet.extensions e2e-8jl6b-daemon-set in namespace controllerrevisions-6294, will wait for the garbage collector to delete the pods 06/12/23 21:08:38.339 - Jun 12 21:08:38.413: INFO: Deleting 
DaemonSet.extensions e2e-8jl6b-daemon-set took: 15.732258ms - Jun 12 21:08:38.514: INFO: Terminating DaemonSet.extensions e2e-8jl6b-daemon-set pods took: 100.953536ms - Jun 12 21:08:41.525: INFO: Number of nodes with available pods controlled by daemonset e2e-8jl6b-daemon-set: 0 - Jun 12 21:08:41.525: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-8jl6b-daemon-set - Jun 12 21:08:41.536: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"91549"},"items":null} - - Jun 12 21:08:41.545: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"91549"},"items":null} + [It] should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 + STEP: Create a pod template 07/27/23 01:51:17.627 + W0727 01:51:18.648664 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "e2e-test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "e2e-test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "e2e-test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "e2e-test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Replace a pod template 07/27/23 01:51:18.648 + Jul 27 01:51:18.678: INFO: Found updated podtemplate annotation: "true" - [AfterEach] [sig-apps] ControllerRevision [Serial] + [AfterEach] [sig-node] PodTemplates test/e2e/framework/node/init/init.go:32 - Jun 12 21:08:41.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + Jul 27 01:51:18.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + [DeferCleanup (Each)] [sig-node] PodTemplates dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + [DeferCleanup (Each)] [sig-node] PodTemplates tear down framework | framework.go:193 - STEP: Destroying namespace "controllerrevisions-6294" for this suite. 06/12/23 21:08:41.617 + STEP: Destroying namespace "podtemplate-2834" for this suite. 
07/27/23 01:51:18.691 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSS ------------------------------ -[sig-storage] EmptyDir volumes - should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:107 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:08:41.638 -Jun 12 21:08:41.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 21:08:41.653 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:41.696 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:41.706 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 01:51:18.715 +Jul 27 01:51:18.715: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 01:51:18.716 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:18.756 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:18.766 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:107 -STEP: Creating a pod to test emptydir 0666 on tmpfs 06/12/23 21:08:41.721 -Jun 12 21:08:41.752: INFO: Waiting up to 5m0s for pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7" in namespace "emptydir-9380" to be "Succeeded or Failed" -Jun 12 21:08:41.766: INFO: Pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.073649ms -Jun 12 21:08:43.805: INFO: Pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052865696s -Jun 12 21:08:45.776: INFO: Pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024157666s -Jun 12 21:08:47.777: INFO: Pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024746904s -STEP: Saw pod success 06/12/23 21:08:47.777 -Jun 12 21:08:47.777: INFO: Pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7" satisfied condition "Succeeded or Failed" -Jun 12 21:08:47.787: INFO: Trying to get logs from node 10.138.75.70 pod pod-04835b66-d3ce-4ab5-b183-770c791f4fb7 container test-container: -STEP: delete the pod 06/12/23 21:08:47.813 -Jun 12 21:08:47.840: INFO: Waiting for pod pod-04835b66-d3ce-4ab5-b183-770c791f4fb7 to disappear -Jun 12 21:08:47.849: INFO: Pod pod-04835b66-d3ce-4ab5-b183-770c791f4fb7 no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 +STEP: creating Agnhost RC 07/27/23 01:51:18.776 +Jul 27 01:51:18.776: INFO: namespace kubectl-4642 +Jul 27 01:51:18.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4642 create -f -' +Jul 27 01:51:19.724: INFO: stderr: "" +Jul 27 01:51:19.724: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 
07/27/23 01:51:19.724 +Jul 27 01:51:20.735: INFO: Selector matched 1 pods for map[app:agnhost] +Jul 27 01:51:20.735: INFO: Found 0 / 1 +Jul 27 01:51:21.738: INFO: Selector matched 1 pods for map[app:agnhost] +Jul 27 01:51:21.738: INFO: Found 1 / 1 +Jul 27 01:51:21.738: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jul 27 01:51:21.748: INFO: Selector matched 1 pods for map[app:agnhost] +Jul 27 01:51:21.748: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Jul 27 01:51:21.748: INFO: wait on agnhost-primary startup in kubectl-4642 +Jul 27 01:51:21.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4642 logs agnhost-primary-hrwlc agnhost-primary' +Jul 27 01:51:21.868: INFO: stderr: "" +Jul 27 01:51:21.868: INFO: stdout: "Paused\n" +STEP: exposing RC 07/27/23 01:51:21.868 +Jul 27 01:51:21.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4642 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Jul 27 01:51:21.979: INFO: stderr: "" +Jul 27 01:51:21.979: INFO: stdout: "service/rm2 exposed\n" +Jul 27 01:51:21.992: INFO: Service rm2 in namespace kubectl-4642 found. +STEP: exposing service 07/27/23 01:51:24.015 +Jul 27 01:51:24.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4642 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Jul 27 01:51:24.119: INFO: stderr: "" +Jul 27 01:51:24.119: INFO: stdout: "service/rm3 exposed\n" +Jul 27 01:51:24.131: INFO: Service rm3 in namespace kubectl-4642 found. +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 21:08:47.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 01:51:26.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-9380" for this suite. 06/12/23 21:08:47.864 +STEP: Destroying namespace "kubectl-4642" for this suite. 
07/27/23 01:51:26.169 ------------------------------ -• [SLOW TEST] [6.241 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:107 +• [SLOW TEST] [7.480 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl expose + test/e2e/kubectl/kubectl.go:1409 + should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:08:41.638 - Jun 12 21:08:41.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 21:08:41.653 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:41.696 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:41.706 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 01:51:18.715 + Jul 27 01:51:18.715: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 01:51:18.716 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:18.756 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:18.766 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:107 - STEP: Creating a pod to test emptydir 0666 on tmpfs 06/12/23 21:08:41.721 - Jun 12 21:08:41.752: INFO: Waiting up to 5m0s for pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7" in namespace "emptydir-9380" to be "Succeeded or Failed" - Jun 12 21:08:41.766: INFO: Pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.073649ms - Jun 12 21:08:43.805: INFO: Pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052865696s - Jun 12 21:08:45.776: INFO: Pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024157666s - Jun 12 21:08:47.777: INFO: Pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024746904s - STEP: Saw pod success 06/12/23 21:08:47.777 - Jun 12 21:08:47.777: INFO: Pod "pod-04835b66-d3ce-4ab5-b183-770c791f4fb7" satisfied condition "Succeeded or Failed" - Jun 12 21:08:47.787: INFO: Trying to get logs from node 10.138.75.70 pod pod-04835b66-d3ce-4ab5-b183-770c791f4fb7 container test-container: - STEP: delete the pod 06/12/23 21:08:47.813 - Jun 12 21:08:47.840: INFO: Waiting for pod pod-04835b66-d3ce-4ab5-b183-770c791f4fb7 to disappear - Jun 12 21:08:47.849: INFO: Pod pod-04835b66-d3ce-4ab5-b183-770c791f4fb7 no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 + STEP: creating Agnhost RC 07/27/23 01:51:18.776 + Jul 27 01:51:18.776: INFO: namespace kubectl-4642 + Jul 27 01:51:18.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4642 create -f -' + Jul 27 01:51:19.724: INFO: stderr: "" + Jul 27 01:51:19.724: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 07/27/23 01:51:19.724 + Jul 27 01:51:20.735: INFO: Selector matched 1 pods for map[app:agnhost] + Jul 27 01:51:20.735: INFO: Found 0 / 1 + Jul 27 01:51:21.738: INFO: Selector matched 1 pods for map[app:agnhost] + Jul 27 01:51:21.738: INFO: Found 1 / 1 + Jul 27 01:51:21.738: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + Jul 27 01:51:21.748: INFO: Selector matched 1 pods for map[app:agnhost] + Jul 27 01:51:21.748: INFO: ForEach: Found 1 pods from the filter. Now looping through them. + Jul 27 01:51:21.748: INFO: wait on agnhost-primary startup in kubectl-4642 + Jul 27 01:51:21.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4642 logs agnhost-primary-hrwlc agnhost-primary' + Jul 27 01:51:21.868: INFO: stderr: "" + Jul 27 01:51:21.868: INFO: stdout: "Paused\n" + STEP: exposing RC 07/27/23 01:51:21.868 + Jul 27 01:51:21.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4642 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' + Jul 27 01:51:21.979: INFO: stderr: "" + Jul 27 01:51:21.979: INFO: stdout: "service/rm2 exposed\n" + Jul 27 01:51:21.992: INFO: Service rm2 in namespace kubectl-4642 found. + STEP: exposing service 07/27/23 01:51:24.015 + Jul 27 01:51:24.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4642 expose service rm2 --name=rm3 --port=2345 --target-port=6379' + Jul 27 01:51:24.119: INFO: stderr: "" + Jul 27 01:51:24.119: INFO: stdout: "service/rm3 exposed\n" + Jul 27 01:51:24.131: INFO: Service rm3 in namespace kubectl-4642 found. 
+ [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 21:08:47.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 01:51:26.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-9380" for this suite. 06/12/23 21:08:47.864 + STEP: Destroying namespace "kubectl-4642" for this suite. 07/27/23 01:51:26.169 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSS +SSSSSSSSSSS ------------------------------ -[sig-node] ConfigMap - should fail to create ConfigMap with empty key [Conformance] - test/e2e/common/node/configmap.go:138 -[BeforeEach] [sig-node] ConfigMap +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 +[BeforeEach] [sig-api-machinery] Aggregator set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:08:47.882 -Jun 12 21:08:47.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 21:08:47.884 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:47.927 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:47.939 -[BeforeEach] [sig-node] ConfigMap +STEP: Creating a kubernetes client 07/27/23 01:51:26.196 +Jul 27 01:51:26.196: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename aggregator 07/27/23 01:51:26.197 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:26.239 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:26.249 +[BeforeEach] [sig-api-machinery] Aggregator test/e2e/framework/metrics/init/init.go:31 -[It] should fail to create ConfigMap with empty key [Conformance] - test/e2e/common/node/configmap.go:138 -STEP: Creating configMap that has name configmap-test-emptyKey-b1937516-8c7b-4431-a954-b7abdf21f6c1 06/12/23 21:08:47.951 -[AfterEach] [sig-node] ConfigMap +[BeforeEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:78 +Jul 27 01:51:26.262: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 +STEP: Registering the sample API server. 
07/27/23 01:51:26.262 +Jul 27 01:51:26.647: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Jul 27 01:51:28.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:30.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:32.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:34.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:36.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:38.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:40.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:42.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, 
CollisionCount:(*int32)(nil)} +Jul 27 01:51:44.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:46.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:48.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:50.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:52.785: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:54.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jul 27 01:51:57.045: INFO: Waited 243.455393ms for the sample-apiserver to be ready to handle requests. +I0727 01:51:58.145337 20 request.go:690] Waited for 1.000770621s due to client-side throttling, not priority and fairness, request: GET:https://172.21.0.1:443/apis/template.openshift.io/v1 +STEP: Read Status for v1alpha1.wardle.example.com 07/27/23 01:51:58.471 +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' 07/27/23 01:51:58.481 +STEP: List APIServices 07/27/23 01:51:58.511 +Jul 27 01:51:58.598: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator test/e2e/framework/node/init/init.go:32 -Jun 12 21:08:47.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] ConfigMap +Jul 27 01:51:59.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Aggregator test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] ConfigMap +[DeferCleanup (Each)] [sig-api-machinery] Aggregator dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] ConfigMap +[DeferCleanup (Each)] [sig-api-machinery] Aggregator tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-3014" for this suite. 06/12/23 21:08:47.981 +STEP: Destroying namespace "aggregator-4424" for this suite. 
07/27/23 01:51:59.407 ------------------------------ -• [0.120 seconds] -[sig-node] ConfigMap -test/e2e/common/node/framework.go:23 - should fail to create ConfigMap with empty key [Conformance] - test/e2e/common/node/configmap.go:138 +• [SLOW TEST] [33.274 seconds] +[sig-api-machinery] Aggregator +test/e2e/apimachinery/framework.go:23 + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] ConfigMap + [BeforeEach] [sig-api-machinery] Aggregator set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:08:47.882 - Jun 12 21:08:47.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 21:08:47.884 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:47.927 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:47.939 - [BeforeEach] [sig-node] ConfigMap + STEP: Creating a kubernetes client 07/27/23 01:51:26.196 + Jul 27 01:51:26.196: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename aggregator 07/27/23 01:51:26.197 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:26.239 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:26.249 + [BeforeEach] [sig-api-machinery] Aggregator test/e2e/framework/metrics/init/init.go:31 - [It] should fail to create ConfigMap with empty key [Conformance] - test/e2e/common/node/configmap.go:138 - STEP: Creating configMap that has name configmap-test-emptyKey-b1937516-8c7b-4431-a954-b7abdf21f6c1 06/12/23 21:08:47.951 - [AfterEach] [sig-node] ConfigMap + [BeforeEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:78 + Jul 27 01:51:26.262: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 + STEP: Registering the sample API server. 
07/27/23 01:51:26.262 + Jul 27 01:51:26.647: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set + Jul 27 01:51:28.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:30.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:32.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:34.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:36.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:38.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:40.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:42.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, 
CollisionCount:(*int32)(nil)} + Jul 27 01:51:44.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:46.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:48.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:50.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:52.785: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:54.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 1, 51, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jul 27 01:51:57.045: INFO: Waited 243.455393ms for the sample-apiserver to be ready to handle requests. + I0727 01:51:58.145337 20 request.go:690] Waited for 1.000770621s due to client-side throttling, not priority and fairness, request: GET:https://172.21.0.1:443/apis/template.openshift.io/v1 + STEP: Read Status for v1alpha1.wardle.example.com 07/27/23 01:51:58.471 + STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' 07/27/23 01:51:58.481 + STEP: List APIServices 07/27/23 01:51:58.511 + Jul 27 01:51:58.598: INFO: Found v1alpha1.wardle.example.com in APIServiceList + [AfterEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:68 + [AfterEach] [sig-api-machinery] Aggregator test/e2e/framework/node/init/init.go:32 - Jun 12 21:08:47.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] ConfigMap + Jul 27 01:51:59.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Aggregator test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] ConfigMap + [DeferCleanup (Each)] [sig-api-machinery] Aggregator dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] ConfigMap + [DeferCleanup (Each)] [sig-api-machinery] Aggregator tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-3014" for this suite. 06/12/23 21:08:47.981 + STEP: Destroying namespace "aggregator-4424" for this suite. 
07/27/23 01:51:59.407 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSS ------------------------------ -[sig-apps] Deployment - RecreateDeployment should delete old pods and create new ones [Conformance] - test/e2e/apps/deployment.go:113 -[BeforeEach] [sig-apps] Deployment +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 +[BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:08:48.008 -Jun 12 21:08:48.009: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename deployment 06/12/23 21:08:48.01 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:48.052 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:48.065 -[BeforeEach] [sig-apps] Deployment +STEP: Creating a kubernetes client 07/27/23 01:51:59.469 +Jul 27 01:51:59.470: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename subpath 07/27/23 01:51:59.47 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:59.513 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:59.522 +[BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 -[It] RecreateDeployment should delete old pods and create new ones [Conformance] - test/e2e/apps/deployment.go:113 -Jun 12 21:08:48.083: INFO: Creating deployment "test-recreate-deployment" -Jun 12 21:08:48.098: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 -Jun 12 21:08:48.127: INFO: deployment "test-recreate-deployment" doesn't have the required revision set -Jun 12 21:08:50.144: INFO: Waiting deployment "test-recreate-deployment" to complete -Jun 12 21:08:50.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 8, 48, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 8, 48, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 8, 48, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 8, 48, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-795566c5cb\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 21:08:52.161: INFO: Triggering a new rollout for deployment "test-recreate-deployment" -Jun 12 21:08:52.190: INFO: Updating deployment test-recreate-deployment -Jun 12 21:08:52.190: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods -[AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 -Jun 12 21:08:52.411: INFO: Deployment "test-recreate-deployment": -&Deployment{ObjectMeta:{test-recreate-deployment deployment-8680 67fae1f5-67cb-44cf-8fb7-91eb252878ca 91795 2 2023-06-12 21:08:48 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0049e4d18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-06-12 21:08:52 +0000 UTC,LastTransitionTime:2023-06-12 21:08:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-cff6dc657" is progressing.,LastUpdateTime:2023-06-12 21:08:52 +0000 UTC,LastTransitionTime:2023-06-12 21:08:48 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} - -Jun 12 21:08:52.459: INFO: New ReplicaSet "test-recreate-deployment-cff6dc657" of Deployment "test-recreate-deployment": -&ReplicaSet{ObjectMeta:{test-recreate-deployment-cff6dc657 deployment-8680 f2a16444-83be-4e61-b2d7-944d29d33f3e 91793 1 2023-06-12 21:08:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 67fae1f5-67cb-44cf-8fb7-91eb252878ca 0xc00442f640 0xc00442f641}] [] [{kube-controller-manager Update apps/v1 2023-06-12 
21:08:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67fae1f5-67cb-44cf-8fb7-91eb252878ca\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: cff6dc657,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00442f8a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} -Jun 12 21:08:52.459: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": -Jun 12 21:08:52.460: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-795566c5cb deployment-8680 de485fd7-3ba7-40c6-bc71-63a78a255a80 91783 2 2023-06-12 21:08:48 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 67fae1f5-67cb-44cf-8fb7-91eb252878ca 0xc00442f397 0xc00442f398}] [] [{kube-controller-manager Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67fae1f5-67cb-44cf-8fb7-91eb252878ca\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 795566c5cb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00442f528 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} -Jun 12 21:08:52.490: INFO: Pod "test-recreate-deployment-cff6dc657-m224v" is not available: -&Pod{ObjectMeta:{test-recreate-deployment-cff6dc657-m224v test-recreate-deployment-cff6dc657- deployment-8680 43057103-01f7-4fe3-9651-c2cc8347b437 91794 0 2023-06-12 21:08:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-recreate-deployment-cff6dc657 f2a16444-83be-4e61-b2d7-944d29d33f3e 0xc0049e50e7 0xc0049e50e8}] [] [{kube-controller-manager Update v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2a16444-83be-4e61-b2d7-944d29d33f3e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tsdh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tsdh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c42,c34,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},Image
PullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-rbbwb,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:08:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:08:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:08:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:08:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:,StartTime:2023-06-12 21:08:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -[AfterEach] [sig-apps] Deployment +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 07/27/23 01:51:59.531 +[It] should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 +STEP: Creating pod pod-subpath-test-projected-rsdm 07/27/23 01:51:59.562 +STEP: Creating a pod to test atomic-volume-subpath 07/27/23 01:51:59.563 +Jul 27 01:51:59.588: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rsdm" in namespace "subpath-5353" to be "Succeeded or Failed" +Jul 27 01:51:59.599: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Pending", Reason="", readiness=false. Elapsed: 11.034073ms +Jul 27 01:52:01.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 2.020326848s +Jul 27 01:52:03.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 4.020634491s +Jul 27 01:52:05.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 6.020337636s +Jul 27 01:52:07.609: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.021652705s +Jul 27 01:52:09.610: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 10.022658566s +Jul 27 01:52:11.612: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 12.024742143s +Jul 27 01:52:13.610: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 14.022484393s +Jul 27 01:52:15.609: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 16.021134727s +Jul 27 01:52:17.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 18.020756852s +Jul 27 01:52:19.609: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 20.020873497s +Jul 27 01:52:21.609: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 22.021288569s +Jul 27 01:52:23.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 24.020746227s +Jul 27 01:52:25.609: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=false. Elapsed: 26.021448566s +Jul 27 01:52:27.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.02045533s +STEP: Saw pod success 07/27/23 01:52:27.608 +Jul 27 01:52:27.609: INFO: Pod "pod-subpath-test-projected-rsdm" satisfied condition "Succeeded or Failed" +Jul 27 01:52:27.619: INFO: Trying to get logs from node 10.245.128.19 pod pod-subpath-test-projected-rsdm container test-container-subpath-projected-rsdm: +STEP: delete the pod 07/27/23 01:52:27.643 +Jul 27 01:52:27.669: INFO: Waiting for pod pod-subpath-test-projected-rsdm to disappear +Jul 27 01:52:27.677: INFO: Pod pod-subpath-test-projected-rsdm no longer exists +STEP: Deleting pod pod-subpath-test-projected-rsdm 07/27/23 01:52:27.677 +Jul 27 01:52:27.677: INFO: Deleting pod "pod-subpath-test-projected-rsdm" in namespace "subpath-5353" +[AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 -Jun 12 21:08:52.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Deployment +Jul 27 01:52:27.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-storage] Subpath tear down framework | framework.go:193 -STEP: Destroying namespace "deployment-8680" for this suite. 06/12/23 21:08:52.545 +STEP: Destroying namespace "subpath-5353" for this suite. 
07/27/23 01:52:27.7 ------------------------------ -• [4.578 seconds] -[sig-apps] Deployment -test/e2e/apps/framework.go:23 - RecreateDeployment should delete old pods and create new ones [Conformance] - test/e2e/apps/deployment.go:113 +• [SLOW TEST] [28.258 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Deployment + [BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:08:48.008 - Jun 12 21:08:48.009: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename deployment 06/12/23 21:08:48.01 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:48.052 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:48.065 - [BeforeEach] [sig-apps] Deployment + STEP: Creating a kubernetes client 07/27/23 01:51:59.469 + Jul 27 01:51:59.470: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename subpath 07/27/23 01:51:59.47 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:51:59.513 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:51:59.522 + [BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 - [It] RecreateDeployment should delete old pods and create new ones [Conformance] - test/e2e/apps/deployment.go:113 - Jun 12 21:08:48.083: INFO: Creating deployment "test-recreate-deployment" - Jun 12 21:08:48.098: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 - Jun 12 21:08:48.127: INFO: deployment "test-recreate-deployment" doesn't have the required revision set - Jun 12 21:08:50.144: INFO: Waiting deployment "test-recreate-deployment" to complete - Jun 12 21:08:50.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 8, 48, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 8, 48, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 8, 48, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 8, 48, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-795566c5cb\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 21:08:52.161: INFO: Triggering a new rollout for deployment "test-recreate-deployment" - Jun 12 21:08:52.190: INFO: Updating deployment test-recreate-deployment - Jun 12 21:08:52.190: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods - [AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 - Jun 12 21:08:52.411: INFO: Deployment "test-recreate-deployment": - &Deployment{ObjectMeta:{test-recreate-deployment deployment-8680 67fae1f5-67cb-44cf-8fb7-91eb252878ca 91795 2 2023-06-12 21:08:48 +0000 UTC 
map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0049e4d18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-06-12 21:08:52 +0000 UTC,LastTransitionTime:2023-06-12 21:08:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-cff6dc657" is progressing.,LastUpdateTime:2023-06-12 21:08:52 +0000 UTC,LastTransitionTime:2023-06-12 21:08:48 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} - - Jun 12 21:08:52.459: INFO: New ReplicaSet "test-recreate-deployment-cff6dc657" of Deployment "test-recreate-deployment": - &ReplicaSet{ObjectMeta:{test-recreate-deployment-cff6dc657 deployment-8680 f2a16444-83be-4e61-b2d7-944d29d33f3e 91793 1 2023-06-12 21:08:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment 
test-recreate-deployment 67fae1f5-67cb-44cf-8fb7-91eb252878ca 0xc00442f640 0xc00442f641}] [] [{kube-controller-manager Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67fae1f5-67cb-44cf-8fb7-91eb252878ca\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: cff6dc657,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00442f8a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} - Jun 12 21:08:52.459: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": - Jun 12 21:08:52.460: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-795566c5cb deployment-8680 de485fd7-3ba7-40c6-bc71-63a78a255a80 91783 2 2023-06-12 21:08:48 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 67fae1f5-67cb-44cf-8fb7-91eb252878ca 0xc00442f397 0xc00442f398}] [] [{kube-controller-manager Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67fae1f5-67cb-44cf-8fb7-91eb252878ca\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 795566c5cb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00442f528 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} - Jun 12 21:08:52.490: INFO: Pod "test-recreate-deployment-cff6dc657-m224v" is not available: - &Pod{ObjectMeta:{test-recreate-deployment-cff6dc657-m224v test-recreate-deployment-cff6dc657- deployment-8680 43057103-01f7-4fe3-9651-c2cc8347b437 91794 0 2023-06-12 21:08:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-recreate-deployment-cff6dc657 f2a16444-83be-4e61-b2d7-944d29d33f3e 0xc0049e50e7 0xc0049e50e8}] [] [{kube-controller-manager Update v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2a16444-83be-4e61-b2d7-944d29d33f3e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 21:08:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tsdh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tsdh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c42,c34,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},Image
PullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-rbbwb,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:08:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:08:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:08:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:08:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:,StartTime:2023-06-12 21:08:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - [AfterEach] [sig-apps] Deployment + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 07/27/23 01:51:59.531 + [It] should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 + STEP: Creating pod pod-subpath-test-projected-rsdm 07/27/23 01:51:59.562 + STEP: Creating a pod to test atomic-volume-subpath 07/27/23 01:51:59.563 + Jul 27 01:51:59.588: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rsdm" in namespace "subpath-5353" to be "Succeeded or Failed" + Jul 27 01:51:59.599: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Pending", Reason="", readiness=false. Elapsed: 11.034073ms + Jul 27 01:52:01.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 2.020326848s + Jul 27 01:52:03.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 4.020634491s + Jul 27 01:52:05.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 6.020337636s + Jul 27 01:52:07.609: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.021652705s + Jul 27 01:52:09.610: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 10.022658566s + Jul 27 01:52:11.612: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 12.024742143s + Jul 27 01:52:13.610: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 14.022484393s + Jul 27 01:52:15.609: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 16.021134727s + Jul 27 01:52:17.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 18.020756852s + Jul 27 01:52:19.609: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 20.020873497s + Jul 27 01:52:21.609: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 22.021288569s + Jul 27 01:52:23.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=true. Elapsed: 24.020746227s + Jul 27 01:52:25.609: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Running", Reason="", readiness=false. Elapsed: 26.021448566s + Jul 27 01:52:27.608: INFO: Pod "pod-subpath-test-projected-rsdm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.02045533s + STEP: Saw pod success 07/27/23 01:52:27.608 + Jul 27 01:52:27.609: INFO: Pod "pod-subpath-test-projected-rsdm" satisfied condition "Succeeded or Failed" + Jul 27 01:52:27.619: INFO: Trying to get logs from node 10.245.128.19 pod pod-subpath-test-projected-rsdm container test-container-subpath-projected-rsdm: + STEP: delete the pod 07/27/23 01:52:27.643 + Jul 27 01:52:27.669: INFO: Waiting for pod pod-subpath-test-projected-rsdm to disappear + Jul 27 01:52:27.677: INFO: Pod pod-subpath-test-projected-rsdm no longer exists + STEP: Deleting pod pod-subpath-test-projected-rsdm 07/27/23 01:52:27.677 + Jul 27 01:52:27.677: INFO: Deleting pod "pod-subpath-test-projected-rsdm" in namespace "subpath-5353" + [AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 - Jun 12 21:08:52.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Deployment + Jul 27 01:52:27.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-storage] Subpath tear down framework | framework.go:193 - STEP: Destroying namespace "deployment-8680" for this suite. 06/12/23 21:08:52.545 + STEP: Destroying namespace "subpath-5353" for this suite. 
07/27/23 01:52:27.7 << End Captured GinkgoWriter Output ------------------------------ -[sig-storage] Secrets - should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:79 -[BeforeEach] [sig-storage] Secrets +SSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 +[BeforeEach] [sig-network] DNS set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:08:52.588 -Jun 12 21:08:52.589: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 21:08:52.59 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:52.65 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:52.697 -[BeforeEach] [sig-storage] Secrets +STEP: Creating a kubernetes client 07/27/23 01:52:27.73 +Jul 27 01:52:27.730: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename dns 07/27/23 01:52:27.731 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:52:27.774 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:52:27.792 +[BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:79 -STEP: Creating secret with name secret-test-map-45396ba0-5370-48e9-9664-b65840cea81f 06/12/23 21:08:52.745 -STEP: Creating a pod to test consume secrets 06/12/23 21:08:52.789 -Jun 12 21:08:52.828: INFO: Waiting up to 5m0s for pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792" in namespace "secrets-8501" to be "Succeeded or Failed" -Jun 12 21:08:52.907: INFO: Pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792": Phase="Pending", Reason="", readiness=false. Elapsed: 79.145665ms -Jun 12 21:08:54.969: INFO: Pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140764038s -Jun 12 21:08:56.990: INFO: Pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162110342s -Jun 12 21:08:58.987: INFO: Pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.159375933s -STEP: Saw pod success 06/12/23 21:08:58.996 -Jun 12 21:08:58.997: INFO: Pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792" satisfied condition "Succeeded or Failed" -Jun 12 21:08:59.008: INFO: Trying to get logs from node 10.138.75.112 pod pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792 container secret-volume-test: -STEP: delete the pod 06/12/23 21:08:59.105 -Jun 12 21:08:59.158: INFO: Waiting for pod pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792 to disappear -Jun 12 21:08:59.204: INFO: Pod pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792 no longer exists -[AfterEach] [sig-storage] Secrets +[It] should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 +STEP: Creating a test headless service 07/27/23 01:52:27.802 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1096.svc.cluster.local;sleep 1; done + 07/27/23 01:52:27.838 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1096.svc.cluster.local;sleep 1; done + 07/27/23 01:52:27.838 +STEP: creating a pod to probe DNS 07/27/23 01:52:27.838 +STEP: submitting the pod to kubernetes 07/27/23 01:52:27.838 +Jul 27 01:52:27.870: INFO: Waiting up to 15m0s for pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5" in namespace "dns-1096" to be "running" +Jul 27 01:52:27.881: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.822047ms +Jul 27 01:52:29.904: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033941842s +Jul 27 01:52:31.914: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043768846s +Jul 27 01:52:33.899: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.029011072s +Jul 27 01:52:35.893: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023014509s +Jul 27 01:52:37.894: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023440504s +Jul 27 01:52:39.907: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Running", Reason="", readiness=true. Elapsed: 12.036975487s +Jul 27 01:52:39.907: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5" satisfied condition "running" +STEP: retrieving the pod 07/27/23 01:52:39.907 +STEP: looking for the results for each expected name from probers 07/27/23 01:52:39.916 +Jul 27 01:52:40.066: INFO: DNS probes using dns-1096/dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5 succeeded + +STEP: deleting the pod 07/27/23 01:52:40.066 +STEP: deleting the test headless service 07/27/23 01:52:40.096 +[AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 -Jun 12 21:08:59.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Secrets +Jul 27 01:52:40.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-8501" for this suite. 06/12/23 21:08:59.223 +STEP: Destroying namespace "dns-1096" for this suite. 07/27/23 01:52:40.146 ------------------------------ -• [SLOW TEST] [6.658 seconds] -[sig-storage] Secrets -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:79 +• [SLOW TEST] [12.439 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Secrets + [BeforeEach] [sig-network] DNS set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:08:52.588 - Jun 12 21:08:52.589: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 21:08:52.59 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:52.65 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:52.697 - [BeforeEach] [sig-storage] Secrets + STEP: Creating a kubernetes client 07/27/23 01:52:27.73 + Jul 27 01:52:27.730: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename dns 07/27/23 01:52:27.731 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:52:27.774 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:52:27.792 + [BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:79 - STEP: Creating secret with name secret-test-map-45396ba0-5370-48e9-9664-b65840cea81f 06/12/23 21:08:52.745 - STEP: Creating a pod to test consume secrets 06/12/23 21:08:52.789 - Jun 12 21:08:52.828: INFO: Waiting up 
to 5m0s for pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792" in namespace "secrets-8501" to be "Succeeded or Failed" - Jun 12 21:08:52.907: INFO: Pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792": Phase="Pending", Reason="", readiness=false. Elapsed: 79.145665ms - Jun 12 21:08:54.969: INFO: Pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140764038s - Jun 12 21:08:56.990: INFO: Pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162110342s - Jun 12 21:08:58.987: INFO: Pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.159375933s - STEP: Saw pod success 06/12/23 21:08:58.996 - Jun 12 21:08:58.997: INFO: Pod "pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792" satisfied condition "Succeeded or Failed" - Jun 12 21:08:59.008: INFO: Trying to get logs from node 10.138.75.112 pod pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792 container secret-volume-test: - STEP: delete the pod 06/12/23 21:08:59.105 - Jun 12 21:08:59.158: INFO: Waiting for pod pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792 to disappear - Jun 12 21:08:59.204: INFO: Pod pod-secrets-91c86522-4272-46c8-9a92-dd2ee17aa792 no longer exists - [AfterEach] [sig-storage] Secrets + [It] should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 + STEP: Creating a test headless service 07/27/23 01:52:27.802 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1096.svc.cluster.local;sleep 1; done + 07/27/23 01:52:27.838 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1096.svc.cluster.local;sleep 1; done + 07/27/23 01:52:27.838 + STEP: creating a pod to probe DNS 07/27/23 01:52:27.838 + STEP: submitting the pod to kubernetes 07/27/23 01:52:27.838 + Jul 27 01:52:27.870: INFO: Waiting up to 15m0s for pod 
"dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5" in namespace "dns-1096" to be "running" + Jul 27 01:52:27.881: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.822047ms + Jul 27 01:52:29.904: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033941842s + Jul 27 01:52:31.914: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043768846s + Jul 27 01:52:33.899: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029011072s + Jul 27 01:52:35.893: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023014509s + Jul 27 01:52:37.894: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023440504s + Jul 27 01:52:39.907: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5": Phase="Running", Reason="", readiness=true. Elapsed: 12.036975487s + Jul 27 01:52:39.907: INFO: Pod "dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5" satisfied condition "running" + STEP: retrieving the pod 07/27/23 01:52:39.907 + STEP: looking for the results for each expected name from probers 07/27/23 01:52:39.916 + Jul 27 01:52:40.066: INFO: DNS probes using dns-1096/dns-test-03ca2630-9edf-453d-bdb0-9b9c502ac3b5 succeeded + + STEP: deleting the pod 07/27/23 01:52:40.066 + STEP: deleting the test headless service 07/27/23 01:52:40.096 + [AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 - Jun 12 21:08:59.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Secrets + Jul 27 01:52:40.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-8501" for this suite. 06/12/23 21:08:59.223 + STEP: Destroying namespace "dns-1096" for this suite. 
07/27/23 01:52:40.146 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Security Context - should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] - test/e2e/node/security_context.go:129 -[BeforeEach] [sig-node] Security Context - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:08:59.258 -Jun 12 21:08:59.259: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename security-context 06/12/23 21:08:59.261 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:59.304 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:59.349 -[BeforeEach] [sig-node] Security Context - test/e2e/framework/metrics/init/init.go:31 -[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] - test/e2e/node/security_context.go:129 -STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 06/12/23 21:08:59.374 -Jun 12 21:08:59.406: INFO: Waiting up to 5m0s for pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b" in namespace "security-context-9499" to be "Succeeded or Failed" -Jun 12 21:08:59.464: INFO: Pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.680713ms -Jun 12 21:09:01.476: INFO: Pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069878515s -Jun 12 21:09:03.475: INFO: Pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068909157s -Jun 12 21:09:05.475: INFO: Pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.069361799s -STEP: Saw pod success 06/12/23 21:09:05.476 -Jun 12 21:09:05.476: INFO: Pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b" satisfied condition "Succeeded or Failed" -Jun 12 21:09:05.486: INFO: Trying to get logs from node 10.138.75.70 pod security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b container test-container: -STEP: delete the pod 06/12/23 21:09:05.509 -Jun 12 21:09:05.541: INFO: Waiting for pod security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b to disappear -Jun 12 21:09:05.550: INFO: Pod security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b no longer exists -[AfterEach] [sig-node] Security Context +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 +[BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 01:52:40.17 +Jul 27 01:52:40.170: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replicaset 07/27/23 01:52:40.171 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:52:40.216 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:52:40.225 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 +[It] should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 +STEP: Create a Replicaset 07/27/23 01:52:40.255 +W0727 01:52:40.272718 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Verify that the required pods have come up. 
07/27/23 01:52:40.272 +Jul 27 01:52:40.281: INFO: Pod name sample-pod: Found 0 pods out of 1 +Jul 27 01:52:45.294: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 07/27/23 01:52:45.294 +STEP: Getting /status 07/27/23 01:52:45.294 +Jul 27 01:52:45.304: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status 07/27/23 01:52:45.304 +Jul 27 01:52:45.330: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated 07/27/23 01:52:45.33 +Jul 27 01:52:45.337: INFO: Observed &ReplicaSet event: ADDED +Jul 27 01:52:45.337: INFO: Observed &ReplicaSet event: MODIFIED +Jul 27 01:52:45.338: INFO: Observed &ReplicaSet event: MODIFIED +Jul 27 01:52:45.338: INFO: Observed &ReplicaSet event: MODIFIED +Jul 27 01:52:45.338: INFO: Found replicaset test-rs in namespace replicaset-2025 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Jul 27 01:52:45.338: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status 07/27/23 01:52:45.338 +Jul 27 01:52:45.338: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Jul 27 01:52:45.355: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched 07/27/23 01:52:45.355 +Jul 27 01:52:45.365: INFO: Observed &ReplicaSet event: ADDED +Jul 27 01:52:45.365: INFO: Observed &ReplicaSet event: MODIFIED +Jul 27 01:52:45.365: INFO: Observed &ReplicaSet event: MODIFIED +Jul 27 01:52:45.365: INFO: Observed &ReplicaSet event: MODIFIED +Jul 27 01:52:45.365: INFO: Observed replicaset test-rs in namespace replicaset-2025 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Jul 27 01:52:45.365: INFO: Observed &ReplicaSet event: MODIFIED +Jul 27 01:52:45.365: INFO: Found replicaset test-rs in namespace replicaset-2025 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Jul 27 01:52:45.365: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 -Jun 12 21:09:05.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Security Context +Jul 27 01:52:45.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Security Context +[DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Security Context +[DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 -STEP: Destroying namespace "security-context-9499" for this suite. 06/12/23 21:09:05.566 +STEP: Destroying namespace "replicaset-2025" for this suite. 
07/27/23 01:52:45.378 ------------------------------ -• [SLOW TEST] [6.325 seconds] -[sig-node] Security Context -test/e2e/node/framework.go:23 - should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] - test/e2e/node/security_context.go:129 +• [SLOW TEST] [5.232 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Security Context + [BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:08:59.258 - Jun 12 21:08:59.259: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename security-context 06/12/23 21:08:59.261 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:08:59.304 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:08:59.349 - [BeforeEach] [sig-node] Security Context + STEP: Creating a kubernetes client 07/27/23 01:52:40.17 + Jul 27 01:52:40.170: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replicaset 07/27/23 01:52:40.171 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:52:40.216 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:52:40.225 + [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 - [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] - test/e2e/node/security_context.go:129 - STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 06/12/23 21:08:59.374 - Jun 12 21:08:59.406: INFO: Waiting up to 5m0s for pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b" in namespace "security-context-9499" to be "Succeeded or Failed" - Jun 12 21:08:59.464: INFO: Pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.680713ms - Jun 12 21:09:01.476: INFO: Pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069878515s - Jun 12 21:09:03.475: INFO: Pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068909157s - Jun 12 21:09:05.475: INFO: Pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.069361799s - STEP: Saw pod success 06/12/23 21:09:05.476 - Jun 12 21:09:05.476: INFO: Pod "security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b" satisfied condition "Succeeded or Failed" - Jun 12 21:09:05.486: INFO: Trying to get logs from node 10.138.75.70 pod security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b container test-container: - STEP: delete the pod 06/12/23 21:09:05.509 - Jun 12 21:09:05.541: INFO: Waiting for pod security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b to disappear - Jun 12 21:09:05.550: INFO: Pod security-context-c14a2d3b-fa51-4110-a20f-a14c77b4733b no longer exists - [AfterEach] [sig-node] Security Context + [It] should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 + STEP: Create a Replicaset 07/27/23 01:52:40.255 + W0727 01:52:40.272718 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Verify that the required pods have come up. 07/27/23 01:52:40.272 + Jul 27 01:52:40.281: INFO: Pod name sample-pod: Found 0 pods out of 1 + Jul 27 01:52:45.294: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 07/27/23 01:52:45.294 + STEP: Getting /status 07/27/23 01:52:45.294 + Jul 27 01:52:45.304: INFO: Replicaset test-rs has Conditions: [] + STEP: updating the Replicaset Status 07/27/23 01:52:45.304 + Jul 27 01:52:45.330: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the ReplicaSet status to be updated 07/27/23 01:52:45.33 + Jul 27 01:52:45.337: INFO: Observed &ReplicaSet event: ADDED + Jul 27 01:52:45.337: INFO: Observed &ReplicaSet event: MODIFIED + Jul 27 01:52:45.338: INFO: Observed &ReplicaSet event: MODIFIED + Jul 27 01:52:45.338: INFO: Observed &ReplicaSet event: MODIFIED + Jul 27 01:52:45.338: INFO: Found replicaset test-rs in namespace replicaset-2025 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Jul 27 01:52:45.338: INFO: Replicaset test-rs has an updated status + STEP: patching the Replicaset Status 07/27/23 01:52:45.338 + Jul 27 01:52:45.338: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Jul 27 01:52:45.355: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Replicaset status to be patched 07/27/23 01:52:45.355 + Jul 27 01:52:45.365: INFO: Observed &ReplicaSet event: ADDED + Jul 27 01:52:45.365: INFO: Observed &ReplicaSet event: MODIFIED + Jul 27 01:52:45.365: INFO: Observed &ReplicaSet event: MODIFIED + Jul 27 01:52:45.365: INFO: Observed &ReplicaSet event: MODIFIED + Jul 27 01:52:45.365: INFO: Observed replicaset test-rs in namespace replicaset-2025 with annotations: map[] & Conditions: 
{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Jul 27 01:52:45.365: INFO: Observed &ReplicaSet event: MODIFIED + Jul 27 01:52:45.365: INFO: Found replicaset test-rs in namespace replicaset-2025 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } + Jul 27 01:52:45.365: INFO: Replicaset test-rs has a patched status + [AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 - Jun 12 21:09:05.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Security Context + Jul 27 01:52:45.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Security Context + [DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Security Context + [DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 - STEP: Destroying namespace "security-context-9499" for this suite. 06/12/23 21:09:05.566 + STEP: Destroying namespace "replicaset-2025" for this suite. 07/27/23 01:52:45.378 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSSSSSS ------------------------------ -[sig-storage] Projected configMap - updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:124 -[BeforeEach] [sig-storage] Projected configMap +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 +[BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:09:05.588 -Jun 12 21:09:05.589: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:09:05.592 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:09:05.635 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:09:05.646 -[BeforeEach] [sig-storage] Projected configMap +STEP: Creating a kubernetes client 07/27/23 01:52:45.403 +Jul 27 01:52:45.403: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 01:52:45.403 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:52:45.452 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:52:45.461 +[BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[It] updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:124 -Jun 12 21:09:05.680: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node -STEP: Creating projection with configMap that has name projected-configmap-test-upd-8fe92104-0987-4224-ac42-5e86f0d83174 06/12/23 21:09:05.68 -STEP: Creating the pod 06/12/23 21:09:05.696 -Jun 12 21:09:05.724: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed" in namespace "projected-6047" to be "running and ready" -Jun 12 21:09:05.736: INFO: Pod "pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.589537ms -Jun 12 21:09:05.736: INFO: The phase of Pod pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:09:07.754: INFO: Pod "pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029552363s -Jun 12 21:09:07.754: INFO: The phase of Pod pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:09:09.757: INFO: Pod "pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed": Phase="Running", Reason="", readiness=true. Elapsed: 4.033008551s -Jun 12 21:09:09.757: INFO: The phase of Pod pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed is Running (Ready = true) -Jun 12 21:09:09.757: INFO: Pod "pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed" satisfied condition "running and ready" -STEP: Updating configmap projected-configmap-test-upd-8fe92104-0987-4224-ac42-5e86f0d83174 06/12/23 21:09:09.819 -STEP: waiting to observe update in volume 06/12/23 21:09:09.836 -[AfterEach] [sig-storage] Projected configMap +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 +STEP: Creating configMap with name configmap-test-volume-e093befe-42c7-466e-a3ea-07434b5835a3 07/27/23 01:52:45.471 +STEP: Creating a pod to test consume configMaps 07/27/23 01:52:45.491 +Jul 27 01:52:45.519: INFO: Waiting up to 5m0s for pod "pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35" in namespace "configmap-1156" to be "Succeeded or Failed" +Jul 27 01:52:45.533: INFO: Pod "pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35": Phase="Pending", Reason="", readiness=false. Elapsed: 13.770989ms +Jul 27 01:52:47.545: INFO: Pod "pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026283738s +Jul 27 01:52:49.544: INFO: Pod "pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02512089s +STEP: Saw pod success 07/27/23 01:52:49.544 +Jul 27 01:52:49.544: INFO: Pod "pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35" satisfied condition "Succeeded or Failed" +Jul 27 01:52:49.580: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35 container agnhost-container: +STEP: delete the pod 07/27/23 01:52:49.608 +Jul 27 01:52:49.652: INFO: Waiting for pod pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35 to disappear +Jul 27 01:52:49.663: INFO: Pod pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35 no longer exists +[AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 21:10:27.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected configMap +Jul 27 01:52:49.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "projected-6047" for this suite. 06/12/23 21:10:27.206 +STEP: Destroying namespace "configmap-1156" for this suite. 
07/27/23 01:52:49.676 ------------------------------ -• [SLOW TEST] [81.631 seconds] -[sig-storage] Projected configMap +• [4.382 seconds] +[sig-storage] ConfigMap test/e2e/common/storage/framework.go:23 - updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:124 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected configMap + [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:09:05.588 - Jun 12 21:09:05.589: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:09:05.592 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:09:05.635 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:09:05.646 - [BeforeEach] [sig-storage] Projected configMap + STEP: Creating a kubernetes client 07/27/23 01:52:45.403 + Jul 27 01:52:45.403: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 01:52:45.403 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:52:45.452 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:52:45.461 + [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [It] updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:124 - Jun 12 21:09:05.680: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node - STEP: Creating projection with configMap that has name projected-configmap-test-upd-8fe92104-0987-4224-ac42-5e86f0d83174 06/12/23 21:09:05.68 - STEP: Creating the pod 06/12/23 21:09:05.696 - Jun 12 21:09:05.724: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed" in namespace "projected-6047" to be "running and ready" - Jun 12 21:09:05.736: INFO: Pod "pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed": Phase="Pending", Reason="", readiness=false. Elapsed: 11.589537ms - Jun 12 21:09:05.736: INFO: The phase of Pod pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:09:07.754: INFO: Pod "pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029552363s - Jun 12 21:09:07.754: INFO: The phase of Pod pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:09:09.757: INFO: Pod "pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.033008551s - Jun 12 21:09:09.757: INFO: The phase of Pod pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed is Running (Ready = true) - Jun 12 21:09:09.757: INFO: Pod "pod-projected-configmaps-b12fd816-be95-4f3c-add2-9bf8835153ed" satisfied condition "running and ready" - STEP: Updating configmap projected-configmap-test-upd-8fe92104-0987-4224-ac42-5e86f0d83174 06/12/23 21:09:09.819 - STEP: waiting to observe update in volume 06/12/23 21:09:09.836 - [AfterEach] [sig-storage] Projected configMap + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 + STEP: Creating configMap with name configmap-test-volume-e093befe-42c7-466e-a3ea-07434b5835a3 07/27/23 01:52:45.471 + STEP: Creating a pod to test consume configMaps 07/27/23 01:52:45.491 + Jul 27 01:52:45.519: INFO: Waiting up to 5m0s for pod "pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35" in namespace "configmap-1156" to be "Succeeded or Failed" + Jul 27 01:52:45.533: INFO: Pod "pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35": Phase="Pending", Reason="", readiness=false. Elapsed: 13.770989ms + Jul 27 01:52:47.545: INFO: Pod "pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026283738s + Jul 27 01:52:49.544: INFO: Pod "pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02512089s + STEP: Saw pod success 07/27/23 01:52:49.544 + Jul 27 01:52:49.544: INFO: Pod "pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35" satisfied condition "Succeeded or Failed" + Jul 27 01:52:49.580: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35 container agnhost-container: + STEP: delete the pod 07/27/23 01:52:49.608 + Jul 27 01:52:49.652: INFO: Waiting for pod pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35 to disappear + Jul 27 01:52:49.663: INFO: Pod pod-configmaps-46513af1-1c3c-4f3d-989c-5107a4e23d35 no longer exists + [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 21:10:27.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected configMap + Jul 27 01:52:49.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "projected-6047" for this suite. 06/12/23 21:10:27.206 + STEP: Destroying namespace "configmap-1156" for this suite. 
07/27/23 01:52:49.676 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +SSS ------------------------------ -[sig-node] Pods - should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:398 -[BeforeEach] [sig-node] Pods +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 +[BeforeEach] [sig-apps] Job set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:10:27.223 -Jun 12 21:10:27.224: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 21:10:27.226 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:10:27.264 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:10:27.287 -[BeforeEach] [sig-node] Pods +STEP: Creating a kubernetes client 07/27/23 01:52:49.785 +Jul 27 01:52:49.785: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename job 07/27/23 01:52:49.786 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:52:49.894 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:52:49.933 +[BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 -[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:398 -STEP: creating the pod 06/12/23 21:10:27.304 -STEP: submitting the pod to kubernetes 06/12/23 21:10:27.305 -Jun 12 21:10:27.337: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7" in namespace "pods-7775" to be "running and ready" -Jun 12 21:10:27.353: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.470415ms -Jun 12 21:10:27.354: INFO: The phase of Pod pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:10:29.382: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045166665s -Jun 12 21:10:29.382: INFO: The phase of Pod pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:10:31.366: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.029451072s -Jun 12 21:10:31.366: INFO: The phase of Pod pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7 is Running (Ready = true) -Jun 12 21:10:31.366: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7" satisfied condition "running and ready" -STEP: verifying the pod is in kubernetes 06/12/23 21:10:31.376 -STEP: updating the pod 06/12/23 21:10:31.386 -Jun 12 21:10:31.925: INFO: Successfully updated pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7" -Jun 12 21:10:31.925: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7" in namespace "pods-7775" to be "terminated with reason DeadlineExceeded" -Jun 12 21:10:31.936: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Running", Reason="", readiness=true. Elapsed: 9.893149ms -Jun 12 21:10:33.946: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Running", Reason="", readiness=false. Elapsed: 2.020043972s -Jun 12 21:10:35.947: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Running", Reason="", readiness=false. Elapsed: 4.021076178s -Jun 12 21:10:37.965: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.039363655s -Jun 12 21:10:37.965: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7" satisfied condition "terminated with reason DeadlineExceeded" -[AfterEach] [sig-node] Pods +[It] should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 +STEP: Creating a job 07/27/23 01:52:49.944 +STEP: Ensuring active pods == parallelism 07/27/23 01:52:50.013 +STEP: Orphaning one of the Job's Pods 07/27/23 01:52:52.023 +Jul 27 01:52:52.567: INFO: Successfully updated pod "adopt-release-2tv4w" +STEP: Checking that the Job readopts the Pod 07/27/23 01:52:52.567 +Jul 27 01:52:52.567: INFO: Waiting up to 15m0s for pod "adopt-release-2tv4w" in namespace "job-5457" to be "adopted" +Jul 27 01:52:52.577: INFO: Pod "adopt-release-2tv4w": Phase="Running", Reason="", readiness=true. Elapsed: 9.435379ms +Jul 27 01:52:54.589: INFO: Pod "adopt-release-2tv4w": Phase="Running", Reason="", readiness=true. Elapsed: 2.021271206s +Jul 27 01:52:54.589: INFO: Pod "adopt-release-2tv4w" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod 07/27/23 01:52:54.589 +Jul 27 01:52:55.119: INFO: Successfully updated pod "adopt-release-2tv4w" +STEP: Checking that the Job releases the Pod 07/27/23 01:52:55.119 +Jul 27 01:52:55.119: INFO: Waiting up to 15m0s for pod "adopt-release-2tv4w" in namespace "job-5457" to be "released" +Jul 27 01:52:55.130: INFO: Pod "adopt-release-2tv4w": Phase="Running", Reason="", readiness=true. Elapsed: 10.970198ms +Jul 27 01:52:57.151: INFO: Pod "adopt-release-2tv4w": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.032514951s +Jul 27 01:52:57.151: INFO: Pod "adopt-release-2tv4w" satisfied condition "released" +[AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 -Jun 12 21:10:37.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Pods +Jul 27 01:52:57.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Pods +[DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Pods +[DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 -STEP: Destroying namespace "pods-7775" for this suite. 06/12/23 21:10:37.98 +STEP: Destroying namespace "job-5457" for this suite. 07/27/23 01:52:57.165 ------------------------------ -• [SLOW TEST] [10.778 seconds] -[sig-node] Pods -test/e2e/common/node/framework.go:23 - should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:398 +• [SLOW TEST] [7.418 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Pods + [BeforeEach] [sig-apps] Job set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:10:27.223 - Jun 12 21:10:27.224: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 21:10:27.226 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:10:27.264 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:10:27.287 - [BeforeEach] [sig-node] Pods + STEP: Creating a kubernetes client 07/27/23 01:52:49.785 + Jul 27 01:52:49.785: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename job 07/27/23 01:52:49.786 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:52:49.894 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:52:49.933 + [BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 - [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:398 - STEP: creating the pod 06/12/23 21:10:27.304 - STEP: submitting the pod to kubernetes 06/12/23 21:10:27.305 - Jun 12 21:10:27.337: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7" in namespace "pods-7775" to be "running and ready" - Jun 12 21:10:27.353: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.470415ms - Jun 12 21:10:27.354: INFO: The phase of Pod pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:10:29.382: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045166665s - Jun 12 21:10:29.382: INFO: The phase of Pod pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:10:31.366: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Running", Reason="", readiness=true. Elapsed: 4.029451072s - Jun 12 21:10:31.366: INFO: The phase of Pod pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7 is Running (Ready = true) - Jun 12 21:10:31.366: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7" satisfied condition "running and ready" - STEP: verifying the pod is in kubernetes 06/12/23 21:10:31.376 - STEP: updating the pod 06/12/23 21:10:31.386 - Jun 12 21:10:31.925: INFO: Successfully updated pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7" - Jun 12 21:10:31.925: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7" in namespace "pods-7775" to be "terminated with reason DeadlineExceeded" - Jun 12 21:10:31.936: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Running", Reason="", readiness=true. Elapsed: 9.893149ms - Jun 12 21:10:33.946: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Running", Reason="", readiness=false. Elapsed: 2.020043972s - Jun 12 21:10:35.947: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Running", Reason="", readiness=false. Elapsed: 4.021076178s - Jun 12 21:10:37.965: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.039363655s - Jun 12 21:10:37.965: INFO: Pod "pod-update-activedeadlineseconds-95e513e6-d4a0-45b5-a370-c7e4677613a7" satisfied condition "terminated with reason DeadlineExceeded" - [AfterEach] [sig-node] Pods + [It] should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 + STEP: Creating a job 07/27/23 01:52:49.944 + STEP: Ensuring active pods == parallelism 07/27/23 01:52:50.013 + STEP: Orphaning one of the Job's Pods 07/27/23 01:52:52.023 + Jul 27 01:52:52.567: INFO: Successfully updated pod "adopt-release-2tv4w" + STEP: Checking that the Job readopts the Pod 07/27/23 01:52:52.567 + Jul 27 01:52:52.567: INFO: Waiting up to 15m0s for pod "adopt-release-2tv4w" in namespace "job-5457" to be "adopted" + Jul 27 01:52:52.577: INFO: Pod "adopt-release-2tv4w": Phase="Running", Reason="", readiness=true. Elapsed: 9.435379ms + Jul 27 01:52:54.589: INFO: Pod "adopt-release-2tv4w": Phase="Running", Reason="", readiness=true. Elapsed: 2.021271206s + Jul 27 01:52:54.589: INFO: Pod "adopt-release-2tv4w" satisfied condition "adopted" + STEP: Removing the labels from the Job's Pod 07/27/23 01:52:54.589 + Jul 27 01:52:55.119: INFO: Successfully updated pod "adopt-release-2tv4w" + STEP: Checking that the Job releases the Pod 07/27/23 01:52:55.119 + Jul 27 01:52:55.119: INFO: Waiting up to 15m0s for pod "adopt-release-2tv4w" in namespace "job-5457" to be "released" + Jul 27 01:52:55.130: INFO: Pod "adopt-release-2tv4w": Phase="Running", Reason="", readiness=true. Elapsed: 10.970198ms + Jul 27 01:52:57.151: INFO: Pod "adopt-release-2tv4w": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.032514951s + Jul 27 01:52:57.151: INFO: Pod "adopt-release-2tv4w" satisfied condition "released" + [AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 - Jun 12 21:10:37.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Pods + Jul 27 01:52:57.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 - STEP: Destroying namespace "pods-7775" for this suite. 06/12/23 21:10:37.98 + STEP: Destroying namespace "job-5457" for this suite. 07/27/23 01:52:57.165 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSS +SSSSSSSSSSSSSSSSSS ------------------------------ -[sig-cli] Kubectl client Update Demo - should scale a replication controller [Conformance] - test/e2e/kubectl/kubectl.go:352 -[BeforeEach] [sig-cli] Kubectl client +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 +[BeforeEach] [sig-apps] ReplicationController set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:10:38.004 -Jun 12 21:10:38.004: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 21:10:38.009 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:10:38.091 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:10:38.11 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 01:52:57.204 +Jul 27 01:52:57.204: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replication-controller 07/27/23 01:52:57.205 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:52:57.248 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:52:57.258 +[BeforeEach] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[BeforeEach] Update Demo - test/e2e/kubectl/kubectl.go:326 -[It] should scale a replication controller [Conformance] - test/e2e/kubectl/kubectl.go:352 -STEP: creating a replication controller 06/12/23 21:10:38.174 -Jun 12 21:10:38.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 create -f -' -Jun 12 21:10:41.457: INFO: stderr: "" -Jun 12 21:10:41.457: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" -STEP: waiting for all containers in name=update-demo pods to come up. 06/12/23 21:10:41.457 -Jun 12 21:10:41.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' -Jun 12 21:10:41.685: INFO: stderr: "" -Jun 12 21:10:41.685: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " -Jun 12 21:10:41.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:10:41.876: INFO: stderr: "" -Jun 12 21:10:41.876: INFO: stdout: "" -Jun 12 21:10:41.876: INFO: update-demo-nautilus-npvcg is created but not running -Jun 12 21:10:46.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' -Jun 12 21:10:47.116: INFO: stderr: "" -Jun 12 21:10:47.116: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " -Jun 12 21:10:47.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:10:47.321: INFO: stderr: "" -Jun 12 21:10:47.321: INFO: stdout: "" -Jun 12 21:10:47.321: INFO: update-demo-nautilus-npvcg is created but not running -Jun 12 21:10:52.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' -Jun 12 21:10:52.679: INFO: stderr: "" -Jun 12 21:10:52.679: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " -Jun 12 21:10:52.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:10:52.862: INFO: stderr: "" -Jun 12 21:10:52.862: INFO: stdout: "" -Jun 12 21:10:52.862: INFO: update-demo-nautilus-npvcg is created but not running -Jun 12 21:10:57.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' -Jun 12 21:10:58.090: INFO: stderr: "" -Jun 12 21:10:58.090: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " -Jun 12 21:10:58.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:10:58.308: INFO: stderr: "" -Jun 12 21:10:58.308: INFO: stdout: "true" -Jun 12 21:10:58.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' -Jun 12 21:10:58.719: INFO: stderr: "" -Jun 12 21:10:58.719: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" -Jun 12 21:10:58.719: INFO: validating pod update-demo-nautilus-npvcg -Jun 12 21:10:58.738: INFO: got data: { - "image": "nautilus.jpg" -} - -Jun 12 21:10:58.739: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
-Jun 12 21:10:58.739: INFO: update-demo-nautilus-npvcg is verified up and running -Jun 12 21:10:58.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-x6zfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:10:58.973: INFO: stderr: "" -Jun 12 21:10:58.974: INFO: stdout: "" -Jun 12 21:10:58.974: INFO: update-demo-nautilus-x6zfg is created but not running -Jun 12 21:11:03.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' -Jun 12 21:11:04.244: INFO: stderr: "" -Jun 12 21:11:04.244: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " -Jun 12 21:11:04.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:11:04.415: INFO: stderr: "" -Jun 12 21:11:04.415: INFO: stdout: "true" -Jun 12 21:11:04.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' -Jun 12 21:11:04.561: INFO: stderr: "" -Jun 12 21:11:04.561: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" -Jun 12 21:11:04.561: INFO: validating pod update-demo-nautilus-npvcg -Jun 12 21:11:04.577: INFO: got data: { - "image": "nautilus.jpg" -} - -Jun 12 21:11:04.577: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . -Jun 12 21:11:04.577: INFO: update-demo-nautilus-npvcg is verified up and running -Jun 12 21:11:04.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-x6zfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:11:04.786: INFO: stderr: "" -Jun 12 21:11:04.786: INFO: stdout: "true" -Jun 12 21:11:04.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-x6zfg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' -Jun 12 21:11:05.008: INFO: stderr: "" -Jun 12 21:11:05.008: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" -Jun 12 21:11:05.008: INFO: validating pod update-demo-nautilus-x6zfg -Jun 12 21:11:05.049: INFO: got data: { - "image": "nautilus.jpg" -} - -Jun 12 21:11:05.049: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
-Jun 12 21:11:05.049: INFO: update-demo-nautilus-x6zfg is verified up and running -STEP: scaling down the replication controller 06/12/23 21:11:05.049 -Jun 12 21:11:05.056: INFO: scanned /root for discovery docs: -Jun 12 21:11:05.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 scale rc update-demo-nautilus --replicas=1 --timeout=5m' -Jun 12 21:11:06.333: INFO: stderr: "" -Jun 12 21:11:06.333: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" -STEP: waiting for all containers in name=update-demo pods to come up. 06/12/23 21:11:06.333 -Jun 12 21:11:06.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' -Jun 12 21:11:06.578: INFO: stderr: "" -Jun 12 21:11:06.578: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " -STEP: Replicas for name=update-demo: expected=1 actual=2 06/12/23 21:11:06.578 -Jun 12 21:11:11.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' -Jun 12 21:11:12.365: INFO: stderr: "" -Jun 12 21:11:12.365: INFO: stdout: "update-demo-nautilus-npvcg " -Jun 12 21:11:12.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:11:13.011: INFO: stderr: "" -Jun 12 21:11:13.011: INFO: stdout: "true" -Jun 12 21:11:13.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' -Jun 12 21:11:13.328: INFO: stderr: "" -Jun 12 21:11:13.328: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" -Jun 12 21:11:13.328: INFO: validating pod update-demo-nautilus-npvcg -Jun 12 21:11:13.370: INFO: got data: { - "image": "nautilus.jpg" -} - -Jun 12 21:11:13.370: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . -Jun 12 21:11:13.370: INFO: update-demo-nautilus-npvcg is verified up and running -STEP: scaling up the replication controller 06/12/23 21:11:13.37 -Jun 12 21:11:13.388: INFO: scanned /root for discovery docs: -Jun 12 21:11:13.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 scale rc update-demo-nautilus --replicas=2 --timeout=5m' -Jun 12 21:11:14.830: INFO: stderr: "" -Jun 12 21:11:14.830: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" -STEP: waiting for all containers in name=update-demo pods to come up. 
06/12/23 21:11:14.83 -Jun 12 21:11:14.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' -Jun 12 21:11:15.057: INFO: stderr: "" -Jun 12 21:11:15.057: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-qcjf4 " -Jun 12 21:11:15.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:11:15.263: INFO: stderr: "" -Jun 12 21:11:15.263: INFO: stdout: "true" -Jun 12 21:11:15.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' -Jun 12 21:11:15.449: INFO: stderr: "" -Jun 12 21:11:15.449: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" -Jun 12 21:11:15.449: INFO: validating pod update-demo-nautilus-npvcg -Jun 12 21:11:15.461: INFO: got data: { - "image": "nautilus.jpg" -} - -Jun 12 21:11:15.461: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . -Jun 12 21:11:15.461: INFO: update-demo-nautilus-npvcg is verified up and running -Jun 12 21:11:15.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-qcjf4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:11:15.675: INFO: stderr: "" -Jun 12 21:11:15.675: INFO: stdout: "true" -Jun 12 21:11:15.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-qcjf4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' -Jun 12 21:11:15.895: INFO: stderr: "" -Jun 12 21:11:15.895: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" -Jun 12 21:11:15.895: INFO: validating pod update-demo-nautilus-qcjf4 -Jun 12 21:11:15.915: INFO: got data: { - "image": "nautilus.jpg" -} - -Jun 12 21:11:15.916: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . -Jun 12 21:11:15.916: INFO: update-demo-nautilus-qcjf4 is verified up and running -STEP: using delete to clean up resources 06/12/23 21:11:15.916 -Jun 12 21:11:15.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 delete --grace-period=0 --force -f -' -Jun 12 21:11:16.112: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" -Jun 12 21:11:16.112: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" -Jun 12 21:11:16.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get rc,svc -l name=update-demo --no-headers' -Jun 12 21:11:16.411: INFO: stderr: "No resources found in kubectl-1115 namespace.\n" -Jun 12 21:11:16.411: INFO: stdout: "" -Jun 12 21:11:16.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' -Jun 12 21:11:16.636: INFO: stderr: "" -Jun 12 21:11:16.636: INFO: stdout: "" -[AfterEach] [sig-cli] Kubectl client +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 +STEP: Given a Pod with a 'name' label pod-adoption is created 07/27/23 01:52:57.27 +Jul 27 01:52:57.313: INFO: Waiting up to 5m0s for pod "pod-adoption" in namespace "replication-controller-6463" to be "running and ready" +Jul 27 01:52:57.325: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 11.515324ms +Jul 27 01:52:57.325: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:52:59.364: INFO: Pod "pod-adoption": Phase="Running", Reason="", readiness=true. Elapsed: 2.050452798s +Jul 27 01:52:59.364: INFO: The phase of Pod pod-adoption is Running (Ready = true) +Jul 27 01:52:59.364: INFO: Pod "pod-adoption" satisfied condition "running and ready" +STEP: When a replication controller with a matching selector is created 07/27/23 01:52:59.396 +STEP: Then the orphan pod is adopted 07/27/23 01:52:59.418 +[AfterEach] [sig-apps] ReplicationController test/e2e/framework/node/init/init.go:32 -Jun 12 21:11:16.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 01:53:00.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-apps] ReplicationController dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-apps] ReplicationController tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-1115" for this suite. 06/12/23 21:11:16.652 +STEP: Destroying namespace "replication-controller-6463" for this suite. 
07/27/23 01:53:00.481 ------------------------------ -• [SLOW TEST] [38.660 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Update Demo - test/e2e/kubectl/kubectl.go:324 - should scale a replication controller [Conformance] - test/e2e/kubectl/kubectl.go:352 +• [3.300 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-apps] ReplicationController set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:10:38.004 - Jun 12 21:10:38.004: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 21:10:38.009 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:10:38.091 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:10:38.11 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 01:52:57.204 + Jul 27 01:52:57.204: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replication-controller 07/27/23 01:52:57.205 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:52:57.248 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:52:57.258 + [BeforeEach] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [BeforeEach] Update Demo - test/e2e/kubectl/kubectl.go:326 - [It] should scale a replication controller [Conformance] - test/e2e/kubectl/kubectl.go:352 - STEP: creating a replication controller 06/12/23 21:10:38.174 - Jun 12 21:10:38.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 create -f -' - Jun 12 21:10:41.457: INFO: stderr: "" - Jun 12 21:10:41.457: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" - STEP: waiting for all containers in name=update-demo pods to come up. 06/12/23 21:10:41.457 - Jun 12 21:10:41.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' - Jun 12 21:10:41.685: INFO: stderr: "" - Jun 12 21:10:41.685: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " - Jun 12 21:10:41.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:10:41.876: INFO: stderr: "" - Jun 12 21:10:41.876: INFO: stdout: "" - Jun 12 21:10:41.876: INFO: update-demo-nautilus-npvcg is created but not running - Jun 12 21:10:46.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' - Jun 12 21:10:47.116: INFO: stderr: "" - Jun 12 21:10:47.116: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " - Jun 12 21:10:47.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:10:47.321: INFO: stderr: "" - Jun 12 21:10:47.321: INFO: stdout: "" - Jun 12 21:10:47.321: INFO: update-demo-nautilus-npvcg is created but not running - Jun 12 21:10:52.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' - Jun 12 21:10:52.679: INFO: stderr: "" - Jun 12 21:10:52.679: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " - Jun 12 21:10:52.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:10:52.862: INFO: stderr: "" - Jun 12 21:10:52.862: INFO: stdout: "" - Jun 12 21:10:52.862: INFO: update-demo-nautilus-npvcg is created but not running - Jun 12 21:10:57.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' - Jun 12 21:10:58.090: INFO: stderr: "" - Jun 12 21:10:58.090: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " - Jun 12 21:10:58.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:10:58.308: INFO: stderr: "" - Jun 12 21:10:58.308: INFO: stdout: "true" - Jun 12 21:10:58.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' - Jun 12 21:10:58.719: INFO: stderr: "" - Jun 12 21:10:58.719: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" - Jun 12 21:10:58.719: INFO: validating pod update-demo-nautilus-npvcg - Jun 12 21:10:58.738: INFO: got data: { - "image": "nautilus.jpg" - } - - Jun 12 21:10:58.739: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
- Jun 12 21:10:58.739: INFO: update-demo-nautilus-npvcg is verified up and running - Jun 12 21:10:58.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-x6zfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:10:58.973: INFO: stderr: "" - Jun 12 21:10:58.974: INFO: stdout: "" - Jun 12 21:10:58.974: INFO: update-demo-nautilus-x6zfg is created but not running - Jun 12 21:11:03.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' - Jun 12 21:11:04.244: INFO: stderr: "" - Jun 12 21:11:04.244: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " - Jun 12 21:11:04.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:11:04.415: INFO: stderr: "" - Jun 12 21:11:04.415: INFO: stdout: "true" - Jun 12 21:11:04.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' - Jun 12 21:11:04.561: INFO: stderr: "" - Jun 12 21:11:04.561: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" - Jun 12 21:11:04.561: INFO: validating pod update-demo-nautilus-npvcg - Jun 12 21:11:04.577: INFO: got data: { - "image": "nautilus.jpg" - } - - Jun 12 21:11:04.577: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . - Jun 12 21:11:04.577: INFO: update-demo-nautilus-npvcg is verified up and running - Jun 12 21:11:04.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-x6zfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:11:04.786: INFO: stderr: "" - Jun 12 21:11:04.786: INFO: stdout: "true" - Jun 12 21:11:04.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-x6zfg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' - Jun 12 21:11:05.008: INFO: stderr: "" - Jun 12 21:11:05.008: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" - Jun 12 21:11:05.008: INFO: validating pod update-demo-nautilus-x6zfg - Jun 12 21:11:05.049: INFO: got data: { - "image": "nautilus.jpg" - } - - Jun 12 21:11:05.049: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
- Jun 12 21:11:05.049: INFO: update-demo-nautilus-x6zfg is verified up and running - STEP: scaling down the replication controller 06/12/23 21:11:05.049 - Jun 12 21:11:05.056: INFO: scanned /root for discovery docs: - Jun 12 21:11:05.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 scale rc update-demo-nautilus --replicas=1 --timeout=5m' - Jun 12 21:11:06.333: INFO: stderr: "" - Jun 12 21:11:06.333: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" - STEP: waiting for all containers in name=update-demo pods to come up. 06/12/23 21:11:06.333 - Jun 12 21:11:06.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' - Jun 12 21:11:06.578: INFO: stderr: "" - Jun 12 21:11:06.578: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-x6zfg " - STEP: Replicas for name=update-demo: expected=1 actual=2 06/12/23 21:11:06.578 - Jun 12 21:11:11.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' - Jun 12 21:11:12.365: INFO: stderr: "" - Jun 12 21:11:12.365: INFO: stdout: "update-demo-nautilus-npvcg " - Jun 12 21:11:12.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:11:13.011: INFO: stderr: "" - Jun 12 21:11:13.011: INFO: stdout: "true" - Jun 12 21:11:13.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' - Jun 12 21:11:13.328: INFO: stderr: "" - Jun 12 21:11:13.328: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" - Jun 12 21:11:13.328: INFO: validating pod update-demo-nautilus-npvcg - Jun 12 21:11:13.370: INFO: got data: { - "image": "nautilus.jpg" - } - - Jun 12 21:11:13.370: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . - Jun 12 21:11:13.370: INFO: update-demo-nautilus-npvcg is verified up and running - STEP: scaling up the replication controller 06/12/23 21:11:13.37 - Jun 12 21:11:13.388: INFO: scanned /root for discovery docs: - Jun 12 21:11:13.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 scale rc update-demo-nautilus --replicas=2 --timeout=5m' - Jun 12 21:11:14.830: INFO: stderr: "" - Jun 12 21:11:14.830: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" - STEP: waiting for all containers in name=update-demo pods to come up. 
06/12/23 21:11:14.83 - Jun 12 21:11:14.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' - Jun 12 21:11:15.057: INFO: stderr: "" - Jun 12 21:11:15.057: INFO: stdout: "update-demo-nautilus-npvcg update-demo-nautilus-qcjf4 " - Jun 12 21:11:15.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:11:15.263: INFO: stderr: "" - Jun 12 21:11:15.263: INFO: stdout: "true" - Jun 12 21:11:15.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-npvcg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' - Jun 12 21:11:15.449: INFO: stderr: "" - Jun 12 21:11:15.449: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" - Jun 12 21:11:15.449: INFO: validating pod update-demo-nautilus-npvcg - Jun 12 21:11:15.461: INFO: got data: { - "image": "nautilus.jpg" - } - - Jun 12 21:11:15.461: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . - Jun 12 21:11:15.461: INFO: update-demo-nautilus-npvcg is verified up and running - Jun 12 21:11:15.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-qcjf4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:11:15.675: INFO: stderr: "" - Jun 12 21:11:15.675: INFO: stdout: "true" - Jun 12 21:11:15.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods update-demo-nautilus-qcjf4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' - Jun 12 21:11:15.895: INFO: stderr: "" - Jun 12 21:11:15.895: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" - Jun 12 21:11:15.895: INFO: validating pod update-demo-nautilus-qcjf4 - Jun 12 21:11:15.915: INFO: got data: { - "image": "nautilus.jpg" - } - - Jun 12 21:11:15.916: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . - Jun 12 21:11:15.916: INFO: update-demo-nautilus-qcjf4 is verified up and running - STEP: using delete to clean up resources 06/12/23 21:11:15.916 - Jun 12 21:11:15.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 delete --grace-period=0 --force -f -' - Jun 12 21:11:16.112: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" - Jun 12 21:11:16.112: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" - Jun 12 21:11:16.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get rc,svc -l name=update-demo --no-headers' - Jun 12 21:11:16.411: INFO: stderr: "No resources found in kubectl-1115 namespace.\n" - Jun 12 21:11:16.411: INFO: stdout: "" - Jun 12 21:11:16.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1115 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' - Jun 12 21:11:16.636: INFO: stderr: "" - Jun 12 21:11:16.636: INFO: stdout: "" - [AfterEach] [sig-cli] Kubectl client + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 + STEP: Given a Pod with a 'name' label pod-adoption is created 07/27/23 01:52:57.27 + Jul 27 01:52:57.313: INFO: Waiting up to 5m0s for pod "pod-adoption" in namespace "replication-controller-6463" to be "running and ready" + Jul 27 01:52:57.325: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 11.515324ms + Jul 27 01:52:57.325: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:52:59.364: INFO: Pod "pod-adoption": Phase="Running", Reason="", readiness=true. Elapsed: 2.050452798s + Jul 27 01:52:59.364: INFO: The phase of Pod pod-adoption is Running (Ready = true) + Jul 27 01:52:59.364: INFO: Pod "pod-adoption" satisfied condition "running and ready" + STEP: When a replication controller with a matching selector is created 07/27/23 01:52:59.396 + STEP: Then the orphan pod is adopted 07/27/23 01:52:59.418 + [AfterEach] [sig-apps] ReplicationController test/e2e/framework/node/init/init.go:32 - Jun 12 21:11:16.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 01:53:00.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-apps] ReplicationController dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-apps] ReplicationController tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-1115" for this suite. 06/12/23 21:11:16.652 + STEP: Destroying namespace "replication-controller-6463" for this suite. 
07/27/23 01:53:00.481 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] ConfigMap - optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:240 -[BeforeEach] [sig-storage] ConfigMap +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 +[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:11:16.672 -Jun 12 21:11:16.673: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 21:11:16.674 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:11:16.724 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:11:16.736 -[BeforeEach] [sig-storage] ConfigMap +STEP: Creating a kubernetes client 07/27/23 01:53:00.505 +Jul 27 01:53:00.505: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename cronjob 07/27/23 01:53:00.506 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:53:00.566 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:53:00.574 +[BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 -[It] optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:240 -Jun 12 21:11:16.769: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node -STEP: Creating configMap with name cm-test-opt-del-4a45b659-2579-4e5c-8920-a78e7d6a2a91 06/12/23 21:11:16.769 -STEP: Creating configMap with name cm-test-opt-upd-282c7a9b-a3d9-40f4-bf44-7ca5921df082 06/12/23 21:11:16.783 -STEP: Creating the pod 06/12/23 21:11:16.802 -Jun 12 21:11:16.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba" in namespace "configmap-3094" to be "running and ready" -Jun 12 21:11:16.848: INFO: Pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 15.881003ms -Jun 12 21:11:16.848: INFO: The phase of Pod pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:11:18.862: INFO: Pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029821233s -Jun 12 21:11:18.862: INFO: The phase of Pod pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:11:20.860: INFO: Pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027606275s -Jun 12 21:11:20.860: INFO: The phase of Pod pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:11:22.860: INFO: Pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.027340339s -Jun 12 21:11:22.860: INFO: The phase of Pod pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba is Running (Ready = true) -Jun 12 21:11:22.860: INFO: Pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba" satisfied condition "running and ready" -STEP: Deleting configmap cm-test-opt-del-4a45b659-2579-4e5c-8920-a78e7d6a2a91 06/12/23 21:11:22.971 -STEP: Updating configmap cm-test-opt-upd-282c7a9b-a3d9-40f4-bf44-7ca5921df082 06/12/23 21:11:22.985 -STEP: Creating configMap with name cm-test-opt-create-4677db30-e179-4cd6-8d34-92775dbab9c3 06/12/23 21:11:22.998 -STEP: waiting to observe update in volume 06/12/23 21:11:23.013 -[AfterEach] [sig-storage] ConfigMap +[It] should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 +STEP: Creating a cronjob 07/27/23 01:53:00.584 +W0727 01:53:00.661749 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Ensuring more than one job is running at a time 07/27/23 01:53:00.661 +STEP: Ensuring at least two running jobs exists by listing jobs explicitly 07/27/23 01:55:00.674 +STEP: Removing cronjob 07/27/23 01:55:00.705 +[AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 -Jun 12 21:12:36.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] ConfigMap +Jul 27 01:55:00.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-3094" for this suite. 06/12/23 21:12:36.384 +STEP: Destroying namespace "cronjob-5692" for this suite. 
07/27/23 01:55:00.762 ------------------------------ -• [SLOW TEST] [79.728 seconds] -[sig-storage] ConfigMap -test/e2e/common/storage/framework.go:23 - optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:240 +• [SLOW TEST] [120.280 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] ConfigMap + [BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:11:16.672 - Jun 12 21:11:16.673: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 21:11:16.674 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:11:16.724 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:11:16.736 - [BeforeEach] [sig-storage] ConfigMap + STEP: Creating a kubernetes client 07/27/23 01:53:00.505 + Jul 27 01:53:00.505: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename cronjob 07/27/23 01:53:00.506 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:53:00.566 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:53:00.574 + [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 - [It] optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:240 - Jun 12 21:11:16.769: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node - STEP: Creating configMap with name cm-test-opt-del-4a45b659-2579-4e5c-8920-a78e7d6a2a91 06/12/23 21:11:16.769 - STEP: Creating configMap with name cm-test-opt-upd-282c7a9b-a3d9-40f4-bf44-7ca5921df082 06/12/23 21:11:16.783 - STEP: Creating the pod 06/12/23 21:11:16.802 - Jun 12 21:11:16.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba" in namespace "configmap-3094" to be "running and ready" - Jun 12 21:11:16.848: INFO: Pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 15.881003ms - Jun 12 21:11:16.848: INFO: The phase of Pod pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:11:18.862: INFO: Pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029821233s - Jun 12 21:11:18.862: INFO: The phase of Pod pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:11:20.860: INFO: Pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027606275s - Jun 12 21:11:20.860: INFO: The phase of Pod pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:11:22.860: INFO: Pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.027340339s - Jun 12 21:11:22.860: INFO: The phase of Pod pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba is Running (Ready = true) - Jun 12 21:11:22.860: INFO: Pod "pod-configmaps-9b79f263-8d4b-49ce-b647-1bcbc0e2c0ba" satisfied condition "running and ready" - STEP: Deleting configmap cm-test-opt-del-4a45b659-2579-4e5c-8920-a78e7d6a2a91 06/12/23 21:11:22.971 - STEP: Updating configmap cm-test-opt-upd-282c7a9b-a3d9-40f4-bf44-7ca5921df082 06/12/23 21:11:22.985 - STEP: Creating configMap with name cm-test-opt-create-4677db30-e179-4cd6-8d34-92775dbab9c3 06/12/23 21:11:22.998 - STEP: waiting to observe update in volume 06/12/23 21:11:23.013 - [AfterEach] [sig-storage] ConfigMap + [It] should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 + STEP: Creating a cronjob 07/27/23 01:53:00.584 + W0727 01:53:00.661749 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Ensuring more than one job is running at a time 07/27/23 01:53:00.661 + STEP: Ensuring at least two running jobs exists by listing jobs explicitly 07/27/23 01:55:00.674 + STEP: Removing cronjob 07/27/23 01:55:00.705 + [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 - Jun 12 21:12:36.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] ConfigMap + Jul 27 01:55:00.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-3094" for this suite. 06/12/23 21:12:36.384 + STEP: Destroying namespace "cronjob-5692" for this suite. 
07/27/23 01:55:00.762 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSS ------------------------------ -[sig-api-machinery] Namespaces [Serial] - should apply changes to a namespace status [Conformance] - test/e2e/apimachinery/namespace.go:299 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 +[BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:12:36.407 -Jun 12 21:12:36.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename namespaces 06/12/23 21:12:36.41 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:12:36.458 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:12:36.468 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +STEP: Creating a kubernetes client 07/27/23 01:55:00.788 +Jul 27 01:55:00.788: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 01:55:00.788 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:00.918 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:00.978 +[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 -[It] should apply changes to a namespace status [Conformance] - test/e2e/apimachinery/namespace.go:299 -STEP: Read namespace status 06/12/23 21:12:36.481 -Jun 12 21:12:36.493: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} -STEP: Patch namespace status 06/12/23 21:12:36.493 -Jun 12 21:12:36.517: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} -STEP: Update namespace status 06/12/23 21:12:36.519 -Jun 12 21:12:36.551: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} -[AfterEach] [sig-api-machinery] Namespaces [Serial] +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 +STEP: Creating a pod to test downward API volume plugin 07/27/23 01:55:01 +Jul 27 01:55:01.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73" in namespace "downward-api-5900" to be "Succeeded or Failed" +Jul 27 01:55:01.121: INFO: Pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73": Phase="Pending", Reason="", readiness=false. Elapsed: 52.499119ms +Jul 27 01:55:03.131: INFO: Pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73": Phase="Running", Reason="", readiness=true. Elapsed: 2.062235027s +Jul 27 01:55:05.133: INFO: Pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.064280854s +Jul 27 01:55:07.137: INFO: Pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06822563s +STEP: Saw pod success 07/27/23 01:55:07.137 +Jul 27 01:55:07.137: INFO: Pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73" satisfied condition "Succeeded or Failed" +Jul 27 01:55:07.146: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73 container client-container: +STEP: delete the pod 07/27/23 01:55:07.194 +Jul 27 01:55:07.214: INFO: Waiting for pod downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73 to disappear +Jul 27 01:55:07.223: INFO: Pod downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73 no longer exists +[AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 21:12:36.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +Jul 27 01:55:07.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "namespaces-9567" for this suite. 06/12/23 21:12:36.568 +STEP: Destroying namespace "downward-api-5900" for this suite. 07/27/23 01:55:07.238 ------------------------------ -• [0.176 seconds] -[sig-api-machinery] Namespaces [Serial] -test/e2e/apimachinery/framework.go:23 - should apply changes to a namespace status [Conformance] - test/e2e/apimachinery/namespace.go:299 +• [SLOW TEST] [6.474 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:12:36.407 - Jun 12 21:12:36.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename namespaces 06/12/23 21:12:36.41 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:12:36.458 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:12:36.468 - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + STEP: Creating a kubernetes client 07/27/23 01:55:00.788 + Jul 27 01:55:00.788: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 01:55:00.788 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:00.918 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:00.978 + [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 - [It] should apply changes to a namespace status [Conformance] - test/e2e/apimachinery/namespace.go:299 - STEP: Read namespace status 06/12/23 21:12:36.481 - Jun 12 21:12:36.493: INFO: Status: v1.NamespaceStatus{Phase:"Active", 
Conditions:[]v1.NamespaceCondition(nil)} - STEP: Patch namespace status 06/12/23 21:12:36.493 - Jun 12 21:12:36.517: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} - STEP: Update namespace status 06/12/23 21:12:36.519 - Jun 12 21:12:36.551: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} - [AfterEach] [sig-api-machinery] Namespaces [Serial] + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 + STEP: Creating a pod to test downward API volume plugin 07/27/23 01:55:01 + Jul 27 01:55:01.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73" in namespace "downward-api-5900" to be "Succeeded or Failed" + Jul 27 01:55:01.121: INFO: Pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73": Phase="Pending", Reason="", readiness=false. Elapsed: 52.499119ms + Jul 27 01:55:03.131: INFO: Pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73": Phase="Running", Reason="", readiness=true. Elapsed: 2.062235027s + Jul 27 01:55:05.133: INFO: Pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73": Phase="Running", Reason="", readiness=false. Elapsed: 4.064280854s + Jul 27 01:55:07.137: INFO: Pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06822563s + STEP: Saw pod success 07/27/23 01:55:07.137 + Jul 27 01:55:07.137: INFO: Pod "downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73" satisfied condition "Succeeded or Failed" + Jul 27 01:55:07.146: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73 container client-container: + STEP: delete the pod 07/27/23 01:55:07.194 + Jul 27 01:55:07.214: INFO: Waiting for pod downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73 to disappear + Jul 27 01:55:07.223: INFO: Pod downwardapi-volume-efa0b7a5-3718-4eb9-ab05-1201e5238e73 no longer exists + [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 - Jun 12 21:12:36.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + Jul 27 01:55:07.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 - STEP: Destroying namespace "namespaces-9567" for this suite. 06/12/23 21:12:36.568 + STEP: Destroying namespace "downward-api-5900" for this suite. 
07/27/23 01:55:07.238 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSS ------------------------------ -[sig-node] Variable Expansion - should allow substituting values in a container's command [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:73 -[BeforeEach] [sig-node] Variable Expansion +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:12:36.583 -Jun 12 21:12:36.583: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename var-expansion 06/12/23 21:12:36.584 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:12:36.636 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:12:36.651 -[BeforeEach] [sig-node] Variable Expansion +STEP: Creating a kubernetes client 07/27/23 01:55:07.263 +Jul 27 01:55:07.263: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 01:55:07.264 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:07.306 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:07.316 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[It] should allow substituting values in a container's command [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:73 -STEP: Creating a pod to test substitution in container's command 06/12/23 21:12:36.66 -Jun 12 21:12:36.685: INFO: Waiting up to 5m0s for pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85" in namespace "var-expansion-4533" to be "Succeeded or Failed" -Jun 12 21:12:36.700: INFO: Pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85": Phase="Pending", Reason="", readiness=false. Elapsed: 14.840976ms -Jun 12 21:12:38.725: INFO: Pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040548962s -Jun 12 21:12:40.712: INFO: Pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027284499s -Jun 12 21:12:42.723: INFO: Pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.038702485s -STEP: Saw pod success 06/12/23 21:12:42.724 -Jun 12 21:12:42.727: INFO: Pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85" satisfied condition "Succeeded or Failed" -Jun 12 21:12:42.740: INFO: Trying to get logs from node 10.138.75.70 pod var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85 container dapi-container: -STEP: delete the pod 06/12/23 21:12:42.799 -Jun 12 21:12:42.827: INFO: Waiting for pod var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85 to disappear -Jun 12 21:12:42.835: INFO: Pod var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85 no longer exists -[AfterEach] [sig-node] Variable Expansion +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 +STEP: Creating a pod to test emptydir 0666 on node default medium 07/27/23 01:55:07.326 +Jul 27 01:55:07.356: INFO: Waiting up to 5m0s for pod "pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93" in namespace "emptydir-6242" to be "Succeeded or Failed" +Jul 27 01:55:07.364: INFO: Pod "pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.679133ms +Jul 27 01:55:09.373: INFO: Pod "pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017418182s +Jul 27 01:55:11.375: INFO: Pod "pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019127719s +STEP: Saw pod success 07/27/23 01:55:11.375 +Jul 27 01:55:11.375: INFO: Pod "pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93" satisfied condition "Succeeded or Failed" +Jul 27 01:55:11.383: INFO: Trying to get logs from node 10.245.128.19 pod pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93 container test-container: +STEP: delete the pod 07/27/23 01:55:11.401 +Jul 27 01:55:11.447: INFO: Waiting for pod pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93 to disappear +Jul 27 01:55:11.454: INFO: Pod pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:12:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Variable Expansion +Jul 27 01:55:11.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "var-expansion-4533" for this suite. 06/12/23 21:12:42.851 +STEP: Destroying namespace "emptydir-6242" for this suite. 
07/27/23 01:55:11.466 ------------------------------ -• [SLOW TEST] [6.281 seconds] -[sig-node] Variable Expansion -test/e2e/common/node/framework.go:23 - should allow substituting values in a container's command [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:73 +• [4.236 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Variable Expansion + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:12:36.583 - Jun 12 21:12:36.583: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename var-expansion 06/12/23 21:12:36.584 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:12:36.636 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:12:36.651 - [BeforeEach] [sig-node] Variable Expansion + STEP: Creating a kubernetes client 07/27/23 01:55:07.263 + Jul 27 01:55:07.263: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 01:55:07.264 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:07.306 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:07.316 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [It] should allow substituting values in a container's command [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:73 - STEP: Creating a pod to test substitution in container's command 06/12/23 21:12:36.66 - Jun 12 21:12:36.685: INFO: Waiting up to 5m0s for pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85" in namespace "var-expansion-4533" to be "Succeeded or Failed" - Jun 12 21:12:36.700: INFO: Pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85": Phase="Pending", Reason="", readiness=false. Elapsed: 14.840976ms - Jun 12 21:12:38.725: INFO: Pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040548962s - Jun 12 21:12:40.712: INFO: Pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027284499s - Jun 12 21:12:42.723: INFO: Pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.038702485s - STEP: Saw pod success 06/12/23 21:12:42.724 - Jun 12 21:12:42.727: INFO: Pod "var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85" satisfied condition "Succeeded or Failed" - Jun 12 21:12:42.740: INFO: Trying to get logs from node 10.138.75.70 pod var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85 container dapi-container: - STEP: delete the pod 06/12/23 21:12:42.799 - Jun 12 21:12:42.827: INFO: Waiting for pod var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85 to disappear - Jun 12 21:12:42.835: INFO: Pod var-expansion-6f6be1a1-6a76-47d3-a434-2087dd559e85 no longer exists - [AfterEach] [sig-node] Variable Expansion + [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 + STEP: Creating a pod to test emptydir 0666 on node default medium 07/27/23 01:55:07.326 + Jul 27 01:55:07.356: INFO: Waiting up to 5m0s for pod "pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93" in namespace "emptydir-6242" to be "Succeeded or Failed" + Jul 27 01:55:07.364: INFO: Pod "pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.679133ms + Jul 27 01:55:09.373: INFO: Pod "pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017418182s + Jul 27 01:55:11.375: INFO: Pod "pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019127719s + STEP: Saw pod success 07/27/23 01:55:11.375 + Jul 27 01:55:11.375: INFO: Pod "pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93" satisfied condition "Succeeded or Failed" + Jul 27 01:55:11.383: INFO: Trying to get logs from node 10.245.128.19 pod pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93 container test-container: + STEP: delete the pod 07/27/23 01:55:11.401 + Jul 27 01:55:11.447: INFO: Waiting for pod pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93 to disappear + Jul 27 01:55:11.454: INFO: Pod pod-d4677b0b-cda7-4a2e-aa9f-ac264c82ff93 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:12:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Variable Expansion + Jul 27 01:55:11.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "var-expansion-4533" for this suite. 06/12/23 21:12:42.851 + STEP: Destroying namespace "emptydir-6242" for this suite. 
07/27/23 01:55:11.466 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] ReplicaSet - Replicaset should have a working scale subresource [Conformance] - test/e2e/apps/replica_set.go:143 -[BeforeEach] [sig-apps] ReplicaSet +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 +[BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:12:42.867 -Jun 12 21:12:42.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replicaset 06/12/23 21:12:42.87 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:12:42.916 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:12:42.932 -[BeforeEach] [sig-apps] ReplicaSet +STEP: Creating a kubernetes client 07/27/23 01:55:11.499 +Jul 27 01:55:11.499: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 01:55:11.5 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:11.548 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:11.558 +[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 -[It] Replicaset should have a working scale subresource [Conformance] - test/e2e/apps/replica_set.go:143 -STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota 06/12/23 21:12:42.952 -Jun 12 21:12:42.991: INFO: Pod name sample-pod: Found 0 pods out of 1 -Jun 12 21:12:48.001: INFO: Pod name sample-pod: Found 1 pods out of 1 -STEP: ensuring each pod is running 06/12/23 21:12:48.001 -STEP: getting scale subresource 06/12/23 21:12:48.001 -STEP: updating a scale subresource 06/12/23 21:12:48.01 -STEP: verifying the replicaset Spec.Replicas was modified 06/12/23 21:12:48.029 -STEP: Patch a scale subresource 06/12/23 21:12:48.042 -[AfterEach] [sig-apps] ReplicaSet +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 +STEP: Creating a pod to test downward API volume plugin 07/27/23 01:55:11.568 +Jul 27 01:55:11.598: INFO: Waiting up to 5m0s for pod "downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc" in namespace "projected-6084" to be "Succeeded or Failed" +Jul 27 01:55:11.608: INFO: Pod "downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.742351ms +Jul 27 01:55:13.618: INFO: Pod "downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019841933s +Jul 27 01:55:15.617: INFO: Pod "downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019008329s +STEP: Saw pod success 07/27/23 01:55:15.617 +Jul 27 01:55:15.617: INFO: Pod "downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc" satisfied condition "Succeeded or Failed" +Jul 27 01:55:15.625: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc container client-container: +STEP: delete the pod 07/27/23 01:55:15.647 +Jul 27 01:55:15.673: INFO: Waiting for pod downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc to disappear +Jul 27 01:55:15.680: INFO: Pod downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc no longer exists +[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 21:12:48.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicaSet +Jul 27 01:55:15.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "replicaset-3038" for this suite. 06/12/23 21:12:48.125 +STEP: Destroying namespace "projected-6084" for this suite. 07/27/23 01:55:15.693 ------------------------------ -• [SLOW TEST] [5.288 seconds] -[sig-apps] ReplicaSet -test/e2e/apps/framework.go:23 - Replicaset should have a working scale subresource [Conformance] - test/e2e/apps/replica_set.go:143 +• [4.215 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicaSet + [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:12:42.867 - Jun 12 21:12:42.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replicaset 06/12/23 21:12:42.87 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:12:42.916 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:12:42.932 - [BeforeEach] [sig-apps] ReplicaSet + STEP: Creating a kubernetes client 07/27/23 01:55:11.499 + Jul 27 01:55:11.499: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 01:55:11.5 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:11.548 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:11.558 + [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 - [It] Replicaset should have a working scale subresource [Conformance] - test/e2e/apps/replica_set.go:143 - STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota 06/12/23 21:12:42.952 - Jun 12 21:12:42.991: INFO: Pod name sample-pod: Found 0 pods out of 1 - Jun 12 21:12:48.001: INFO: Pod name sample-pod: Found 1 pods out of 1 - STEP: ensuring each pod is running 06/12/23 21:12:48.001 - STEP: getting scale subresource 06/12/23 21:12:48.001 - STEP: updating a scale subresource 06/12/23 21:12:48.01 - STEP: verifying 
the replicaset Spec.Replicas was modified 06/12/23 21:12:48.029 - STEP: Patch a scale subresource 06/12/23 21:12:48.042 - [AfterEach] [sig-apps] ReplicaSet + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 + STEP: Creating a pod to test downward API volume plugin 07/27/23 01:55:11.568 + Jul 27 01:55:11.598: INFO: Waiting up to 5m0s for pod "downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc" in namespace "projected-6084" to be "Succeeded or Failed" + Jul 27 01:55:11.608: INFO: Pod "downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.742351ms + Jul 27 01:55:13.618: INFO: Pod "downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019841933s + Jul 27 01:55:15.617: INFO: Pod "downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019008329s + STEP: Saw pod success 07/27/23 01:55:15.617 + Jul 27 01:55:15.617: INFO: Pod "downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc" satisfied condition "Succeeded or Failed" + Jul 27 01:55:15.625: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc container client-container: + STEP: delete the pod 07/27/23 01:55:15.647 + Jul 27 01:55:15.673: INFO: Waiting for pod downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc to disappear + Jul 27 01:55:15.680: INFO: Pod downwardapi-volume-747972d5-7dbd-41a2-966f-d2c61f66afcc no longer exists + [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 21:12:48.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicaSet + Jul 27 01:55:15.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "replicaset-3038" for this suite. 06/12/23 21:12:48.125 + STEP: Destroying namespace "projected-6084" for this suite. 
07/27/23 01:55:15.693 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSS ------------------------------ -[sig-network] Services - should delete a collection of services [Conformance] - test/e2e/network/service.go:3654 -[BeforeEach] [sig-network] Services +[sig-instrumentation] Events + should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 +[BeforeEach] [sig-instrumentation] Events set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:12:48.158 -Jun 12 21:12:48.158: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:12:48.173 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:12:48.218 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:12:48.229 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 01:55:15.715 +Jul 27 01:55:15.715: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename events 07/27/23 01:55:15.716 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:15.767 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:15.775 +[BeforeEach] [sig-instrumentation] Events test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should delete a collection of services [Conformance] - test/e2e/network/service.go:3654 -STEP: creating a collection of services 06/12/23 21:12:48.244 -Jun 12 21:12:48.244: INFO: Creating e2e-svc-a-jn2hj -Jun 12 21:12:48.284: INFO: Creating e2e-svc-b-twv7d -Jun 12 21:12:48.330: INFO: Creating e2e-svc-c-t5cwp -STEP: deleting service collection 06/12/23 21:12:48.386 -Jun 12 21:12:48.523: INFO: Collection of services has been deleted -[AfterEach] [sig-network] Services +[It] should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 +STEP: Create set of events 07/27/23 01:55:15.784 +Jul 27 01:55:15.795: INFO: created test-event-1 +Jul 27 01:55:15.808: INFO: created test-event-2 +Jul 27 01:55:15.823: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace 07/27/23 01:55:15.823 +STEP: delete collection of events 07/27/23 01:55:15.831 +Jul 27 01:55:15.831: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity 07/27/23 01:55:15.878 +Jul 27 01:55:15.878: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events test/e2e/framework/node/init/init.go:32 -Jun 12 21:12:48.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 01:55:15.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-instrumentation] Events dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-instrumentation] Events tear down framework | framework.go:193 -STEP: Destroying namespace "services-2880" for this suite. 06/12/23 21:12:48.541 +STEP: Destroying namespace "events-628" for this suite. 
07/27/23 01:55:15.903 ------------------------------ -• [0.399 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should delete a collection of services [Conformance] - test/e2e/network/service.go:3654 +• [0.217 seconds] +[sig-instrumentation] Events +test/e2e/instrumentation/common/framework.go:23 + should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-instrumentation] Events set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:12:48.158 - Jun 12 21:12:48.158: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:12:48.173 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:12:48.218 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:12:48.229 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 01:55:15.715 + Jul 27 01:55:15.715: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename events 07/27/23 01:55:15.716 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:15.767 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:15.775 + [BeforeEach] [sig-instrumentation] Events test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should delete a collection of services [Conformance] - test/e2e/network/service.go:3654 - STEP: creating a collection of services 06/12/23 21:12:48.244 - Jun 12 21:12:48.244: INFO: Creating e2e-svc-a-jn2hj - Jun 12 21:12:48.284: INFO: Creating e2e-svc-b-twv7d - Jun 12 21:12:48.330: INFO: Creating e2e-svc-c-t5cwp - STEP: deleting service collection 06/12/23 21:12:48.386 - Jun 12 21:12:48.523: INFO: Collection of services has been deleted - [AfterEach] [sig-network] Services + [It] should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 + STEP: Create set of events 07/27/23 01:55:15.784 + Jul 27 01:55:15.795: INFO: created test-event-1 + Jul 27 01:55:15.808: INFO: created test-event-2 + Jul 27 01:55:15.823: INFO: created test-event-3 + STEP: get a list of Events with a label in the current namespace 07/27/23 01:55:15.823 + STEP: delete collection of events 07/27/23 01:55:15.831 + Jul 27 01:55:15.831: INFO: requesting DeleteCollection of events + STEP: check that the list of events matches the requested quantity 07/27/23 01:55:15.878 + Jul 27 01:55:15.878: INFO: requesting list of events to confirm quantity + [AfterEach] [sig-instrumentation] Events test/e2e/framework/node/init/init.go:32 - Jun 12 21:12:48.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 01:55:15.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-instrumentation] Events dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-instrumentation] Events tear down framework | framework.go:193 - STEP: Destroying namespace "services-2880" for this suite. 06/12/23 21:12:48.541 + STEP: Destroying namespace "events-628" for this suite. 
07/27/23 01:55:15.903 << End Captured GinkgoWriter Output ------------------------------ -[sig-network] Services - should have session affinity work for NodePort service [LinuxOnly] [Conformance] - test/e2e/network/service.go:2228 -[BeforeEach] [sig-network] Services +SSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 +[BeforeEach] [sig-network] Networking set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:12:48.559 -Jun 12 21:12:48.559: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:12:48.562 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:12:48.623 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:12:48.653 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 01:55:15.932 +Jul 27 01:55:15.932: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pod-network-test 07/27/23 01:55:15.933 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:15.987 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:15.998 +[BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] - test/e2e/network/service.go:2228 -STEP: creating service in namespace services-4220 06/12/23 21:12:48.671 -STEP: creating service affinity-nodeport in namespace services-4220 06/12/23 21:12:48.671 -STEP: creating replication controller affinity-nodeport in namespace services-4220 06/12/23 21:12:48.734 -I0612 21:12:48.757207 23 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-4220, replica count: 3 -I0612 21:12:51.810776 23 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:12:54.817765 23 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -Jun 12 21:12:54.853: INFO: Creating new exec pod -Jun 12 21:12:54.872: INFO: Waiting up to 5m0s for pod "execpod-affinityhzdf4" in namespace "services-4220" to be "running" -Jun 12 21:12:54.884: INFO: Pod "execpod-affinityhzdf4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.195145ms -Jun 12 21:12:56.895: INFO: Pod "execpod-affinityhzdf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023604561s -Jun 12 21:12:58.896: INFO: Pod "execpod-affinityhzdf4": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.023632274s -Jun 12 21:12:58.896: INFO: Pod "execpod-affinityhzdf4" satisfied condition "running" -Jun 12 21:12:59.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-4220 exec execpod-affinityhzdf4 -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport 80' -Jun 12 21:13:00.332: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" -Jun 12 21:13:00.332: INFO: stdout: "" -Jun 12 21:13:00.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-4220 exec execpod-affinityhzdf4 -- /bin/sh -x -c nc -v -z -w 2 172.21.121.214 80' -Jun 12 21:13:00.968: INFO: stderr: "+ nc -v -z -w 2 172.21.121.214 80\nConnection to 172.21.121.214 80 port [tcp/http] succeeded!\n" -Jun 12 21:13:00.969: INFO: stdout: "" -Jun 12 21:13:00.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-4220 exec execpod-affinityhzdf4 -- /bin/sh -x -c nc -v -z -w 2 10.138.75.112 30429' -Jun 12 21:13:01.440: INFO: stderr: "+ nc -v -z -w 2 10.138.75.112 30429\nConnection to 10.138.75.112 30429 port [tcp/*] succeeded!\n" -Jun 12 21:13:01.440: INFO: stdout: "" -Jun 12 21:13:01.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-4220 exec execpod-affinityhzdf4 -- /bin/sh -x -c nc -v -z -w 2 10.138.75.70 30429' -Jun 12 21:13:01.864: INFO: stderr: "+ nc -v -z -w 2 10.138.75.70 30429\nConnection to 10.138.75.70 30429 port [tcp/*] succeeded!\n" -Jun 12 21:13:01.864: INFO: stdout: "" -Jun 12 21:13:01.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-4220 exec execpod-affinityhzdf4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.138.75.112:30429/ ; done' -Jun 12 21:13:02.587: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n" -Jun 12 21:13:02.587: INFO: stdout: "\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl" -Jun 12 21:13:02.587: INFO: Received response from 
host: affinity-nodeport-5mrhl -Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl -Jun 12 21:13:02.588: INFO: Cleaning up the exec pod -STEP: deleting ReplicationController affinity-nodeport in namespace services-4220, will wait for the garbage collector to delete the pods 06/12/23 21:13:02.615 -Jun 12 21:13:02.693: INFO: Deleting ReplicationController affinity-nodeport took: 16.470444ms -Jun 12 21:13:02.794: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.890326ms -[AfterEach] [sig-network] Services +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 +STEP: Performing setup for networking test in namespace pod-network-test-8795 07/27/23 01:55:16.006 +STEP: creating a selector 07/27/23 01:55:16.006 +STEP: Creating the service pods in kubernetes 07/27/23 01:55:16.006 +Jul 27 01:55:16.007: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Jul 27 01:55:16.101: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-8795" to be "running and ready" +Jul 27 01:55:16.114: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.037616ms +Jul 27 01:55:16.114: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:55:18.123: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.022314548s +Jul 27 01:55:18.123: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:55:20.124: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.023162639s +Jul 27 01:55:20.124: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:55:22.132: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.03100605s +Jul 27 01:55:22.132: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:55:24.125: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.024177577s +Jul 27 01:55:24.125: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:55:26.123: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.022272473s +Jul 27 01:55:26.123: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:55:28.123: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.02240625s +Jul 27 01:55:28.123: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:55:30.142: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.040481992s +Jul 27 01:55:30.142: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:55:32.123: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.021930789s +Jul 27 01:55:32.123: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:55:34.123: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.02210316s +Jul 27 01:55:34.123: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:55:36.157: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.055706334s +Jul 27 01:55:36.157: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 01:55:38.124: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.023005103s +Jul 27 01:55:38.124: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Jul 27 01:55:38.124: INFO: Pod "netserver-0" satisfied condition "running and ready" +Jul 27 01:55:38.132: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-8795" to be "running and ready" +Jul 27 01:55:38.140: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 7.789341ms +Jul 27 01:55:38.140: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Jul 27 01:55:38.140: INFO: Pod "netserver-1" satisfied condition "running and ready" +Jul 27 01:55:38.149: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-8795" to be "running and ready" +Jul 27 01:55:38.164: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 14.868973ms +Jul 27 01:55:38.164: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Jul 27 01:55:38.164: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 07/27/23 01:55:38.173 +Jul 27 01:55:38.187: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-8795" to be "running" +Jul 27 01:55:38.196: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.837548ms +Jul 27 01:55:40.206: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.01868224s +Jul 27 01:55:40.206: INFO: Pod "test-container-pod" satisfied condition "running" +Jul 27 01:55:40.214: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Jul 27 01:55:40.214: INFO: Breadth first check of 172.17.218.50 on host 10.245.128.17... 
+Jul 27 01:55:40.222: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.4:9080/dial?request=hostname&protocol=http&host=172.17.218.50&port=8083&tries=1'] Namespace:pod-network-test-8795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 01:55:40.222: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:55:40.222: INFO: ExecWithOptions: Clientset creation +Jul 27 01:55:40.223: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8795/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.4%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.17.218.50%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jul 27 01:55:40.372: INFO: Waiting for responses: map[] +Jul 27 01:55:40.372: INFO: reached 172.17.218.50 after 0/1 tries +Jul 27 01:55:40.372: INFO: Breadth first check of 172.17.230.142 on host 10.245.128.18... +Jul 27 01:55:40.381: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.4:9080/dial?request=hostname&protocol=http&host=172.17.230.142&port=8083&tries=1'] Namespace:pod-network-test-8795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 01:55:40.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:55:40.382: INFO: ExecWithOptions: Clientset creation +Jul 27 01:55:40.382: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8795/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.4%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.17.230.142%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jul 27 01:55:40.495: INFO: Waiting for responses: map[] +Jul 27 01:55:40.495: INFO: reached 172.17.230.142 after 0/1 tries +Jul 27 01:55:40.495: INFO: Breadth first check of 172.17.225.16 on host 10.245.128.19... +Jul 27 01:55:40.503: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.4:9080/dial?request=hostname&protocol=http&host=172.17.225.16&port=8083&tries=1'] Namespace:pod-network-test-8795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 01:55:40.503: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 01:55:40.503: INFO: ExecWithOptions: Clientset creation +Jul 27 01:55:40.504: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8795/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.4%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.17.225.16%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jul 27 01:55:40.659: INFO: Waiting for responses: map[] +Jul 27 01:55:40.660: INFO: reached 172.17.225.16 after 0/1 tries +Jul 27 01:55:40.660: INFO: Going to retry 0 out of 3 pods.... 
+[AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 -Jun 12 21:13:06.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 01:55:40.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 -STEP: Destroying namespace "services-4220" for this suite. 06/12/23 21:13:06.408 +STEP: Destroying namespace "pod-network-test-8795" for this suite. 07/27/23 01:55:40.672 ------------------------------ -• [SLOW TEST] [17.863 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should have session affinity work for NodePort service [LinuxOnly] [Conformance] - test/e2e/network/service.go:2228 +• [SLOW TEST] [24.767 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-network] Networking set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:12:48.559 - Jun 12 21:12:48.559: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:12:48.562 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:12:48.623 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:12:48.653 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 01:55:15.932 + Jul 27 01:55:15.932: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pod-network-test 07/27/23 01:55:15.933 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:15.987 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:15.998 + [BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] - test/e2e/network/service.go:2228 - STEP: creating service in namespace services-4220 06/12/23 21:12:48.671 - STEP: creating service affinity-nodeport in namespace services-4220 06/12/23 21:12:48.671 - STEP: creating replication controller affinity-nodeport in namespace services-4220 06/12/23 21:12:48.734 - I0612 21:12:48.757207 23 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-4220, replica count: 3 - I0612 21:12:51.810776 23 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:12:54.817765 23 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - Jun 12 21:12:54.853: INFO: Creating new exec pod - Jun 12 21:12:54.872: INFO: Waiting up to 5m0s for pod "execpod-affinityhzdf4" in namespace 
"services-4220" to be "running" - Jun 12 21:12:54.884: INFO: Pod "execpod-affinityhzdf4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.195145ms - Jun 12 21:12:56.895: INFO: Pod "execpod-affinityhzdf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023604561s - Jun 12 21:12:58.896: INFO: Pod "execpod-affinityhzdf4": Phase="Running", Reason="", readiness=true. Elapsed: 4.023632274s - Jun 12 21:12:58.896: INFO: Pod "execpod-affinityhzdf4" satisfied condition "running" - Jun 12 21:12:59.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-4220 exec execpod-affinityhzdf4 -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport 80' - Jun 12 21:13:00.332: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" - Jun 12 21:13:00.332: INFO: stdout: "" - Jun 12 21:13:00.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-4220 exec execpod-affinityhzdf4 -- /bin/sh -x -c nc -v -z -w 2 172.21.121.214 80' - Jun 12 21:13:00.968: INFO: stderr: "+ nc -v -z -w 2 172.21.121.214 80\nConnection to 172.21.121.214 80 port [tcp/http] succeeded!\n" - Jun 12 21:13:00.969: INFO: stdout: "" - Jun 12 21:13:00.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-4220 exec execpod-affinityhzdf4 -- /bin/sh -x -c nc -v -z -w 2 10.138.75.112 30429' - Jun 12 21:13:01.440: INFO: stderr: "+ nc -v -z -w 2 10.138.75.112 30429\nConnection to 10.138.75.112 30429 port [tcp/*] succeeded!\n" - Jun 12 21:13:01.440: INFO: stdout: "" - Jun 12 21:13:01.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-4220 exec execpod-affinityhzdf4 -- /bin/sh -x -c nc -v -z -w 2 10.138.75.70 30429' - Jun 12 21:13:01.864: INFO: stderr: "+ nc -v -z -w 2 10.138.75.70 30429\nConnection to 10.138.75.70 30429 port [tcp/*] succeeded!\n" - Jun 12 21:13:01.864: INFO: stdout: "" - Jun 12 21:13:01.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-4220 exec execpod-affinityhzdf4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.138.75.112:30429/ ; done' - Jun 12 21:13:02.587: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30429/\n" - Jun 12 21:13:02.587: INFO: stdout: 
"\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl\naffinity-nodeport-5mrhl" - Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.587: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.588: INFO: Received response from host: affinity-nodeport-5mrhl - Jun 12 21:13:02.588: INFO: Cleaning up the exec pod - STEP: deleting ReplicationController affinity-nodeport in namespace services-4220, will wait for the garbage collector to delete the pods 06/12/23 21:13:02.615 - Jun 12 21:13:02.693: INFO: Deleting ReplicationController affinity-nodeport took: 16.470444ms - Jun 12 21:13:02.794: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.890326ms - [AfterEach] [sig-network] Services + [It] should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 + STEP: Performing setup for networking test in namespace pod-network-test-8795 07/27/23 01:55:16.006 + STEP: creating a selector 07/27/23 01:55:16.006 + STEP: Creating the service pods in kubernetes 07/27/23 01:55:16.006 + Jul 27 01:55:16.007: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Jul 27 01:55:16.101: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-8795" to be "running and ready" + Jul 27 01:55:16.114: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.037616ms + Jul 27 01:55:16.114: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:55:18.123: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.022314548s + Jul 27 01:55:18.123: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:55:20.124: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.023162639s + Jul 27 01:55:20.124: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:55:22.132: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.03100605s + Jul 27 01:55:22.132: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:55:24.125: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.024177577s + Jul 27 01:55:24.125: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:55:26.123: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.022272473s + Jul 27 01:55:26.123: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:55:28.123: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.02240625s + Jul 27 01:55:28.123: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:55:30.142: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.040481992s + Jul 27 01:55:30.142: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:55:32.123: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.021930789s + Jul 27 01:55:32.123: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:55:34.123: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.02210316s + Jul 27 01:55:34.123: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:55:36.157: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.055706334s + Jul 27 01:55:36.157: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 01:55:38.124: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.023005103s + Jul 27 01:55:38.124: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Jul 27 01:55:38.124: INFO: Pod "netserver-0" satisfied condition "running and ready" + Jul 27 01:55:38.132: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-8795" to be "running and ready" + Jul 27 01:55:38.140: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 7.789341ms + Jul 27 01:55:38.140: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Jul 27 01:55:38.140: INFO: Pod "netserver-1" satisfied condition "running and ready" + Jul 27 01:55:38.149: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-8795" to be "running and ready" + Jul 27 01:55:38.164: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 14.868973ms + Jul 27 01:55:38.164: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Jul 27 01:55:38.164: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 07/27/23 01:55:38.173 + Jul 27 01:55:38.187: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-8795" to be "running" + Jul 27 01:55:38.196: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.837548ms + Jul 27 01:55:40.206: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.01868224s + Jul 27 01:55:40.206: INFO: Pod "test-container-pod" satisfied condition "running" + Jul 27 01:55:40.214: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Jul 27 01:55:40.214: INFO: Breadth first check of 172.17.218.50 on host 10.245.128.17... 
+ Jul 27 01:55:40.222: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.4:9080/dial?request=hostname&protocol=http&host=172.17.218.50&port=8083&tries=1'] Namespace:pod-network-test-8795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 01:55:40.222: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:55:40.222: INFO: ExecWithOptions: Clientset creation + Jul 27 01:55:40.223: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8795/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.4%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.17.218.50%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jul 27 01:55:40.372: INFO: Waiting for responses: map[] + Jul 27 01:55:40.372: INFO: reached 172.17.218.50 after 0/1 tries + Jul 27 01:55:40.372: INFO: Breadth first check of 172.17.230.142 on host 10.245.128.18... + Jul 27 01:55:40.381: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.4:9080/dial?request=hostname&protocol=http&host=172.17.230.142&port=8083&tries=1'] Namespace:pod-network-test-8795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 01:55:40.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:55:40.382: INFO: ExecWithOptions: Clientset creation + Jul 27 01:55:40.382: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8795/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.4%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.17.230.142%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jul 27 01:55:40.495: INFO: Waiting for responses: map[] + Jul 27 01:55:40.495: INFO: reached 172.17.230.142 after 0/1 tries + Jul 27 01:55:40.495: INFO: Breadth first check of 172.17.225.16 on host 10.245.128.19... + Jul 27 01:55:40.503: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.4:9080/dial?request=hostname&protocol=http&host=172.17.225.16&port=8083&tries=1'] Namespace:pod-network-test-8795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 01:55:40.503: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 01:55:40.503: INFO: ExecWithOptions: Clientset creation + Jul 27 01:55:40.504: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8795/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.4%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.17.225.16%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jul 27 01:55:40.659: INFO: Waiting for responses: map[] + Jul 27 01:55:40.660: INFO: reached 172.17.225.16 after 0/1 tries + Jul 27 01:55:40.660: INFO: Going to retry 0 out of 3 pods.... 
+ [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 - Jun 12 21:13:06.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 01:55:40.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 - STEP: Destroying namespace "services-4220" for this suite. 06/12/23 21:13:06.408 + STEP: Destroying namespace "pod-network-test-8795" for this suite. 07/27/23 01:55:40.672 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSS ------------------------------ -[sig-api-machinery] ResourceQuota - should create a ResourceQuota and capture the life of a secret. [Conformance] - test/e2e/apimachinery/resource_quota.go:160 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 +[BeforeEach] [sig-api-machinery] Watchers set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:13:06.435 -Jun 12 21:13:06.435: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 21:13:06.438 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:13:06.476 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:13:06.485 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 01:55:40.7 +Jul 27 01:55:40.700: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename watch 07/27/23 01:55:40.701 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:40.752 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:40.765 +[BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:31 -[It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:160 -STEP: Discovering how many secrets are in namespace by default 06/12/23 21:13:06.52 -STEP: Counting existing ResourceQuota 06/12/23 21:13:12.579 -STEP: Creating a ResourceQuota 06/12/23 21:13:17.602 -STEP: Ensuring resource quota status is calculated 06/12/23 21:13:17.645 -STEP: Creating a Secret 06/12/23 21:13:19.661 -STEP: Ensuring resource quota status captures secret creation 06/12/23 21:13:19.719 -STEP: Deleting a secret 06/12/23 21:13:21.734 -STEP: Ensuring resource quota status released usage 06/12/23 21:13:21.751 -[AfterEach] [sig-api-machinery] ResourceQuota +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 +STEP: creating a watch on configmaps with a certain label 07/27/23 01:55:40.775 +STEP: creating a new configmap 07/27/23 01:55:40.78 +STEP: modifying the configmap once 07/27/23 01:55:40.798 +STEP: changing the label value of the configmap 07/27/23 01:55:40.839 +STEP: Expecting to observe a delete notification for the watched object 07/27/23 01:55:40.871 +Jul 27 01:55:40.871: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82610 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 01:55:40.872: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82614 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 01:55:40.872: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82617 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time 07/27/23 01:55:40.872 +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements 07/27/23 01:55:40.903 +STEP: changing the label value of the configmap back 07/27/23 01:55:50.903 +STEP: modifying the configmap a third time 07/27/23 01:55:50.931 +STEP: deleting the configmap 07/27/23 01:55:50.959 +STEP: Expecting to observe an add notification for the watched object when the label value was restored 07/27/23 01:55:50.981 +Jul 27 01:55:50.981: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82743 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 01:55:50.982: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82744 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 01:55:50.982: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82745 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers test/e2e/framework/node/init/init.go:32 -Jun 12 21:13:23.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 01:55:50.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-api-machinery] Watchers dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-api-machinery] Watchers tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-2786" for this suite. 06/12/23 21:13:23.779 +STEP: Destroying namespace "watch-6929" for this suite. 07/27/23 01:55:50.996 ------------------------------ -• [SLOW TEST] [17.358 seconds] -[sig-api-machinery] ResourceQuota +• [SLOW TEST] [10.318 seconds] +[sig-api-machinery] Watchers test/e2e/apimachinery/framework.go:23 - should create a ResourceQuota and capture the life of a secret. [Conformance] - test/e2e/apimachinery/resource_quota.go:160 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:13:06.435 - Jun 12 21:13:06.435: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 21:13:06.438 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:13:06.476 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:13:06.485 - [BeforeEach] [sig-api-machinery] ResourceQuota - test/e2e/framework/metrics/init/init.go:31 - [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:160 - STEP: Discovering how many secrets are in namespace by default 06/12/23 21:13:06.52 - STEP: Counting existing ResourceQuota 06/12/23 21:13:12.579 - STEP: Creating a ResourceQuota 06/12/23 21:13:17.602 - STEP: Ensuring resource quota status is calculated 06/12/23 21:13:17.645 - STEP: Creating a Secret 06/12/23 21:13:19.661 - STEP: Ensuring resource quota status captures secret creation 06/12/23 21:13:19.719 - STEP: Deleting a secret 06/12/23 21:13:21.734 - STEP: Ensuring resource quota status released usage 06/12/23 21:13:21.751 - [AfterEach] [sig-api-machinery] ResourceQuota - test/e2e/framework/node/init/init.go:32 - Jun 12 21:13:23.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota - tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-2786" for this suite. 06/12/23 21:13:23.779 - << End Captured GinkgoWriter Output ------------------------------- -SS ------------------------------- -[sig-storage] Projected configMap - should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:57 -[BeforeEach] [sig-storage] Projected configMap - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:13:23.797 -Jun 12 21:13:23.798: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:13:23.799 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:13:23.844 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:13:23.883 -[BeforeEach] [sig-storage] Projected configMap - test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:57 -STEP: Creating configMap with name projected-configmap-test-volume-420397a3-a11d-4ad8-bd1b-8a7b342719a3 06/12/23 21:13:23.917 -STEP: Creating a pod to test consume configMaps 06/12/23 21:13:23.955 -Jun 12 21:13:23.985: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a" in namespace "projected-7865" to be "Succeeded or Failed" -Jun 12 21:13:23.997: INFO: Pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.791818ms -Jun 12 21:13:26.017: INFO: Pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031967476s -Jun 12 21:13:28.028: INFO: Pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043852839s -Jun 12 21:13:30.008: INFO: Pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023379727s -STEP: Saw pod success 06/12/23 21:13:30.008 -Jun 12 21:13:30.009: INFO: Pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a" satisfied condition "Succeeded or Failed" -Jun 12 21:13:30.019: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a container agnhost-container: -STEP: delete the pod 06/12/23 21:13:30.044 -Jun 12 21:13:30.076: INFO: Waiting for pod pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a to disappear -Jun 12 21:13:30.085: INFO: Pod pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a no longer exists -[AfterEach] [sig-storage] Projected configMap - test/e2e/framework/node/init/init.go:32 -Jun 12 21:13:30.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected configMap - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected configMap - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected configMap - tear down framework | framework.go:193 -STEP: Destroying namespace "projected-7865" for this suite. 06/12/23 21:13:30.102 ------------------------------- -• [SLOW TEST] [6.328 seconds] -[sig-storage] Projected configMap -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:57 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected configMap + [BeforeEach] [sig-api-machinery] Watchers set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:13:23.797 - Jun 12 21:13:23.798: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:13:23.799 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:13:23.844 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:13:23.883 - [BeforeEach] [sig-storage] Projected configMap + STEP: Creating a kubernetes client 07/27/23 01:55:40.7 + Jul 27 01:55:40.700: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename watch 07/27/23 01:55:40.701 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:40.752 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:40.765 + [BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:57 - STEP: Creating configMap with name projected-configmap-test-volume-420397a3-a11d-4ad8-bd1b-8a7b342719a3 06/12/23 21:13:23.917 - STEP: Creating a pod to test consume configMaps 06/12/23 21:13:23.955 - Jun 12 21:13:23.985: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a" in namespace "projected-7865" to be "Succeeded or Failed" - Jun 12 21:13:23.997: INFO: Pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.791818ms - Jun 12 21:13:26.017: INFO: Pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031967476s - Jun 12 21:13:28.028: INFO: Pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043852839s - Jun 12 21:13:30.008: INFO: Pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023379727s - STEP: Saw pod success 06/12/23 21:13:30.008 - Jun 12 21:13:30.009: INFO: Pod "pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a" satisfied condition "Succeeded or Failed" - Jun 12 21:13:30.019: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a container agnhost-container: - STEP: delete the pod 06/12/23 21:13:30.044 - Jun 12 21:13:30.076: INFO: Waiting for pod pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a to disappear - Jun 12 21:13:30.085: INFO: Pod pod-projected-configmaps-1b6e9696-1a24-42e9-84f7-d7406f24297a no longer exists - [AfterEach] [sig-storage] Projected configMap + [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 + STEP: creating a watch on configmaps with a certain label 07/27/23 01:55:40.775 + STEP: creating a new configmap 07/27/23 01:55:40.78 + STEP: modifying the configmap once 07/27/23 01:55:40.798 + STEP: changing the label value of the configmap 07/27/23 01:55:40.839 + STEP: Expecting to observe a delete notification for the watched object 07/27/23 01:55:40.871 + Jul 27 01:55:40.871: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82610 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 01:55:40.872: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82614 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 01:55:40.872: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82617 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying the configmap a second time 07/27/23 01:55:40.872 + STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements 07/27/23 01:55:40.903 + STEP: changing the label value of the configmap back 07/27/23 01:55:50.903 + STEP: modifying the configmap a third time 07/27/23 01:55:50.931 + STEP: deleting the configmap 07/27/23 01:55:50.959 + STEP: Expecting to 
observe an add notification for the watched object when the label value was restored 07/27/23 01:55:50.981 + Jul 27 01:55:50.981: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82743 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 01:55:50.982: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82744 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 01:55:50.982: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6929 eb2de2e4-5a56-4719-9a63-c1b58e845246 82745 0 2023-07-27 01:55:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-07-27 01:55:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers test/e2e/framework/node/init/init.go:32 - Jun 12 21:13:30.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected configMap + Jul 27 01:55:50.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-api-machinery] Watchers dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-api-machinery] Watchers tear down framework | framework.go:193 - STEP: Destroying namespace "projected-7865" for this suite. 06/12/23 21:13:30.102 + STEP: Destroying namespace "watch-6929" for this suite. 
07/27/23 01:55:50.996 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] EmptyDir volumes - should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:137 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:13:30.128 -Jun 12 21:13:30.128: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 21:13:30.131 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:13:30.217 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:13:30.227 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 01:55:51.019 +Jul 27 01:55:51.019: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename custom-resource-definition 07/27/23 01:55:51.02 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:51.071 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:51.079 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:137 -STEP: Creating a pod to test emptydir 0666 on tmpfs 06/12/23 21:13:30.239 -Jun 12 21:13:30.270: INFO: Waiting up to 5m0s for pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a" in namespace "emptydir-9850" to be "Succeeded or Failed" -Jun 12 21:13:30.283: INFO: Pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.164708ms -Jun 12 21:13:32.294: INFO: Pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023681388s -Jun 12 21:13:34.294: INFO: Pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023740383s -Jun 12 21:13:36.299: INFO: Pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029048288s -STEP: Saw pod success 06/12/23 21:13:36.299 -Jun 12 21:13:36.300: INFO: Pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a" satisfied condition "Succeeded or Failed" -Jun 12 21:13:36.311: INFO: Trying to get logs from node 10.138.75.70 pod pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a container test-container: -STEP: delete the pod 06/12/23 21:13:36.351 -Jun 12 21:13:36.376: INFO: Waiting for pod pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a to disappear -Jun 12 21:13:36.384: INFO: Pod pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 +Jul 27 01:55:51.088: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:13:36.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 01:55:51.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-9850" for this suite. 06/12/23 21:13:36.397 +STEP: Destroying namespace "custom-resource-definition-8009" for this suite. 
07/27/23 01:55:51.718 ------------------------------ -• [SLOW TEST] [6.286 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:137 +• [0.725 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:13:30.128 - Jun 12 21:13:30.128: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 21:13:30.131 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:13:30.217 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:13:30.227 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 01:55:51.019 + Jul 27 01:55:51.019: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename custom-resource-definition 07/27/23 01:55:51.02 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:51.071 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:51.079 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:137 - STEP: Creating a pod to test emptydir 0666 on tmpfs 06/12/23 21:13:30.239 - Jun 12 21:13:30.270: INFO: Waiting up to 5m0s for pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a" in namespace "emptydir-9850" to be "Succeeded or Failed" - Jun 12 21:13:30.283: INFO: Pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.164708ms - Jun 12 21:13:32.294: INFO: Pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023681388s - Jun 12 21:13:34.294: INFO: Pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023740383s - Jun 12 21:13:36.299: INFO: Pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029048288s - STEP: Saw pod success 06/12/23 21:13:36.299 - Jun 12 21:13:36.300: INFO: Pod "pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a" satisfied condition "Succeeded or Failed" - Jun 12 21:13:36.311: INFO: Trying to get logs from node 10.138.75.70 pod pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a container test-container: - STEP: delete the pod 06/12/23 21:13:36.351 - Jun 12 21:13:36.376: INFO: Waiting for pod pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a to disappear - Jun 12 21:13:36.384: INFO: Pod pod-4a6f69ee-4b18-4ff3-8813-2c46b722b99a no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 + Jul 27 01:55:51.088: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:13:36.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 01:55:51.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-9850" for this suite. 06/12/23 21:13:36.397 + STEP: Destroying namespace "custom-resource-definition-8009" for this suite. 
07/27/23 01:55:51.718 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] Services - should serve multiport endpoints from pods [Conformance] - test/e2e/network/service.go:848 -[BeforeEach] [sig-network] Services +[sig-network] DNS + should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 +[BeforeEach] [sig-network] DNS set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:13:36.421 -Jun 12 21:13:36.421: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:13:36.424 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:13:36.476 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:13:36.492 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 01:55:51.748 +Jul 27 01:55:51.748: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename dns 07/27/23 01:55:51.749 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:51.813 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:51.823 +[BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should serve multiport endpoints from pods [Conformance] - test/e2e/network/service.go:848 -STEP: creating service multi-endpoint-test in namespace services-7261 06/12/23 21:13:36.565 -STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7261 to expose endpoints map[] 06/12/23 21:13:36.66 -Jun 12 21:13:36.673: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found -Jun 12 21:13:37.709: INFO: successfully validated that service multi-endpoint-test in namespace services-7261 exposes endpoints map[] -STEP: Creating pod pod1 in namespace services-7261 06/12/23 21:13:37.709 -Jun 12 21:13:37.734: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-7261" to be "running and ready" -Jun 12 21:13:37.747: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.576222ms -Jun 12 21:13:37.747: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:13:39.758: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023809336s -Jun 12 21:13:39.758: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:13:41.759: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 4.024586519s -Jun 12 21:13:41.759: INFO: The phase of Pod pod1 is Running (Ready = true) -Jun 12 21:13:41.759: INFO: Pod "pod1" satisfied condition "running and ready" -STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7261 to expose endpoints map[pod1:[100]] 06/12/23 21:13:41.77 -Jun 12 21:13:41.797: INFO: successfully validated that service multi-endpoint-test in namespace services-7261 exposes endpoints map[pod1:[100]] -STEP: Creating pod pod2 in namespace services-7261 06/12/23 21:13:41.798 -Jun 12 21:13:41.817: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-7261" to be "running and ready" -Jun 12 21:13:41.828: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.11466ms -Jun 12 21:13:41.828: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:13:43.849: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03207747s -Jun 12 21:13:43.849: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:13:45.838: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 4.021171072s -Jun 12 21:13:45.838: INFO: The phase of Pod pod2 is Running (Ready = true) -Jun 12 21:13:45.838: INFO: Pod "pod2" satisfied condition "running and ready" -STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7261 to expose endpoints map[pod1:[100] pod2:[101]] 06/12/23 21:13:45.847 -Jun 12 21:13:45.882: INFO: successfully validated that service multi-endpoint-test in namespace services-7261 exposes endpoints map[pod1:[100] pod2:[101]] -STEP: Checking if the Service forwards traffic to pods 06/12/23 21:13:45.883 -Jun 12 21:13:45.883: INFO: Creating new exec pod -Jun 12 21:13:45.903: INFO: Waiting up to 5m0s for pod "execpod2sfgf" in namespace "services-7261" to be "running" -Jun 12 21:13:45.912: INFO: Pod "execpod2sfgf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.865797ms -Jun 12 21:13:47.922: INFO: Pod "execpod2sfgf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019311429s -Jun 12 21:13:49.923: INFO: Pod "execpod2sfgf": Phase="Running", Reason="", readiness=true. Elapsed: 4.019975371s -Jun 12 21:13:49.923: INFO: Pod "execpod2sfgf" satisfied condition "running" -Jun 12 21:13:50.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-7261 exec execpod2sfgf -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 80' -Jun 12 21:13:51.417: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" -Jun 12 21:13:51.417: INFO: stdout: "" -Jun 12 21:13:51.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-7261 exec execpod2sfgf -- /bin/sh -x -c nc -v -z -w 2 172.21.46.27 80' -Jun 12 21:13:51.783: INFO: stderr: "+ nc -v -z -w 2 172.21.46.27 80\nConnection to 172.21.46.27 80 port [tcp/http] succeeded!\n" -Jun 12 21:13:51.783: INFO: stdout: "" -Jun 12 21:13:51.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-7261 exec execpod2sfgf -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 81' -Jun 12 21:13:52.259: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" -Jun 12 21:13:52.259: INFO: stdout: "" -Jun 12 21:13:52.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-7261 exec execpod2sfgf -- /bin/sh -x -c nc -v -z -w 2 172.21.46.27 81' -Jun 12 21:13:52.802: INFO: stderr: "+ nc -v -z -w 2 172.21.46.27 81\nConnection to 172.21.46.27 81 port [tcp/*] succeeded!\n" -Jun 12 21:13:52.802: INFO: stdout: "" -STEP: Deleting pod pod1 in namespace services-7261 06/12/23 21:13:52.802 -STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7261 to expose endpoints map[pod2:[101]] 06/12/23 21:13:52.828 -Jun 12 21:13:53.893: INFO: successfully validated that service multi-endpoint-test in namespace services-7261 exposes endpoints map[pod2:[101]] -STEP: Deleting pod pod2 in namespace services-7261 06/12/23 21:13:53.893 -STEP: waiting up to 3m0s for service multi-endpoint-test 
in namespace services-7261 to expose endpoints map[] 06/12/23 21:13:53.928 -Jun 12 21:13:53.966: INFO: successfully validated that service multi-endpoint-test in namespace services-7261 exposes endpoints map[] -[AfterEach] [sig-network] Services +[It] should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 +STEP: Creating a test headless service 07/27/23 01:55:51.834 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2181.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2181.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + 07/27/23 01:55:51.875 +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2181.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2181.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + 07/27/23 01:55:51.875 +STEP: creating a pod to probe DNS 07/27/23 01:55:51.875 +STEP: submitting the pod to kubernetes 07/27/23 01:55:51.875 +Jul 27 01:55:51.911: INFO: Waiting up to 15m0s for pod "dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f" in namespace "dns-2181" to be "running" +Jul 27 01:55:51.920: INFO: Pod "dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.99694ms +Jul 27 01:55:53.931: INFO: Pod "dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019807097s +Jul 27 01:55:55.931: INFO: Pod "dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f": Phase="Running", Reason="", readiness=true. Elapsed: 4.020065278s +Jul 27 01:55:55.931: INFO: Pod "dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f" satisfied condition "running" +STEP: retrieving the pod 07/27/23 01:55:55.931 +STEP: looking for the results for each expected name from probers 07/27/23 01:55:55.941 +Jul 27 01:55:56.017: INFO: DNS probes using dns-2181/dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f succeeded + +STEP: deleting the pod 07/27/23 01:55:56.017 +STEP: deleting the test headless service 07/27/23 01:55:56.044 +[AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 -Jun 12 21:13:54.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 01:55:56.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 -STEP: Destroying namespace "services-7261" for this suite. 06/12/23 21:13:54.052 +STEP: Destroying namespace "dns-2181" for this suite. 
07/27/23 01:55:56.159 ------------------------------ -• [SLOW TEST] [17.646 seconds] -[sig-network] Services +• [4.436 seconds] +[sig-network] DNS test/e2e/network/common/framework.go:23 - should serve multiport endpoints from pods [Conformance] - test/e2e/network/service.go:848 + should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-network] DNS set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:13:36.421 - Jun 12 21:13:36.421: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:13:36.424 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:13:36.476 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:13:36.492 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 01:55:51.748 + Jul 27 01:55:51.748: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename dns 07/27/23 01:55:51.749 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:51.813 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:51.823 + [BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should serve multiport endpoints from pods [Conformance] - test/e2e/network/service.go:848 - STEP: creating service multi-endpoint-test in namespace services-7261 06/12/23 21:13:36.565 - STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7261 to expose endpoints map[] 06/12/23 21:13:36.66 - Jun 12 21:13:36.673: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found - Jun 12 21:13:37.709: INFO: successfully validated that service multi-endpoint-test in namespace services-7261 exposes endpoints map[] - STEP: Creating pod pod1 in namespace services-7261 06/12/23 21:13:37.709 - Jun 12 21:13:37.734: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-7261" to be "running and ready" - Jun 12 21:13:37.747: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.576222ms - Jun 12 21:13:37.747: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:13:39.758: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023809336s - Jun 12 21:13:39.758: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:13:41.759: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 4.024586519s - Jun 12 21:13:41.759: INFO: The phase of Pod pod1 is Running (Ready = true) - Jun 12 21:13:41.759: INFO: Pod "pod1" satisfied condition "running and ready" - STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7261 to expose endpoints map[pod1:[100]] 06/12/23 21:13:41.77 - Jun 12 21:13:41.797: INFO: successfully validated that service multi-endpoint-test in namespace services-7261 exposes endpoints map[pod1:[100]] - STEP: Creating pod pod2 in namespace services-7261 06/12/23 21:13:41.798 - Jun 12 21:13:41.817: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-7261" to be "running and ready" - Jun 12 21:13:41.828: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.11466ms - Jun 12 21:13:41.828: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:13:43.849: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03207747s - Jun 12 21:13:43.849: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:13:45.838: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 4.021171072s - Jun 12 21:13:45.838: INFO: The phase of Pod pod2 is Running (Ready = true) - Jun 12 21:13:45.838: INFO: Pod "pod2" satisfied condition "running and ready" - STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7261 to expose endpoints map[pod1:[100] pod2:[101]] 06/12/23 21:13:45.847 - Jun 12 21:13:45.882: INFO: successfully validated that service multi-endpoint-test in namespace services-7261 exposes endpoints map[pod1:[100] pod2:[101]] - STEP: Checking if the Service forwards traffic to pods 06/12/23 21:13:45.883 - Jun 12 21:13:45.883: INFO: Creating new exec pod - Jun 12 21:13:45.903: INFO: Waiting up to 5m0s for pod "execpod2sfgf" in namespace "services-7261" to be "running" - Jun 12 21:13:45.912: INFO: Pod "execpod2sfgf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.865797ms - Jun 12 21:13:47.922: INFO: Pod "execpod2sfgf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019311429s - Jun 12 21:13:49.923: INFO: Pod "execpod2sfgf": Phase="Running", Reason="", readiness=true. Elapsed: 4.019975371s - Jun 12 21:13:49.923: INFO: Pod "execpod2sfgf" satisfied condition "running" - Jun 12 21:13:50.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-7261 exec execpod2sfgf -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 80' - Jun 12 21:13:51.417: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" - Jun 12 21:13:51.417: INFO: stdout: "" - Jun 12 21:13:51.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-7261 exec execpod2sfgf -- /bin/sh -x -c nc -v -z -w 2 172.21.46.27 80' - Jun 12 21:13:51.783: INFO: stderr: "+ nc -v -z -w 2 172.21.46.27 80\nConnection to 172.21.46.27 80 port [tcp/http] succeeded!\n" - Jun 12 21:13:51.783: INFO: stdout: "" - Jun 12 21:13:51.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-7261 exec execpod2sfgf -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 81' - Jun 12 21:13:52.259: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" - Jun 12 21:13:52.259: INFO: stdout: "" - Jun 12 21:13:52.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-7261 exec execpod2sfgf -- /bin/sh -x -c nc -v -z -w 2 172.21.46.27 81' - Jun 12 21:13:52.802: INFO: stderr: "+ nc -v -z -w 2 172.21.46.27 81\nConnection to 172.21.46.27 81 port [tcp/*] succeeded!\n" - Jun 12 21:13:52.802: INFO: stdout: "" - STEP: Deleting pod pod1 in namespace services-7261 06/12/23 21:13:52.802 - STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7261 to expose endpoints map[pod2:[101]] 06/12/23 21:13:52.828 - Jun 12 21:13:53.893: INFO: successfully validated that service multi-endpoint-test in namespace services-7261 exposes endpoints map[pod2:[101]] - STEP: Deleting pod pod2 in namespace services-7261 06/12/23 21:13:53.893 - STEP: waiting up to 3m0s 
for service multi-endpoint-test in namespace services-7261 to expose endpoints map[] 06/12/23 21:13:53.928 - Jun 12 21:13:53.966: INFO: successfully validated that service multi-endpoint-test in namespace services-7261 exposes endpoints map[] - [AfterEach] [sig-network] Services + [It] should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 + STEP: Creating a test headless service 07/27/23 01:55:51.834 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2181.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2181.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + 07/27/23 01:55:51.875 + STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2181.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2181.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + 07/27/23 01:55:51.875 + STEP: creating a pod to probe DNS 07/27/23 01:55:51.875 + STEP: submitting the pod to kubernetes 07/27/23 01:55:51.875 + Jul 27 01:55:51.911: INFO: Waiting up to 15m0s for pod "dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f" in namespace "dns-2181" to be "running" + Jul 27 01:55:51.920: INFO: Pod "dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.99694ms + Jul 27 01:55:53.931: INFO: Pod "dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019807097s + Jul 27 01:55:55.931: INFO: Pod "dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f": Phase="Running", Reason="", readiness=true. Elapsed: 4.020065278s + Jul 27 01:55:55.931: INFO: Pod "dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f" satisfied condition "running" + STEP: retrieving the pod 07/27/23 01:55:55.931 + STEP: looking for the results for each expected name from probers 07/27/23 01:55:55.941 + Jul 27 01:55:56.017: INFO: DNS probes using dns-2181/dns-test-d0c49953-15fa-4ad6-afed-9a9e76e1eb9f succeeded + + STEP: deleting the pod 07/27/23 01:55:56.017 + STEP: deleting the test headless service 07/27/23 01:55:56.044 + [AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 - Jun 12 21:13:54.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 01:55:56.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 - STEP: Destroying namespace "services-7261" for this suite. 06/12/23 21:13:54.052 + STEP: Destroying namespace "dns-2181" for this suite. 
07/27/23 01:55:56.159 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSS +SSS ------------------------------ [sig-node] Pods - should delete a collection of pods [Conformance] - test/e2e/common/node/pods.go:845 + should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 [BeforeEach] [sig-node] Pods set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:13:54.089 -Jun 12 21:13:54.090: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 21:13:54.094 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:13:54.138 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:13:54.156 +STEP: Creating a kubernetes client 07/27/23 01:55:56.184 +Jul 27 01:55:56.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 01:55:56.185 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:56.249 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:56.258 [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:194 -[It] should delete a collection of pods [Conformance] - test/e2e/common/node/pods.go:845 -STEP: Create set of pods 06/12/23 21:13:54.169 -Jun 12 21:13:54.206: INFO: created test-pod-1 -Jun 12 21:13:54.249: INFO: created test-pod-2 -Jun 12 21:13:54.279: INFO: created test-pod-3 -STEP: waiting for all 3 pods to be running 06/12/23 21:13:54.279 -Jun 12 21:13:54.281: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-4789' to be running and ready -Jun 12 21:13:54.324: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed -Jun 12 21:13:54.324: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed -Jun 12 21:13:54.324: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed -Jun 12 21:13:54.324: INFO: 0 / 3 pods in namespace 'pods-4789' are running and ready (0 seconds elapsed) -Jun 12 21:13:54.324: INFO: expected 0 pod replicas in namespace 'pods-4789', 0 are Running and Ready. 
-Jun 12 21:13:54.324: INFO: POD NODE PHASE GRACE CONDITIONS -Jun 12 21:13:54.324: INFO: test-pod-1 10.138.75.112 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] -Jun 12 21:13:54.324: INFO: test-pod-2 10.138.75.116 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] -Jun 12 21:13:54.324: INFO: test-pod-3 10.138.75.70 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] -Jun 12 21:13:54.325: INFO: -Jun 12 21:13:56.352: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed -Jun 12 21:13:56.352: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed -Jun 12 21:13:56.352: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed -Jun 12 21:13:56.352: INFO: 0 / 3 pods in namespace 'pods-4789' are running and ready (2 seconds elapsed) -Jun 12 21:13:56.352: INFO: expected 0 pod replicas in namespace 'pods-4789', 0 are Running and Ready. 
-Jun 12 21:13:56.352: INFO: POD NODE PHASE GRACE CONDITIONS -Jun 12 21:13:56.352: INFO: test-pod-1 10.138.75.112 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] -Jun 12 21:13:56.353: INFO: test-pod-2 10.138.75.116 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] -Jun 12 21:13:56.353: INFO: test-pod-3 10.138.75.70 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] -Jun 12 21:13:56.353: INFO: -Jun 12 21:13:58.629: INFO: 3 / 3 pods in namespace 'pods-4789' are running and ready (4 seconds elapsed) -Jun 12 21:13:58.630: INFO: expected 0 pod replicas in namespace 'pods-4789', 0 are Running and Ready. -STEP: waiting for all pods to be deleted 06/12/23 21:13:58.887 -Jun 12 21:13:59.055: INFO: Pod quantity 3 is different from expected quantity 0 -Jun 12 21:14:00.092: INFO: Pod quantity 3 is different from expected quantity 0 -Jun 12 21:14:01.238: INFO: Pod quantity 3 is different from expected quantity 0 -Jun 12 21:14:02.110: INFO: Pod quantity 2 is different from expected quantity 0 -Jun 12 21:14:03.068: INFO: Pod quantity 1 is different from expected quantity 0 +[It] should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 +STEP: creating the pod 07/27/23 01:55:56.267 +STEP: submitting the pod to kubernetes 07/27/23 01:55:56.267 +Jul 27 01:55:56.303: INFO: Waiting up to 5m0s for pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac" in namespace "pods-1704" to be "running and ready" +Jul 27 01:55:56.311: INFO: Pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac": Phase="Pending", Reason="", readiness=false. Elapsed: 7.664317ms +Jul 27 01:55:56.311: INFO: The phase of Pod pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:55:58.321: INFO: Pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.017411364s +Jul 27 01:55:58.321: INFO: The phase of Pod pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac is Running (Ready = true) +Jul 27 01:55:58.321: INFO: Pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac" satisfied condition "running and ready" +STEP: verifying the pod is in kubernetes 07/27/23 01:55:58.329 +STEP: updating the pod 07/27/23 01:55:58.337 +Jul 27 01:55:58.871: INFO: Successfully updated pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac" +Jul 27 01:55:58.871: INFO: Waiting up to 5m0s for pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac" in namespace "pods-1704" to be "running" +Jul 27 01:55:58.879: INFO: Pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac": Phase="Running", Reason="", readiness=true. Elapsed: 8.166866ms +Jul 27 01:55:58.879: INFO: Pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac" satisfied condition "running" +STEP: verifying the updated pod is in kubernetes 07/27/23 01:55:58.879 +Jul 27 01:55:58.888: INFO: Pod update OK [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 -Jun 12 21:14:04.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 01:55:58.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 -STEP: Destroying namespace "pods-4789" for this suite. 06/12/23 21:14:04.086 +STEP: Destroying namespace "pods-1704" for this suite. 07/27/23 01:55:58.907 ------------------------------ -• [SLOW TEST] [10.011 seconds] +• [2.759 seconds] [sig-node] Pods test/e2e/common/node/framework.go:23 - should delete a collection of pods [Conformance] - test/e2e/common/node/pods.go:845 + should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-node] Pods set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:13:54.089 - Jun 12 21:13:54.090: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 21:13:54.094 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:13:54.138 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:13:54.156 + STEP: Creating a kubernetes client 07/27/23 01:55:56.184 + Jul 27 01:55:56.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 01:55:56.185 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:56.249 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:56.258 [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:194 - [It] should delete a collection of pods [Conformance] - test/e2e/common/node/pods.go:845 - STEP: Create set of pods 06/12/23 21:13:54.169 - Jun 12 21:13:54.206: INFO: created test-pod-1 - Jun 12 21:13:54.249: INFO: created test-pod-2 - Jun 12 21:13:54.279: INFO: created test-pod-3 - STEP: waiting for all 3 pods to be running 06/12/23 21:13:54.279 - Jun 12 21:13:54.281: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-4789' to be running and ready - Jun 12 21:13:54.324: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running 
(with Ready = true) or Failed - Jun 12 21:13:54.324: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed - Jun 12 21:13:54.324: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed - Jun 12 21:13:54.324: INFO: 0 / 3 pods in namespace 'pods-4789' are running and ready (0 seconds elapsed) - Jun 12 21:13:54.324: INFO: expected 0 pod replicas in namespace 'pods-4789', 0 are Running and Ready. - Jun 12 21:13:54.324: INFO: POD NODE PHASE GRACE CONDITIONS - Jun 12 21:13:54.324: INFO: test-pod-1 10.138.75.112 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] - Jun 12 21:13:54.324: INFO: test-pod-2 10.138.75.116 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] - Jun 12 21:13:54.324: INFO: test-pod-3 10.138.75.70 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] - Jun 12 21:13:54.325: INFO: - Jun 12 21:13:56.352: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed - Jun 12 21:13:56.352: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed - Jun 12 21:13:56.352: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed - Jun 12 21:13:56.352: INFO: 0 / 3 pods in namespace 'pods-4789' are running and ready (2 seconds elapsed) - Jun 12 21:13:56.352: INFO: expected 0 pod replicas in namespace 'pods-4789', 0 are Running and Ready. 
- Jun 12 21:13:56.352: INFO: POD NODE PHASE GRACE CONDITIONS - Jun 12 21:13:56.352: INFO: test-pod-1 10.138.75.112 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] - Jun 12 21:13:56.353: INFO: test-pod-2 10.138.75.116 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] - Jun 12 21:13:56.353: INFO: test-pod-3 10.138.75.70 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 21:13:54 +0000 UTC }] - Jun 12 21:13:56.353: INFO: - Jun 12 21:13:58.629: INFO: 3 / 3 pods in namespace 'pods-4789' are running and ready (4 seconds elapsed) - Jun 12 21:13:58.630: INFO: expected 0 pod replicas in namespace 'pods-4789', 0 are Running and Ready. - STEP: waiting for all pods to be deleted 06/12/23 21:13:58.887 - Jun 12 21:13:59.055: INFO: Pod quantity 3 is different from expected quantity 0 - Jun 12 21:14:00.092: INFO: Pod quantity 3 is different from expected quantity 0 - Jun 12 21:14:01.238: INFO: Pod quantity 3 is different from expected quantity 0 - Jun 12 21:14:02.110: INFO: Pod quantity 2 is different from expected quantity 0 - Jun 12 21:14:03.068: INFO: Pod quantity 1 is different from expected quantity 0 + [It] should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 + STEP: creating the pod 07/27/23 01:55:56.267 + STEP: submitting the pod to kubernetes 07/27/23 01:55:56.267 + Jul 27 01:55:56.303: INFO: Waiting up to 5m0s for pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac" in namespace "pods-1704" to be "running and ready" + Jul 27 01:55:56.311: INFO: Pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac": Phase="Pending", Reason="", readiness=false. Elapsed: 7.664317ms + Jul 27 01:55:56.311: INFO: The phase of Pod pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:55:58.321: INFO: Pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.017411364s + Jul 27 01:55:58.321: INFO: The phase of Pod pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac is Running (Ready = true) + Jul 27 01:55:58.321: INFO: Pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac" satisfied condition "running and ready" + STEP: verifying the pod is in kubernetes 07/27/23 01:55:58.329 + STEP: updating the pod 07/27/23 01:55:58.337 + Jul 27 01:55:58.871: INFO: Successfully updated pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac" + Jul 27 01:55:58.871: INFO: Waiting up to 5m0s for pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac" in namespace "pods-1704" to be "running" + Jul 27 01:55:58.879: INFO: Pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac": Phase="Running", Reason="", readiness=true. Elapsed: 8.166866ms + Jul 27 01:55:58.879: INFO: Pod "pod-update-a2c9c56e-d72f-4a98-9112-7241740a92ac" satisfied condition "running" + STEP: verifying the updated pod is in kubernetes 07/27/23 01:55:58.879 + Jul 27 01:55:58.888: INFO: Pod update OK [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 - Jun 12 21:14:04.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 01:55:58.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 - STEP: Destroying namespace "pods-4789" for this suite. 06/12/23 21:14:04.086 + STEP: Destroying namespace "pods-1704" for this suite. 07/27/23 01:55:58.907 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSS +SSSSS ------------------------------ -[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] - Should recreate evicted statefulset [Conformance] - test/e2e/apps/statefulset.go:739 -[BeforeEach] [sig-apps] StatefulSet +[sig-architecture] Conformance Tests + should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 +[BeforeEach] [sig-architecture] Conformance Tests set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:14:04.104 -Jun 12 21:14:04.104: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename statefulset 06/12/23 21:14:04.106 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:04.15 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:04.166 -[BeforeEach] [sig-apps] StatefulSet +STEP: Creating a kubernetes client 07/27/23 01:55:58.944 +Jul 27 01:55:58.954: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename conformance-tests 07/27/23 01:55:58.956 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:58.997 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:59.006 +[BeforeEach] [sig-architecture] Conformance Tests test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 -[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 -STEP: Creating service test in namespace statefulset-1484 06/12/23 21:14:04.174 -[It] Should recreate evicted statefulset [Conformance] - test/e2e/apps/statefulset.go:739 -STEP: Looking for a node to schedule stateful set and pod 06/12/23 21:14:04.296 -STEP: Creating pod with 
conflicting port in namespace statefulset-1484 06/12/23 21:14:04.326 -STEP: Waiting until pod test-pod will start running in namespace statefulset-1484 06/12/23 21:14:04.352 -Jun 12 21:14:04.353: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-1484" to be "running" -Jun 12 21:14:04.364: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.599496ms -Jun 12 21:14:06.376: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023107912s -Jun 12 21:14:08.380: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026970953s -Jun 12 21:14:10.374: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=false. Elapsed: 6.021294632s -Jun 12 21:14:10.374: INFO: Pod "test-pod" satisfied condition "running" -STEP: Creating statefulset with conflicting port in namespace statefulset-1484 06/12/23 21:14:10.375 -STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1484 06/12/23 21:14:10.393 -Jun 12 21:14:10.427: INFO: Observed stateful pod in namespace: statefulset-1484, name: ss-0, uid: 2e48f9e5-2e2e-4fa1-889c-498df43d1115, status phase: Pending. Waiting for statefulset controller to delete. -Jun 12 21:14:10.550: INFO: Observed stateful pod in namespace: statefulset-1484, name: ss-0, uid: 2e48f9e5-2e2e-4fa1-889c-498df43d1115, status phase: Failed. Waiting for statefulset controller to delete. -Jun 12 21:14:10.570: INFO: Observed stateful pod in namespace: statefulset-1484, name: ss-0, uid: 2e48f9e5-2e2e-4fa1-889c-498df43d1115, status phase: Failed. Waiting for statefulset controller to delete. -Jun 12 21:14:10.578: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1484 -STEP: Removing pod with conflicting port in namespace statefulset-1484 06/12/23 21:14:10.578 -STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1484 and will be in running state 06/12/23 21:14:10.606 -[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 -Jun 12 21:14:18.669: INFO: Deleting all statefulset in ns statefulset-1484 -Jun 12 21:14:18.681: INFO: Scaling statefulset ss to 0 -Jun 12 21:14:28.730: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 21:14:28.741: INFO: Deleting statefulset ss -[AfterEach] [sig-apps] StatefulSet +[It] should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 +STEP: Getting node addresses 07/27/23 01:55:59.016 +Jul 27 01:55:59.016: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +[AfterEach] [sig-architecture] Conformance Tests test/e2e/framework/node/init/init.go:32 -Jun 12 21:14:28.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] StatefulSet +Jul 27 01:55:59.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-architecture] Conformance Tests test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-architecture] Conformance Tests dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-architecture] Conformance Tests tear down framework | framework.go:193 -STEP: Destroying namespace "statefulset-1484" for this suite. 06/12/23 21:14:28.798 +STEP: Destroying namespace "conformance-tests-1394" for this suite. 
07/27/23 01:55:59.054 ------------------------------ -• [SLOW TEST] [24.709 seconds] -[sig-apps] StatefulSet -test/e2e/apps/framework.go:23 - Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:103 - Should recreate evicted statefulset [Conformance] - test/e2e/apps/statefulset.go:739 +• [0.136 seconds] +[sig-architecture] Conformance Tests +test/e2e/architecture/framework.go:23 + should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] StatefulSet + [BeforeEach] [sig-architecture] Conformance Tests set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:14:04.104 - Jun 12 21:14:04.104: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename statefulset 06/12/23 21:14:04.106 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:04.15 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:04.166 - [BeforeEach] [sig-apps] StatefulSet + STEP: Creating a kubernetes client 07/27/23 01:55:58.944 + Jul 27 01:55:58.954: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename conformance-tests 07/27/23 01:55:58.956 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:58.997 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:59.006 + [BeforeEach] [sig-architecture] Conformance Tests test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 - [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 - STEP: Creating service test in namespace statefulset-1484 06/12/23 21:14:04.174 - [It] Should recreate evicted statefulset [Conformance] - test/e2e/apps/statefulset.go:739 - STEP: Looking for a node to schedule stateful set and pod 06/12/23 21:14:04.296 - STEP: Creating pod with conflicting port in namespace statefulset-1484 06/12/23 21:14:04.326 - STEP: Waiting until pod test-pod will start running in namespace statefulset-1484 06/12/23 21:14:04.352 - Jun 12 21:14:04.353: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-1484" to be "running" - Jun 12 21:14:04.364: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.599496ms - Jun 12 21:14:06.376: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023107912s - Jun 12 21:14:08.380: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026970953s - Jun 12 21:14:10.374: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=false. Elapsed: 6.021294632s - Jun 12 21:14:10.374: INFO: Pod "test-pod" satisfied condition "running" - STEP: Creating statefulset with conflicting port in namespace statefulset-1484 06/12/23 21:14:10.375 - STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1484 06/12/23 21:14:10.393 - Jun 12 21:14:10.427: INFO: Observed stateful pod in namespace: statefulset-1484, name: ss-0, uid: 2e48f9e5-2e2e-4fa1-889c-498df43d1115, status phase: Pending. Waiting for statefulset controller to delete. - Jun 12 21:14:10.550: INFO: Observed stateful pod in namespace: statefulset-1484, name: ss-0, uid: 2e48f9e5-2e2e-4fa1-889c-498df43d1115, status phase: Failed. Waiting for statefulset controller to delete. 
- Jun 12 21:14:10.570: INFO: Observed stateful pod in namespace: statefulset-1484, name: ss-0, uid: 2e48f9e5-2e2e-4fa1-889c-498df43d1115, status phase: Failed. Waiting for statefulset controller to delete. - Jun 12 21:14:10.578: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1484 - STEP: Removing pod with conflicting port in namespace statefulset-1484 06/12/23 21:14:10.578 - STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1484 and will be in running state 06/12/23 21:14:10.606 - [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 - Jun 12 21:14:18.669: INFO: Deleting all statefulset in ns statefulset-1484 - Jun 12 21:14:18.681: INFO: Scaling statefulset ss to 0 - Jun 12 21:14:28.730: INFO: Waiting for statefulset status.replicas updated to 0 - Jun 12 21:14:28.741: INFO: Deleting statefulset ss - [AfterEach] [sig-apps] StatefulSet + [It] should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 + STEP: Getting node addresses 07/27/23 01:55:59.016 + Jul 27 01:55:59.016: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + [AfterEach] [sig-architecture] Conformance Tests test/e2e/framework/node/init/init.go:32 - Jun 12 21:14:28.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] StatefulSet + Jul 27 01:55:59.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-architecture] Conformance Tests test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-architecture] Conformance Tests dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-architecture] Conformance Tests tear down framework | framework.go:193 - STEP: Destroying namespace "statefulset-1484" for this suite. 06/12/23 21:14:28.798 + STEP: Destroying namespace "conformance-tests-1394" for this suite. 
07/27/23 01:55:59.054 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSS ------------------------------ -[sig-api-machinery] Watchers - should be able to restart watching from the last resource version observed by the previous watch [Conformance] - test/e2e/apimachinery/watch.go:191 -[BeforeEach] [sig-api-machinery] Watchers +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 +[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:14:28.821 -Jun 12 21:14:28.821: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename watch 06/12/23 21:14:28.823 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:28.868 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:28.88 -[BeforeEach] [sig-api-machinery] Watchers +STEP: Creating a kubernetes client 07/27/23 01:55:59.081 +Jul 27 01:55:59.081: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename var-expansion 07/27/23 01:55:59.082 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:59.126 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:59.137 +[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 -[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] - test/e2e/apimachinery/watch.go:191 -STEP: creating a watch on configmaps 06/12/23 21:14:28.897 -STEP: creating a new configmap 06/12/23 21:14:28.905 -STEP: modifying the configmap once 06/12/23 21:14:28.919 -STEP: closing the watch once it receives two notifications 06/12/23 21:14:28.953 -Jun 12 21:14:28.953: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1664 81cc81c4-75e4-4682-91e9-fc21f7b77ab6 95066 0 2023-06-12 21:14:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-06-12 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:14:28.953: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1664 81cc81c4-75e4-4682-91e9-fc21f7b77ab6 95069 0 2023-06-12 21:14:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-06-12 21:14:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} -STEP: modifying the configmap a second time, while the watch is closed 06/12/23 21:14:28.953 -STEP: creating a new watch on configmaps from the last resource version observed by the first watch 06/12/23 21:14:28.978 -STEP: deleting the configmap 06/12/23 21:14:28.983 -STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed 06/12/23 21:14:28.998 -Jun 12 21:14:28.999: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1664 81cc81c4-75e4-4682-91e9-fc21f7b77ab6 95072 0 2023-06-12 21:14:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-06-12 21:14:28 
+0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:14:28.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1664 81cc81c4-75e4-4682-91e9-fc21f7b77ab6 95073 0 2023-06-12 21:14:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-06-12 21:14:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} -[AfterEach] [sig-api-machinery] Watchers +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 +Jul 27 01:55:59.188: INFO: Waiting up to 2m0s for pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f" in namespace "var-expansion-9983" to be "container 0 failed with reason CreateContainerConfigError" +Jul 27 01:55:59.202: INFO: Pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.911064ms +Jul 27 01:56:01.211: INFO: Pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023659217s +Jul 27 01:56:01.211: INFO: Pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f" satisfied condition "container 0 failed with reason CreateContainerConfigError" +Jul 27 01:56:01.211: INFO: Deleting pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f" in namespace "var-expansion-9983" +Jul 27 01:56:01.226: INFO: Wait up to 5m0s for pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f" to be fully deleted +[AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 -Jun 12 21:14:28.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Watchers +Jul 27 01:56:05.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Watchers +[DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Watchers +[DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 -STEP: Destroying namespace "watch-1664" for this suite. 06/12/23 21:14:29.033 +STEP: Destroying namespace "var-expansion-9983" for this suite. 
07/27/23 01:56:05.258 ------------------------------ -• [0.228 seconds] -[sig-api-machinery] Watchers -test/e2e/apimachinery/framework.go:23 - should be able to restart watching from the last resource version observed by the previous watch [Conformance] - test/e2e/apimachinery/watch.go:191 +• [SLOW TEST] [6.201 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Watchers + [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:14:28.821 - Jun 12 21:14:28.821: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename watch 06/12/23 21:14:28.823 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:28.868 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:28.88 - [BeforeEach] [sig-api-machinery] Watchers + STEP: Creating a kubernetes client 07/27/23 01:55:59.081 + Jul 27 01:55:59.081: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename var-expansion 07/27/23 01:55:59.082 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:55:59.126 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:55:59.137 + [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 - [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] - test/e2e/apimachinery/watch.go:191 - STEP: creating a watch on configmaps 06/12/23 21:14:28.897 - STEP: creating a new configmap 06/12/23 21:14:28.905 - STEP: modifying the configmap once 06/12/23 21:14:28.919 - STEP: closing the watch once it receives two notifications 06/12/23 21:14:28.953 - Jun 12 21:14:28.953: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1664 81cc81c4-75e4-4682-91e9-fc21f7b77ab6 95066 0 2023-06-12 21:14:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-06-12 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:14:28.953: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1664 81cc81c4-75e4-4682-91e9-fc21f7b77ab6 95069 0 2023-06-12 21:14:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-06-12 21:14:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} - STEP: modifying the configmap a second time, while the watch is closed 06/12/23 21:14:28.953 - STEP: creating a new watch on configmaps from the last resource version observed by the first watch 06/12/23 21:14:28.978 - STEP: deleting the configmap 06/12/23 21:14:28.983 - STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed 06/12/23 21:14:28.998 - Jun 12 21:14:28.999: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1664 81cc81c4-75e4-4682-91e9-fc21f7b77ab6 95072 0 2023-06-12 21:14:28 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-06-12 21:14:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:14:28.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1664 81cc81c4-75e4-4682-91e9-fc21f7b77ab6 95073 0 2023-06-12 21:14:28 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-06-12 21:14:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} - [AfterEach] [sig-api-machinery] Watchers + [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 + Jul 27 01:55:59.188: INFO: Waiting up to 2m0s for pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f" in namespace "var-expansion-9983" to be "container 0 failed with reason CreateContainerConfigError" + Jul 27 01:55:59.202: INFO: Pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.911064ms + Jul 27 01:56:01.211: INFO: Pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023659217s + Jul 27 01:56:01.211: INFO: Pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f" satisfied condition "container 0 failed with reason CreateContainerConfigError" + Jul 27 01:56:01.211: INFO: Deleting pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f" in namespace "var-expansion-9983" + Jul 27 01:56:01.226: INFO: Wait up to 5m0s for pod "var-expansion-efb3bdc1-fd11-4858-918f-b64aba37765f" to be fully deleted + [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 - Jun 12 21:14:28.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Watchers + Jul 27 01:56:05.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Watchers + [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Watchers + [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 - STEP: Destroying namespace "watch-1664" for this suite. 06/12/23 21:14:29.033 + STEP: Destroying namespace "var-expansion-9983" for this suite. 
07/27/23 01:56:05.258 << End Captured GinkgoWriter Output ------------------------------ -[sig-node] KubeletManagedEtcHosts - should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/kubelet_etc_hosts.go:63 -[BeforeEach] [sig-node] KubeletManagedEtcHosts +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 +[BeforeEach] [sig-instrumentation] Events set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:14:29.049 -Jun 12 21:14:29.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts 06/12/23 21:14:29.055 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:29.1 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:29.115 -[BeforeEach] [sig-node] KubeletManagedEtcHosts +STEP: Creating a kubernetes client 07/27/23 01:56:05.284 +Jul 27 01:56:05.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename events 07/27/23 01:56:05.285 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:05.327 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:05.339 +[BeforeEach] [sig-instrumentation] Events test/e2e/framework/metrics/init/init.go:31 -[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/kubelet_etc_hosts.go:63 -STEP: Setting up the test 06/12/23 21:14:29.128 -STEP: Creating hostNetwork=false pod 06/12/23 21:14:29.128 -Jun 12 21:14:29.158: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "e2e-kubelet-etc-hosts-9806" to be "running and ready" -Jun 12 21:14:29.192: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 33.825531ms -Jun 12 21:14:29.192: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:14:31.205: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046895241s -Jun 12 21:14:31.205: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:14:33.213: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054964648s -Jun 12 21:14:33.213: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:14:35.213: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.05522091s -Jun 12 21:14:35.214: INFO: The phase of Pod test-pod is Running (Ready = true) -Jun 12 21:14:35.214: INFO: Pod "test-pod" satisfied condition "running and ready" -STEP: Creating hostNetwork=true pod 06/12/23 21:14:35.225 -Jun 12 21:14:35.247: INFO: Waiting up to 5m0s for pod "test-host-network-pod" in namespace "e2e-kubelet-etc-hosts-9806" to be "running and ready" -Jun 12 21:14:35.262: INFO: Pod "test-host-network-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 15.233226ms -Jun 12 21:14:35.262: INFO: The phase of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:14:37.274: INFO: Pod "test-host-network-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.026745715s -Jun 12 21:14:37.274: INFO: The phase of Pod test-host-network-pod is Running (Ready = true) -Jun 12 21:14:37.274: INFO: Pod "test-host-network-pod" satisfied condition "running and ready" -STEP: Running the test 06/12/23 21:14:37.282 -STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false 06/12/23 21:14:37.282 -Jun 12 21:14:37.282: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:14:37.282: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:14:37.283: INFO: ExecWithOptions: Clientset creation -Jun 12 21:14:37.283: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) -Jun 12 21:14:37.453: INFO: Exec stderr: "" -Jun 12 21:14:37.453: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:14:37.453: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:14:37.454: INFO: ExecWithOptions: Clientset creation -Jun 12 21:14:37.454: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) -Jun 12 21:14:37.761: INFO: Exec stderr: "" -Jun 12 21:14:37.761: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:14:37.761: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:14:37.765: INFO: ExecWithOptions: Clientset creation -Jun 12 21:14:37.765: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) -Jun 12 21:14:38.036: INFO: Exec stderr: "" -Jun 12 21:14:38.037: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:14:38.037: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:14:38.039: INFO: ExecWithOptions: Clientset creation -Jun 12 21:14:38.039: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) -Jun 12 21:14:38.270: INFO: Exec stderr: "" -STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 06/12/23 21:14:38.27 -Jun 12 21:14:38.270: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:14:38.270: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:14:38.271: INFO: ExecWithOptions: Clientset creation -Jun 12 21:14:38.271: INFO: 
ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) -Jun 12 21:14:38.540: INFO: Exec stderr: "" -Jun 12 21:14:38.540: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:14:38.540: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:14:38.542: INFO: ExecWithOptions: Clientset creation -Jun 12 21:14:38.542: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) -Jun 12 21:14:38.795: INFO: Exec stderr: "" -STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true 06/12/23 21:14:38.796 -Jun 12 21:14:38.796: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:14:38.796: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:14:38.797: INFO: ExecWithOptions: Clientset creation -Jun 12 21:14:38.797: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) -Jun 12 21:14:39.055: INFO: Exec stderr: "" -Jun 12 21:14:39.055: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:14:39.055: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:14:39.057: INFO: ExecWithOptions: Clientset creation -Jun 12 21:14:39.057: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) -Jun 12 21:14:39.297: INFO: Exec stderr: "" -Jun 12 21:14:39.297: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:14:39.297: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:14:39.299: INFO: ExecWithOptions: Clientset creation -Jun 12 21:14:39.299: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) -Jun 12 21:14:39.492: INFO: Exec stderr: "" -Jun 12 21:14:39.492: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:14:39.492: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:14:39.493: INFO: ExecWithOptions: Clientset creation -Jun 12 21:14:39.494: INFO: ExecWithOptions: 
execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) -Jun 12 21:14:39.775: INFO: Exec stderr: "" -[AfterEach] [sig-node] KubeletManagedEtcHosts +[It] should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 +STEP: creating a test event 07/27/23 01:56:05.347 +STEP: listing all events in all namespaces 07/27/23 01:56:05.36 +STEP: patching the test event 07/27/23 01:56:05.478 +STEP: fetching the test event 07/27/23 01:56:05.495 +STEP: updating the test event 07/27/23 01:56:05.502 +STEP: getting the test event 07/27/23 01:56:05.524 +STEP: deleting the test event 07/27/23 01:56:05.532 +STEP: listing all events in all namespaces 07/27/23 01:56:05.549 +[AfterEach] [sig-instrumentation] Events test/e2e/framework/node/init/init.go:32 -Jun 12 21:14:39.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts +Jul 27 01:56:05.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts +[DeferCleanup (Each)] [sig-instrumentation] Events dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts +[DeferCleanup (Each)] [sig-instrumentation] Events tear down framework | framework.go:193 -STEP: Destroying namespace "e2e-kubelet-etc-hosts-9806" for this suite. 06/12/23 21:14:39.789 +STEP: Destroying namespace "events-3862" for this suite. 07/27/23 01:56:05.659 ------------------------------ -• [SLOW TEST] [10.755 seconds] -[sig-node] KubeletManagedEtcHosts -test/e2e/common/node/framework.go:23 - should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/kubelet_etc_hosts.go:63 +• [0.397 seconds] +[sig-instrumentation] Events +test/e2e/instrumentation/common/framework.go:23 + should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] KubeletManagedEtcHosts + [BeforeEach] [sig-instrumentation] Events set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:14:29.049 - Jun 12 21:14:29.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts 06/12/23 21:14:29.055 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:29.1 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:29.115 - [BeforeEach] [sig-node] KubeletManagedEtcHosts + STEP: Creating a kubernetes client 07/27/23 01:56:05.284 + Jul 27 01:56:05.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename events 07/27/23 01:56:05.285 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:05.327 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:05.339 + [BeforeEach] [sig-instrumentation] Events test/e2e/framework/metrics/init/init.go:31 - [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/kubelet_etc_hosts.go:63 - STEP: Setting up the test 06/12/23 21:14:29.128 - STEP: Creating hostNetwork=false pod 06/12/23 
21:14:29.128 - Jun 12 21:14:29.158: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "e2e-kubelet-etc-hosts-9806" to be "running and ready" - Jun 12 21:14:29.192: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 33.825531ms - Jun 12 21:14:29.192: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:14:31.205: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046895241s - Jun 12 21:14:31.205: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:14:33.213: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054964648s - Jun 12 21:14:33.213: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:14:35.213: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.05522091s - Jun 12 21:14:35.214: INFO: The phase of Pod test-pod is Running (Ready = true) - Jun 12 21:14:35.214: INFO: Pod "test-pod" satisfied condition "running and ready" - STEP: Creating hostNetwork=true pod 06/12/23 21:14:35.225 - Jun 12 21:14:35.247: INFO: Waiting up to 5m0s for pod "test-host-network-pod" in namespace "e2e-kubelet-etc-hosts-9806" to be "running and ready" - Jun 12 21:14:35.262: INFO: Pod "test-host-network-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 15.233226ms - Jun 12 21:14:35.262: INFO: The phase of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:14:37.274: INFO: Pod "test-host-network-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.026745715s - Jun 12 21:14:37.274: INFO: The phase of Pod test-host-network-pod is Running (Ready = true) - Jun 12 21:14:37.274: INFO: Pod "test-host-network-pod" satisfied condition "running and ready" - STEP: Running the test 06/12/23 21:14:37.282 - STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false 06/12/23 21:14:37.282 - Jun 12 21:14:37.282: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:14:37.282: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:14:37.283: INFO: ExecWithOptions: Clientset creation - Jun 12 21:14:37.283: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) - Jun 12 21:14:37.453: INFO: Exec stderr: "" - Jun 12 21:14:37.453: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:14:37.453: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:14:37.454: INFO: ExecWithOptions: Clientset creation - Jun 12 21:14:37.454: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) - Jun 12 21:14:37.761: INFO: Exec stderr: "" - Jun 12 21:14:37.761: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-2 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:14:37.761: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:14:37.765: INFO: ExecWithOptions: Clientset creation - Jun 12 21:14:37.765: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) - Jun 12 21:14:38.036: INFO: Exec stderr: "" - Jun 12 21:14:38.037: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:14:38.037: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:14:38.039: INFO: ExecWithOptions: Clientset creation - Jun 12 21:14:38.039: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) - Jun 12 21:14:38.270: INFO: Exec stderr: "" - STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 06/12/23 21:14:38.27 - Jun 12 21:14:38.270: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:14:38.270: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:14:38.271: INFO: ExecWithOptions: Clientset creation - Jun 12 21:14:38.271: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) - Jun 12 21:14:38.540: INFO: Exec stderr: "" - Jun 12 21:14:38.540: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:14:38.540: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:14:38.542: INFO: ExecWithOptions: Clientset creation - Jun 12 21:14:38.542: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) - Jun 12 21:14:38.795: INFO: Exec stderr: "" - STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true 06/12/23 21:14:38.796 - Jun 12 21:14:38.796: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:14:38.796: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:14:38.797: INFO: ExecWithOptions: Clientset creation - Jun 12 21:14:38.797: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) - Jun 12 21:14:39.055: INFO: Exec stderr: "" - Jun 12 21:14:39.055: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:14:39.055: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:14:39.057: INFO: ExecWithOptions: Clientset creation - Jun 12 21:14:39.057: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) - Jun 12 21:14:39.297: INFO: Exec stderr: "" - Jun 12 21:14:39.297: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:14:39.297: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:14:39.299: INFO: ExecWithOptions: Clientset creation - Jun 12 21:14:39.299: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) - Jun 12 21:14:39.492: INFO: Exec stderr: "" - Jun 12 21:14:39.492: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9806 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:14:39.492: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:14:39.493: INFO: ExecWithOptions: Clientset creation - Jun 12 21:14:39.494: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-9806/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) - Jun 12 21:14:39.775: INFO: Exec stderr: "" - [AfterEach] [sig-node] KubeletManagedEtcHosts + [It] should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 + STEP: creating a test event 07/27/23 01:56:05.347 + STEP: listing all events in all namespaces 07/27/23 01:56:05.36 + STEP: patching the test event 07/27/23 01:56:05.478 + STEP: fetching the test event 07/27/23 01:56:05.495 + STEP: updating the test event 07/27/23 01:56:05.502 + STEP: getting the test event 07/27/23 01:56:05.524 + STEP: deleting the test event 07/27/23 01:56:05.532 + STEP: listing all events in all namespaces 07/27/23 01:56:05.549 + [AfterEach] [sig-instrumentation] Events test/e2e/framework/node/init/init.go:32 - Jun 12 21:14:39.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + Jul 27 01:56:05.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + [DeferCleanup (Each)] [sig-instrumentation] Events dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + [DeferCleanup (Each)] [sig-instrumentation] Events tear down framework | framework.go:193 - STEP: Destroying namespace "e2e-kubelet-etc-hosts-9806" for this suite. 06/12/23 21:14:39.789 + STEP: Destroying namespace "events-3862" for this suite. 
07/27/23 01:56:05.659 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSS ------------------------------- -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should mutate custom resource with pruning [Conformance] - test/e2e/apimachinery/webhook.go:341 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 +[BeforeEach] [sig-instrumentation] Events API set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:14:39.807 -Jun 12 21:14:39.807: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 21:14:39.809 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:39.849 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:39.857 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:56:05.681 +Jul 27 01:56:05.681: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename events 07/27/23 01:56:05.682 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:05.743 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:05.754 +[BeforeEach] [sig-instrumentation] Events API test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 21:14:39.923 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:14:41.092 -STEP: Deploying the webhook pod 06/12/23 21:14:41.131 -STEP: Wait for the deployment to be ready 06/12/23 21:14:41.163 -Jun 12 21:14:41.184: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 21:14:43.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 14, 41, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 14, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 14, 41, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 14, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 21:14:45.242 -STEP: Verifying the service has paired with the endpoint 06/12/23 21:14:45.278 -Jun 12 21:14:46.279: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should mutate custom resource with pruning [Conformance] - test/e2e/apimachinery/webhook.go:341 -Jun 12 21:14:46.288: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8743-crds.webhook.example.com via the AdmissionRegistration API 06/12/23 21:14:46.825 -STEP: Creating a custom resource that should be mutated by the webhook 06/12/23 
21:14:46.879 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 +[It] should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 +STEP: Create set of events 07/27/23 01:56:05.764 +STEP: get a list of Events with a label in the current namespace 07/27/23 01:56:05.857 +STEP: delete a list of events 07/27/23 01:56:05.871 +Jul 27 01:56:05.871: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity 07/27/23 01:56:05.966 +[AfterEach] [sig-instrumentation] Events API test/e2e/framework/node/init/init.go:32 -Jun 12 21:14:49.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 01:56:05.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events API test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-instrumentation] Events API dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-instrumentation] Events API tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-8926" for this suite. 06/12/23 21:14:49.901 -STEP: Destroying namespace "webhook-8926-markers" for this suite. 06/12/23 21:14:49.938 +STEP: Destroying namespace "events-8280" for this suite. 07/27/23 01:56:05.991 ------------------------------ -• [SLOW TEST] [10.156 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - should mutate custom resource with pruning [Conformance] - test/e2e/apimachinery/webhook.go:341 +• [0.338 seconds] +[sig-instrumentation] Events API +test/e2e/instrumentation/common/framework.go:23 + should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-instrumentation] Events API set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:14:39.807 - Jun 12 21:14:39.807: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 21:14:39.809 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:39.849 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:39.857 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:56:05.681 + Jul 27 01:56:05.681: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename events 07/27/23 01:56:05.682 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:05.743 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:05.754 + [BeforeEach] [sig-instrumentation] Events API test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 
06/12/23 21:14:39.923 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:14:41.092 - STEP: Deploying the webhook pod 06/12/23 21:14:41.131 - STEP: Wait for the deployment to be ready 06/12/23 21:14:41.163 - Jun 12 21:14:41.184: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 21:14:43.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 14, 41, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 14, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 14, 41, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 14, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 21:14:45.242 - STEP: Verifying the service has paired with the endpoint 06/12/23 21:14:45.278 - Jun 12 21:14:46.279: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should mutate custom resource with pruning [Conformance] - test/e2e/apimachinery/webhook.go:341 - Jun 12 21:14:46.288: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8743-crds.webhook.example.com via the AdmissionRegistration API 06/12/23 21:14:46.825 - STEP: Creating a custom resource that should be mutated by the webhook 06/12/23 21:14:46.879 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 + [It] should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 + STEP: Create set of events 07/27/23 01:56:05.764 + STEP: get a list of Events with a label in the current namespace 07/27/23 01:56:05.857 + STEP: delete a list of events 07/27/23 01:56:05.871 + Jul 27 01:56:05.871: INFO: requesting DeleteCollection of events + STEP: check that the list of events matches the requested quantity 07/27/23 01:56:05.966 + [AfterEach] [sig-instrumentation] Events API test/e2e/framework/node/init/init.go:32 - Jun 12 21:14:49.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 01:56:05.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events API test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-instrumentation] Events API dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-instrumentation] Events API tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-8926" for this suite. 
06/12/23 21:14:49.901 - STEP: Destroying namespace "webhook-8926-markers" for this suite. 06/12/23 21:14:49.938 + STEP: Destroying namespace "events-8280" for this suite. 07/27/23 01:56:05.991 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] ConfigMap - should be consumable via the environment [NodeConformance] [Conformance] - test/e2e/common/node/configmap.go:93 -[BeforeEach] [sig-node] ConfigMap +[sig-scheduling] LimitRange + should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 +[BeforeEach] [sig-scheduling] LimitRange set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:14:49.964 -Jun 12 21:14:49.964: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 21:14:49.968 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:50.046 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:50.11 -[BeforeEach] [sig-node] ConfigMap +STEP: Creating a kubernetes client 07/27/23 01:56:06.022 +Jul 27 01:56:06.022: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename limitrange 07/27/23 01:56:06.023 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:06.088 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:06.098 +[BeforeEach] [sig-scheduling] LimitRange test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable via the environment [NodeConformance] [Conformance] - test/e2e/common/node/configmap.go:93 -STEP: Creating configMap configmap-1204/configmap-test-af9a554b-ca9e-426d-9770-392186face38 06/12/23 21:14:50.131 -STEP: Creating a pod to test consume configMaps 06/12/23 21:14:50.148 -Jun 12 21:14:50.176: INFO: Waiting up to 5m0s for pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4" in namespace "configmap-1204" to be "Succeeded or Failed" -Jun 12 21:14:50.191: INFO: Pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.887782ms -Jun 12 21:14:52.200: INFO: Pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024071026s -Jun 12 21:14:54.212: INFO: Pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036129805s -Jun 12 21:14:56.205: INFO: Pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028519707s -STEP: Saw pod success 06/12/23 21:14:56.205 -Jun 12 21:14:56.205: INFO: Pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4" satisfied condition "Succeeded or Failed" -Jun 12 21:14:56.216: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4 container env-test: -STEP: delete the pod 06/12/23 21:14:56.24 -Jun 12 21:14:56.287: INFO: Waiting for pod pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4 to disappear -Jun 12 21:14:56.300: INFO: Pod pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4 no longer exists -[AfterEach] [sig-node] ConfigMap +[It] should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 +STEP: Creating LimitRange "e2e-limitrange-55ttd" in namespace "limitrange-4395" 07/27/23 01:56:06.106 +STEP: Creating another limitRange in another namespace 07/27/23 01:56:06.125 +Jul 27 01:56:06.208: INFO: Namespace "e2e-limitrange-55ttd-2576" created +Jul 27 01:56:06.208: INFO: Creating LimitRange "e2e-limitrange-55ttd" in namespace "e2e-limitrange-55ttd-2576" +STEP: Listing all LimitRanges with label "e2e-test=e2e-limitrange-55ttd" 07/27/23 01:56:06.223 +Jul 27 01:56:06.281: INFO: Found 2 limitRanges +STEP: Patching LimitRange "e2e-limitrange-55ttd" in "limitrange-4395" namespace 07/27/23 01:56:06.281 +Jul 27 01:56:06.301: INFO: LimitRange "e2e-limitrange-55ttd" has been patched +STEP: Delete LimitRange "e2e-limitrange-55ttd" by Collection with labelSelector: "e2e-limitrange-55ttd=patched" 07/27/23 01:56:06.301 +STEP: Confirm that the limitRange "e2e-limitrange-55ttd" has been deleted 07/27/23 01:56:06.337 +Jul 27 01:56:06.337: INFO: Requesting list of LimitRange to confirm quantity +Jul 27 01:56:06.349: INFO: Found 0 LimitRange with label "e2e-limitrange-55ttd=patched" +Jul 27 01:56:06.349: INFO: LimitRange "e2e-limitrange-55ttd" has been deleted. +STEP: Confirm that a single LimitRange still exists with label "e2e-test=e2e-limitrange-55ttd" 07/27/23 01:56:06.349 +Jul 27 01:56:06.367: INFO: Found 1 limitRange +[AfterEach] [sig-scheduling] LimitRange test/e2e/framework/node/init/init.go:32 -Jun 12 21:14:56.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] ConfigMap +Jul 27 01:56:06.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-scheduling] LimitRange test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] ConfigMap +[DeferCleanup (Each)] [sig-scheduling] LimitRange dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] ConfigMap +[DeferCleanup (Each)] [sig-scheduling] LimitRange tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-1204" for this suite. 06/12/23 21:14:56.326 +STEP: Destroying namespace "limitrange-4395" for this suite. 07/27/23 01:56:06.403 +STEP: Destroying namespace "e2e-limitrange-55ttd-2576" for this suite. 
07/27/23 01:56:06.428 ------------------------------ -• [SLOW TEST] [6.375 seconds] -[sig-node] ConfigMap -test/e2e/common/node/framework.go:23 - should be consumable via the environment [NodeConformance] [Conformance] - test/e2e/common/node/configmap.go:93 +• [0.429 seconds] +[sig-scheduling] LimitRange +test/e2e/scheduling/framework.go:40 + should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] ConfigMap + [BeforeEach] [sig-scheduling] LimitRange set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:14:49.964 - Jun 12 21:14:49.964: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 21:14:49.968 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:50.046 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:50.11 - [BeforeEach] [sig-node] ConfigMap + STEP: Creating a kubernetes client 07/27/23 01:56:06.022 + Jul 27 01:56:06.022: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename limitrange 07/27/23 01:56:06.023 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:06.088 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:06.098 + [BeforeEach] [sig-scheduling] LimitRange test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable via the environment [NodeConformance] [Conformance] - test/e2e/common/node/configmap.go:93 - STEP: Creating configMap configmap-1204/configmap-test-af9a554b-ca9e-426d-9770-392186face38 06/12/23 21:14:50.131 - STEP: Creating a pod to test consume configMaps 06/12/23 21:14:50.148 - Jun 12 21:14:50.176: INFO: Waiting up to 5m0s for pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4" in namespace "configmap-1204" to be "Succeeded or Failed" - Jun 12 21:14:50.191: INFO: Pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.887782ms - Jun 12 21:14:52.200: INFO: Pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024071026s - Jun 12 21:14:54.212: INFO: Pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036129805s - Jun 12 21:14:56.205: INFO: Pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028519707s - STEP: Saw pod success 06/12/23 21:14:56.205 - Jun 12 21:14:56.205: INFO: Pod "pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4" satisfied condition "Succeeded or Failed" - Jun 12 21:14:56.216: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4 container env-test: - STEP: delete the pod 06/12/23 21:14:56.24 - Jun 12 21:14:56.287: INFO: Waiting for pod pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4 to disappear - Jun 12 21:14:56.300: INFO: Pod pod-configmaps-73200387-607e-4897-ac21-fff52f2756a4 no longer exists - [AfterEach] [sig-node] ConfigMap + [It] should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 + STEP: Creating LimitRange "e2e-limitrange-55ttd" in namespace "limitrange-4395" 07/27/23 01:56:06.106 + STEP: Creating another limitRange in another namespace 07/27/23 01:56:06.125 + Jul 27 01:56:06.208: INFO: Namespace "e2e-limitrange-55ttd-2576" created + Jul 27 01:56:06.208: INFO: Creating LimitRange "e2e-limitrange-55ttd" in namespace "e2e-limitrange-55ttd-2576" + STEP: Listing all LimitRanges with label "e2e-test=e2e-limitrange-55ttd" 07/27/23 01:56:06.223 + Jul 27 01:56:06.281: INFO: Found 2 limitRanges + STEP: Patching LimitRange "e2e-limitrange-55ttd" in "limitrange-4395" namespace 07/27/23 01:56:06.281 + Jul 27 01:56:06.301: INFO: LimitRange "e2e-limitrange-55ttd" has been patched + STEP: Delete LimitRange "e2e-limitrange-55ttd" by Collection with labelSelector: "e2e-limitrange-55ttd=patched" 07/27/23 01:56:06.301 + STEP: Confirm that the limitRange "e2e-limitrange-55ttd" has been deleted 07/27/23 01:56:06.337 + Jul 27 01:56:06.337: INFO: Requesting list of LimitRange to confirm quantity + Jul 27 01:56:06.349: INFO: Found 0 LimitRange with label "e2e-limitrange-55ttd=patched" + Jul 27 01:56:06.349: INFO: LimitRange "e2e-limitrange-55ttd" has been deleted. + STEP: Confirm that a single LimitRange still exists with label "e2e-test=e2e-limitrange-55ttd" 07/27/23 01:56:06.349 + Jul 27 01:56:06.367: INFO: Found 1 limitRange + [AfterEach] [sig-scheduling] LimitRange test/e2e/framework/node/init/init.go:32 - Jun 12 21:14:56.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] ConfigMap + Jul 27 01:56:06.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-scheduling] LimitRange test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] ConfigMap + [DeferCleanup (Each)] [sig-scheduling] LimitRange dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] ConfigMap + [DeferCleanup (Each)] [sig-scheduling] LimitRange tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-1204" for this suite. 06/12/23 21:14:56.326 + STEP: Destroying namespace "limitrange-4395" for this suite. 07/27/23 01:56:06.403 + STEP: Destroying namespace "e2e-limitrange-55ttd-2576" for this suite. 07/27/23 01:56:06.428 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] ResourceQuota - should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:392 -[BeforeEach] [sig-api-machinery] ResourceQuota - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:14:56.344 -Jun 12 21:14:56.345: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 21:14:56.347 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:56.394 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:56.407 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 01:56:06.452 +Jul 27 01:56:06.452: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 01:56:06.453 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:06.505 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:06.515 +[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 -[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] - test/e2e/apimachinery/resource_quota.go:392 -STEP: Counting existing ResourceQuota 06/12/23 21:14:56.437 -STEP: Creating a ResourceQuota 06/12/23 21:15:01.451 -STEP: Ensuring resource quota status is calculated 06/12/23 21:15:01.521 -STEP: Creating a ReplicationController 06/12/23 21:15:03.534 -STEP: Ensuring resource quota status captures replication controller creation 06/12/23 21:15:03.613 -STEP: Deleting a ReplicationController 06/12/23 21:15:05.626 -STEP: Ensuring resource quota status released usage 06/12/23 21:15:05.642 -[AfterEach] [sig-api-machinery] ResourceQuota +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 +STEP: Creating a pod to test downward API volume plugin 07/27/23 01:56:06.527 +Jul 27 01:56:06.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e" in namespace "projected-9028" to be "Succeeded or Failed" +Jul 27 01:56:06.577: INFO: Pod "downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.930933ms +Jul 27 01:56:08.588: INFO: Pod "downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025504427s +Jul 27 01:56:10.587: INFO: Pod "downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024851574s +STEP: Saw pod success 07/27/23 01:56:10.587 +Jul 27 01:56:10.588: INFO: Pod "downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e" satisfied condition "Succeeded or Failed" +Jul 27 01:56:10.595: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e container client-container: +STEP: delete the pod 07/27/23 01:56:10.617 +Jul 27 01:56:10.637: INFO: Waiting for pod downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e to disappear +Jul 27 01:56:10.648: INFO: Pod downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e no longer exists +[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 21:15:07.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 01:56:10.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-2421" for this suite. 06/12/23 21:15:07.691 +STEP: Destroying namespace "projected-9028" for this suite. 07/27/23 01:56:10.66 ------------------------------ -• [SLOW TEST] [11.364 seconds] -[sig-api-machinery] ResourceQuota -test/e2e/apimachinery/framework.go:23 - should create a ResourceQuota and capture the life of a replication controller. [Conformance] - test/e2e/apimachinery/resource_quota.go:392 +• [4.267 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:14:56.344 - Jun 12 21:14:56.345: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 21:14:56.347 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:14:56.394 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:14:56.407 - [BeforeEach] [sig-api-machinery] ResourceQuota + STEP: Creating a kubernetes client 07/27/23 01:56:06.452 + Jul 27 01:56:06.452: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 01:56:06.453 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:06.505 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:06.515 + [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 - [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:392 - STEP: Counting existing ResourceQuota 06/12/23 21:14:56.437 - STEP: Creating a ResourceQuota 06/12/23 21:15:01.451 - STEP: Ensuring resource quota status is calculated 06/12/23 21:15:01.521 - STEP: Creating a ReplicationController 06/12/23 21:15:03.534 - STEP: Ensuring resource quota status captures replication controller creation 06/12/23 21:15:03.613 - STEP: Deleting a ReplicationController 06/12/23 21:15:05.626 - STEP: Ensuring resource quota status released usage 06/12/23 21:15:05.642 - [AfterEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 + STEP: Creating a pod to test downward API volume plugin 07/27/23 01:56:06.527 + Jul 27 01:56:06.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e" in namespace "projected-9028" to be "Succeeded or Failed" + Jul 27 01:56:06.577: INFO: Pod "downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.930933ms + Jul 27 01:56:08.588: INFO: Pod "downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025504427s + Jul 27 01:56:10.587: INFO: Pod "downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024851574s + STEP: Saw pod success 07/27/23 01:56:10.587 + Jul 27 01:56:10.588: INFO: Pod "downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e" satisfied condition "Succeeded or Failed" + Jul 27 01:56:10.595: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e container client-container: + STEP: delete the pod 07/27/23 01:56:10.617 + Jul 27 01:56:10.637: INFO: Waiting for pod downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e to disappear + Jul 27 01:56:10.648: INFO: Pod downwardapi-volume-0a03a25c-2806-4113-9196-50fb708d0e2e no longer exists + [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 21:15:07.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + Jul 27 01:56:10.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-2421" for this suite. 06/12/23 21:15:07.691 + STEP: Destroying namespace "projected-9028" for this suite. 
07/27/23 01:56:10.66 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +SSSS ------------------------------ -[sig-node] Variable Expansion - should allow substituting values in a volume subpath [Conformance] - test/e2e/common/node/expansion.go:112 -[BeforeEach] [sig-node] Variable Expansion +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:444 +[BeforeEach] [sig-node] Pods set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:15:07.718 -Jun 12 21:15:07.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename var-expansion 06/12/23 21:15:07.72 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:07.838 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:07.875 -[BeforeEach] [sig-node] Variable Expansion +STEP: Creating a kubernetes client 07/27/23 01:56:10.72 +Jul 27 01:56:10.720: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 01:56:10.721 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:10.763 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:10.772 +[BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 -[It] should allow substituting values in a volume subpath [Conformance] - test/e2e/common/node/expansion.go:112 -STEP: Creating a pod to test substitution in volume subpath 06/12/23 21:15:08.094 -Jun 12 21:15:08.124: INFO: Waiting up to 5m0s for pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e" in namespace "var-expansion-7561" to be "Succeeded or Failed" -Jun 12 21:15:08.135: INFO: Pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.182706ms -Jun 12 21:15:10.144: INFO: Pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019632087s -Jun 12 21:15:12.146: INFO: Pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021620915s -Jun 12 21:15:14.164: INFO: Pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039651782s -STEP: Saw pod success 06/12/23 21:15:14.164 -Jun 12 21:15:14.165: INFO: Pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e" satisfied condition "Succeeded or Failed" -Jun 12 21:15:14.174: INFO: Trying to get logs from node 10.138.75.70 pod var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e container dapi-container: -STEP: delete the pod 06/12/23 21:15:14.198 -Jun 12 21:15:14.224: INFO: Waiting for pod var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e to disappear -Jun 12 21:15:14.233: INFO: Pod var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e no longer exists -[AfterEach] [sig-node] Variable Expansion +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:444 +Jul 27 01:56:10.813: INFO: Waiting up to 5m0s for pod "server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54" in namespace "pods-865" to be "running and ready" +Jul 27 01:56:10.824: INFO: Pod "server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.428695ms +Jul 27 01:56:10.824: INFO: The phase of Pod server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:56:12.836: INFO: Pod "server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54": Phase="Running", Reason="", readiness=true. Elapsed: 2.022676663s +Jul 27 01:56:12.836: INFO: The phase of Pod server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54 is Running (Ready = true) +Jul 27 01:56:12.836: INFO: Pod "server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54" satisfied condition "running and ready" +Jul 27 01:56:12.949: INFO: Waiting up to 5m0s for pod "client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b" in namespace "pods-865" to be "Succeeded or Failed" +Jul 27 01:56:12.960: INFO: Pod "client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.607745ms +Jul 27 01:56:14.969: INFO: Pod "client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020069806s +Jul 27 01:56:16.969: INFO: Pod "client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01954549s +STEP: Saw pod success 07/27/23 01:56:16.969 +Jul 27 01:56:16.969: INFO: Pod "client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b" satisfied condition "Succeeded or Failed" +Jul 27 01:56:16.977: INFO: Trying to get logs from node 10.245.128.19 pod client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b container env3cont: +STEP: delete the pod 07/27/23 01:56:17.001 +Jul 27 01:56:17.020: INFO: Waiting for pod client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b to disappear +Jul 27 01:56:17.028: INFO: Pod client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b no longer exists +[AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 -Jun 12 21:15:14.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Variable Expansion +Jul 27 01:56:17.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 -STEP: Destroying namespace "var-expansion-7561" for this suite. 06/12/23 21:15:14.252 +STEP: Destroying namespace "pods-865" for this suite. 
07/27/23 01:56:17.04 ------------------------------ -• [SLOW TEST] [6.556 seconds] -[sig-node] Variable Expansion +• [SLOW TEST] [6.340 seconds] +[sig-node] Pods test/e2e/common/node/framework.go:23 - should allow substituting values in a volume subpath [Conformance] - test/e2e/common/node/expansion.go:112 + should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:444 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Variable Expansion + [BeforeEach] [sig-node] Pods set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:15:07.718 - Jun 12 21:15:07.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename var-expansion 06/12/23 21:15:07.72 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:07.838 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:07.875 - [BeforeEach] [sig-node] Variable Expansion + STEP: Creating a kubernetes client 07/27/23 01:56:10.72 + Jul 27 01:56:10.720: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 01:56:10.721 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:10.763 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:10.772 + [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 - [It] should allow substituting values in a volume subpath [Conformance] - test/e2e/common/node/expansion.go:112 - STEP: Creating a pod to test substitution in volume subpath 06/12/23 21:15:08.094 - Jun 12 21:15:08.124: INFO: Waiting up to 5m0s for pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e" in namespace "var-expansion-7561" to be "Succeeded or Failed" - Jun 12 21:15:08.135: INFO: Pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.182706ms - Jun 12 21:15:10.144: INFO: Pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019632087s - Jun 12 21:15:12.146: INFO: Pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021620915s - Jun 12 21:15:14.164: INFO: Pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.039651782s - STEP: Saw pod success 06/12/23 21:15:14.164 - Jun 12 21:15:14.165: INFO: Pod "var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e" satisfied condition "Succeeded or Failed" - Jun 12 21:15:14.174: INFO: Trying to get logs from node 10.138.75.70 pod var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e container dapi-container: - STEP: delete the pod 06/12/23 21:15:14.198 - Jun 12 21:15:14.224: INFO: Waiting for pod var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e to disappear - Jun 12 21:15:14.233: INFO: Pod var-expansion-a3e67241-fb6b-4cfc-98bd-ca815b5ade4e no longer exists - [AfterEach] [sig-node] Variable Expansion + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:444 + Jul 27 01:56:10.813: INFO: Waiting up to 5m0s for pod "server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54" in namespace "pods-865" to be "running and ready" + Jul 27 01:56:10.824: INFO: Pod "server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54": Phase="Pending", Reason="", readiness=false. Elapsed: 10.428695ms + Jul 27 01:56:10.824: INFO: The phase of Pod server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:56:12.836: INFO: Pod "server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54": Phase="Running", Reason="", readiness=true. Elapsed: 2.022676663s + Jul 27 01:56:12.836: INFO: The phase of Pod server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54 is Running (Ready = true) + Jul 27 01:56:12.836: INFO: Pod "server-envvars-eb6ddf6c-bde8-4369-aad3-b6d284d47c54" satisfied condition "running and ready" + Jul 27 01:56:12.949: INFO: Waiting up to 5m0s for pod "client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b" in namespace "pods-865" to be "Succeeded or Failed" + Jul 27 01:56:12.960: INFO: Pod "client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.607745ms + Jul 27 01:56:14.969: INFO: Pod "client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020069806s + Jul 27 01:56:16.969: INFO: Pod "client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01954549s + STEP: Saw pod success 07/27/23 01:56:16.969 + Jul 27 01:56:16.969: INFO: Pod "client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b" satisfied condition "Succeeded or Failed" + Jul 27 01:56:16.977: INFO: Trying to get logs from node 10.245.128.19 pod client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b container env3cont: + STEP: delete the pod 07/27/23 01:56:17.001 + Jul 27 01:56:17.020: INFO: Waiting for pod client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b to disappear + Jul 27 01:56:17.028: INFO: Pod client-envvars-dd74c7e1-77d4-4d7d-a2a0-b02832424e1b no longer exists + [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 - Jun 12 21:15:14.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Variable Expansion + Jul 27 01:56:17.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 - STEP: Destroying namespace "var-expansion-7561" for this suite. 06/12/23 21:15:14.252 + STEP: Destroying namespace "pods-865" for this suite. 07/27/23 01:56:17.04 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] EmptyDir volumes - should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:97 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:15:14.282 -Jun 12 21:15:14.283: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 21:15:14.285 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:14.328 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:14.338 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 01:56:17.062 +Jul 27 01:56:17.062: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 01:56:17.063 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:17.102 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:17.11 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:97 -STEP: Creating a pod to test emptydir 0644 on tmpfs 06/12/23 21:15:14.35 -Jun 12 21:15:14.377: INFO: Waiting up to 5m0s for pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780" in namespace "emptydir-3796" to be "Succeeded or Failed" -Jun 12 21:15:14.390: INFO: Pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780": Phase="Pending", Reason="", readiness=false. Elapsed: 12.51632ms -Jun 12 21:15:16.401: INFO: Pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023628294s -Jun 12 21:15:18.403: INFO: Pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025341803s -Jun 12 21:15:20.405: INFO: Pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02758633s -STEP: Saw pod success 06/12/23 21:15:20.405 -Jun 12 21:15:20.406: INFO: Pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780" satisfied condition "Succeeded or Failed" -Jun 12 21:15:20.416: INFO: Trying to get logs from node 10.138.75.70 pod pod-df82a95e-315a-4bbd-89da-15ef3cf8d780 container test-container: -STEP: delete the pod 06/12/23 21:15:20.457 -Jun 12 21:15:20.494: INFO: Waiting for pod pod-df82a95e-315a-4bbd-89da-15ef3cf8d780 to disappear -Jun 12 21:15:20.507: INFO: Pod pod-df82a95e-315a-4bbd-89da-15ef3cf8d780 no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 +STEP: creating service in namespace services-6666 07/27/23 01:56:17.119 +STEP: creating service affinity-clusterip-transition in namespace services-6666 07/27/23 01:56:17.119 +STEP: creating replication controller affinity-clusterip-transition in namespace services-6666 07/27/23 01:56:17.166 +I0727 01:56:17.191849 20 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-6666, replica count: 3 +I0727 01:56:20.242382 20 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jul 27 01:56:20.271: INFO: Creating new exec pod +Jul 27 01:56:20.296: INFO: Waiting up to 5m0s for pod "execpod-affinity2sfd2" in namespace "services-6666" to be "running" +Jul 27 01:56:20.304: INFO: Pod "execpod-affinity2sfd2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.362653ms +Jul 27 01:56:22.313: INFO: Pod "execpod-affinity2sfd2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.016853383s +Jul 27 01:56:22.313: INFO: Pod "execpod-affinity2sfd2" satisfied condition "running" +Jul 27 01:56:23.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6666 exec execpod-affinity2sfd2 -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip-transition 80' +Jul 27 01:56:23.534: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Jul 27 01:56:23.534: INFO: stdout: "" +Jul 27 01:56:23.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6666 exec execpod-affinity2sfd2 -- /bin/sh -x -c nc -v -z -w 2 172.21.16.221 80' +Jul 27 01:56:23.768: INFO: stderr: "+ nc -v -z -w 2 172.21.16.221 80\nConnection to 172.21.16.221 80 port [tcp/http] succeeded!\n" +Jul 27 01:56:23.768: INFO: stdout: "" +Jul 27 01:56:23.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6666 exec execpod-affinity2sfd2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.16.221:80/ ; done' +Jul 27 01:56:24.106: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n" +Jul 27 01:56:24.106: INFO: stdout: "\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-hmmnq" +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-fswwv +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-fswwv +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.106: INFO: Received response from host: 
affinity-clusterip-transition-fswwv +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-hmmnq +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-fswwv +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-fswwv +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-hmmnq +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-fswwv +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-hmmnq +Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-hmmnq +Jul 27 01:56:24.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6666 exec execpod-affinity2sfd2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.16.221:80/ ; done' +Jul 27 01:56:24.413: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n" +Jul 27 01:56:24.413: INFO: stdout: "\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-fswwv" +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-fswwv +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-fswwv +Jul 27 
01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-fswwv +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-fswwv +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq +Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-fswwv +Jul 27 01:56:54.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6666 exec execpod-affinity2sfd2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.16.221:80/ ; done' +Jul 27 01:56:54.699: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n" +Jul 27 01:56:54.699: INFO: stdout: "\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj" +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from 
host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj +Jul 27 01:56:54.699: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-6666, will wait for the garbage collector to delete the pods 07/27/23 01:56:54.722 +Jul 27 01:56:54.809: INFO: Deleting ReplicationController affinity-clusterip-transition took: 22.744565ms +Jul 27 01:56:54.909: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.103546ms +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:15:20.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 01:56:57.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-3796" for this suite. 06/12/23 21:15:20.528 +STEP: Destroying namespace "services-6666" for this suite. 
07/27/23 01:56:57.802 ------------------------------ -• [SLOW TEST] [6.288 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:97 +• [SLOW TEST] [40.766 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:15:14.282 - Jun 12 21:15:14.283: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 21:15:14.285 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:14.328 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:14.338 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 01:56:17.062 + Jul 27 01:56:17.062: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 01:56:17.063 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:17.102 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:17.11 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:97 - STEP: Creating a pod to test emptydir 0644 on tmpfs 06/12/23 21:15:14.35 - Jun 12 21:15:14.377: INFO: Waiting up to 5m0s for pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780" in namespace "emptydir-3796" to be "Succeeded or Failed" - Jun 12 21:15:14.390: INFO: Pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780": Phase="Pending", Reason="", readiness=false. Elapsed: 12.51632ms - Jun 12 21:15:16.401: INFO: Pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023628294s - Jun 12 21:15:18.403: INFO: Pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025341803s - Jun 12 21:15:20.405: INFO: Pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.02758633s - STEP: Saw pod success 06/12/23 21:15:20.405 - Jun 12 21:15:20.406: INFO: Pod "pod-df82a95e-315a-4bbd-89da-15ef3cf8d780" satisfied condition "Succeeded or Failed" - Jun 12 21:15:20.416: INFO: Trying to get logs from node 10.138.75.70 pod pod-df82a95e-315a-4bbd-89da-15ef3cf8d780 container test-container: - STEP: delete the pod 06/12/23 21:15:20.457 - Jun 12 21:15:20.494: INFO: Waiting for pod pod-df82a95e-315a-4bbd-89da-15ef3cf8d780 to disappear - Jun 12 21:15:20.507: INFO: Pod pod-df82a95e-315a-4bbd-89da-15ef3cf8d780 no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 + STEP: creating service in namespace services-6666 07/27/23 01:56:17.119 + STEP: creating service affinity-clusterip-transition in namespace services-6666 07/27/23 01:56:17.119 + STEP: creating replication controller affinity-clusterip-transition in namespace services-6666 07/27/23 01:56:17.166 + I0727 01:56:17.191849 20 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-6666, replica count: 3 + I0727 01:56:20.242382 20 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jul 27 01:56:20.271: INFO: Creating new exec pod + Jul 27 01:56:20.296: INFO: Waiting up to 5m0s for pod "execpod-affinity2sfd2" in namespace "services-6666" to be "running" + Jul 27 01:56:20.304: INFO: Pod "execpod-affinity2sfd2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.362653ms + Jul 27 01:56:22.313: INFO: Pod "execpod-affinity2sfd2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.016853383s + Jul 27 01:56:22.313: INFO: Pod "execpod-affinity2sfd2" satisfied condition "running" + Jul 27 01:56:23.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6666 exec execpod-affinity2sfd2 -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip-transition 80' + Jul 27 01:56:23.534: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" + Jul 27 01:56:23.534: INFO: stdout: "" + Jul 27 01:56:23.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6666 exec execpod-affinity2sfd2 -- /bin/sh -x -c nc -v -z -w 2 172.21.16.221 80' + Jul 27 01:56:23.768: INFO: stderr: "+ nc -v -z -w 2 172.21.16.221 80\nConnection to 172.21.16.221 80 port [tcp/http] succeeded!\n" + Jul 27 01:56:23.768: INFO: stdout: "" + Jul 27 01:56:23.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6666 exec execpod-affinity2sfd2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.16.221:80/ ; done' + Jul 27 01:56:24.106: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n" + Jul 27 01:56:24.106: INFO: stdout: "\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-hmmnq" + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-fswwv + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-fswwv + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.106: INFO: Received response from host: 
affinity-clusterip-transition-fswwv + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-hmmnq + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-fswwv + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-fswwv + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-hmmnq + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-fswwv + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-hmmnq + Jul 27 01:56:24.106: INFO: Received response from host: affinity-clusterip-transition-hmmnq + Jul 27 01:56:24.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6666 exec execpod-affinity2sfd2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.16.221:80/ ; done' + Jul 27 01:56:24.413: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n" + Jul 27 01:56:24.413: INFO: stdout: "\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-fswwv\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-hmmnq\naffinity-clusterip-transition-fswwv" + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-fswwv + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.413: INFO: Received response from host: 
affinity-clusterip-transition-fswwv + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-fswwv + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-fswwv + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-hmmnq + Jul 27 01:56:24.413: INFO: Received response from host: affinity-clusterip-transition-fswwv + Jul 27 01:56:54.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6666 exec execpod-affinity2sfd2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.16.221:80/ ; done' + Jul 27 01:56:54.699: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.16.221:80/\n" + Jul 27 01:56:54.699: INFO: stdout: "\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj\naffinity-clusterip-transition-ng7wj" + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: 
affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Received response from host: affinity-clusterip-transition-ng7wj + Jul 27 01:56:54.699: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-6666, will wait for the garbage collector to delete the pods 07/27/23 01:56:54.722 + Jul 27 01:56:54.809: INFO: Deleting ReplicationController affinity-clusterip-transition took: 22.744565ms + Jul 27 01:56:54.909: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.103546ms + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:15:20.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 01:56:57.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-3796" for this suite. 06/12/23 21:15:20.528 + STEP: Destroying namespace "services-6666" for this suite. 
07/27/23 01:56:57.802 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] ReplicaSet - should serve a basic image on each replica with a public image [Conformance] - test/e2e/apps/replica_set.go:111 -[BeforeEach] [sig-apps] ReplicaSet +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 +[BeforeEach] [sig-node] Kubelet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:15:20.584 -Jun 12 21:15:20.585: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replicaset 06/12/23 21:15:20.588 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:20.675 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:20.714 -[BeforeEach] [sig-apps] ReplicaSet +STEP: Creating a kubernetes client 07/27/23 01:56:57.83 +Jul 27 01:56:57.830: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubelet-test 07/27/23 01:56:57.831 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:57.874 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:57.884 +[BeforeEach] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:31 -[It] should serve a basic image on each replica with a public image [Conformance] - test/e2e/apps/replica_set.go:111 -Jun 12 21:15:20.779: INFO: Creating ReplicaSet my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9 -Jun 12 21:15:20.820: INFO: Pod name my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9: Found 1 pods out of 1 -Jun 12 21:15:20.820: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9" is running -Jun 12 21:15:20.820: INFO: Waiting up to 5m0s for pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v" in namespace "replicaset-8822" to be "running" -Jun 12 21:15:20.830: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.159623ms -Jun 12 21:15:22.841: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020863379s -Jun 12 21:15:24.841: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021031102s -Jun 12 21:15:26.855: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.035513178s -Jun 12 21:15:26.856: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v" satisfied condition "running" -Jun 12 21:15:26.856: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v" is running (conditions: []) -Jun 12 21:15:26.856: INFO: Trying to dial the pod -Jun 12 21:15:31.901: INFO: Controller my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9: Got expected result from replica 1 [my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v]: "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v", 1 of 1 required successes so far -[AfterEach] [sig-apps] ReplicaSet +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 +Jul 27 01:56:57.925: INFO: Waiting up to 5m0s for pod "busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477" in namespace "kubelet-test-9922" to be "running and ready" +Jul 27 01:56:57.934: INFO: Pod "busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477": Phase="Pending", Reason="", readiness=false. Elapsed: 8.768165ms +Jul 27 01:56:57.934: INFO: The phase of Pod busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 01:56:59.955: INFO: Pod "busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477": Phase="Running", Reason="", readiness=true. Elapsed: 2.030293705s +Jul 27 01:56:59.956: INFO: The phase of Pod busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477 is Running (Ready = true) +Jul 27 01:56:59.956: INFO: Pod "busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477" satisfied condition "running and ready" +[AfterEach] [sig-node] Kubelet test/e2e/framework/node/init/init.go:32 -Jun 12 21:15:31.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicaSet +Jul 27 01:56:59.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-node] Kubelet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-node] Kubelet tear down framework | framework.go:193 -STEP: Destroying namespace "replicaset-8822" for this suite. 06/12/23 21:15:31.92 +STEP: Destroying namespace "kubelet-test-9922" for this suite. 
07/27/23 01:57:00.056 ------------------------------ -• [SLOW TEST] [11.351 seconds] -[sig-apps] ReplicaSet -test/e2e/apps/framework.go:23 - should serve a basic image on each replica with a public image [Conformance] - test/e2e/apps/replica_set.go:111 +• [2.253 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a read only busybox container + test/e2e/common/node/kubelet.go:175 + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicaSet + [BeforeEach] [sig-node] Kubelet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:15:20.584 - Jun 12 21:15:20.585: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replicaset 06/12/23 21:15:20.588 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:20.675 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:20.714 - [BeforeEach] [sig-apps] ReplicaSet + STEP: Creating a kubernetes client 07/27/23 01:56:57.83 + Jul 27 01:56:57.830: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubelet-test 07/27/23 01:56:57.831 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:56:57.874 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:56:57.884 + [BeforeEach] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:31 - [It] should serve a basic image on each replica with a public image [Conformance] - test/e2e/apps/replica_set.go:111 - Jun 12 21:15:20.779: INFO: Creating ReplicaSet my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9 - Jun 12 21:15:20.820: INFO: Pod name my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9: Found 1 pods out of 1 - Jun 12 21:15:20.820: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9" is running - Jun 12 21:15:20.820: INFO: Waiting up to 5m0s for pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v" in namespace "replicaset-8822" to be "running" - Jun 12 21:15:20.830: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.159623ms - Jun 12 21:15:22.841: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020863379s - Jun 12 21:15:24.841: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021031102s - Jun 12 21:15:26.855: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.035513178s - Jun 12 21:15:26.856: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v" satisfied condition "running" - Jun 12 21:15:26.856: INFO: Pod "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v" is running (conditions: []) - Jun 12 21:15:26.856: INFO: Trying to dial the pod - Jun 12 21:15:31.901: INFO: Controller my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9: Got expected result from replica 1 [my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v]: "my-hostname-basic-d1c7dfbd-63db-4517-ac5b-547fa944a3b9-rbz9v", 1 of 1 required successes so far - [AfterEach] [sig-apps] ReplicaSet + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 + Jul 27 01:56:57.925: INFO: Waiting up to 5m0s for pod "busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477" in namespace "kubelet-test-9922" to be "running and ready" + Jul 27 01:56:57.934: INFO: Pod "busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477": Phase="Pending", Reason="", readiness=false. Elapsed: 8.768165ms + Jul 27 01:56:57.934: INFO: The phase of Pod busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 01:56:59.955: INFO: Pod "busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477": Phase="Running", Reason="", readiness=true. Elapsed: 2.030293705s + Jul 27 01:56:59.956: INFO: The phase of Pod busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477 is Running (Ready = true) + Jul 27 01:56:59.956: INFO: Pod "busybox-readonly-fsf3ecf285-c84a-453b-9f9f-230f3c278477" satisfied condition "running and ready" + [AfterEach] [sig-node] Kubelet test/e2e/framework/node/init/init.go:32 - Jun 12 21:15:31.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicaSet + Jul 27 01:56:59.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-node] Kubelet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-node] Kubelet tear down framework | framework.go:193 - STEP: Destroying namespace "replicaset-8822" for this suite. 06/12/23 21:15:31.92 + STEP: Destroying namespace "kubelet-test-9922" for this suite. 
07/27/23 01:57:00.056 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-storage] Projected secret - should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:67 -[BeforeEach] [sig-storage] Projected secret +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:205 +[BeforeEach] [sig-network] EndpointSlice set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:15:31.956 -Jun 12 21:15:31.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:15:31.959 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:31.999 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:32.006 -[BeforeEach] [sig-storage] Projected secret +STEP: Creating a kubernetes client 07/27/23 01:57:00.084 +Jul 27 01:57:00.084: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename endpointslice 07/27/23 01:57:00.085 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:00.152 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:00.165 +[BeforeEach] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:67 -STEP: Creating projection with secret that has name projected-secret-test-7af11f9e-2c06-44c5-8be5-d1e15df46083 06/12/23 21:15:32.017 -STEP: Creating a pod to test consume secrets 06/12/23 21:15:32.043 -Jun 12 21:15:32.082: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322" in namespace "projected-8647" to be "Succeeded or Failed" -Jun 12 21:15:32.091: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322": Phase="Pending", Reason="", readiness=false. Elapsed: 9.643578ms -Jun 12 21:15:34.102: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020281883s -Jun 12 21:15:36.101: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322": Phase="Running", Reason="", readiness=false. Elapsed: 4.01954334s -Jun 12 21:15:38.103: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322": Phase="Running", Reason="", readiness=false. Elapsed: 6.020853341s -Jun 12 21:15:40.176: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.093922592s -STEP: Saw pod success 06/12/23 21:15:40.176 -Jun 12 21:15:40.176: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322" satisfied condition "Succeeded or Failed" -Jun 12 21:15:40.190: INFO: Trying to get logs from node 10.138.75.112 pod pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322 container projected-secret-volume-test: -STEP: delete the pod 06/12/23 21:15:40.304 -Jun 12 21:15:40.358: INFO: Waiting for pod pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322 to disappear -Jun 12 21:15:40.374: INFO: Pod pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322 no longer exists -[AfterEach] [sig-storage] Projected secret +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:205 +W0727 01:57:00.204050 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "container1" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container1" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "container1" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "container1" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: referencing a single matching pod 07/27/23 01:57:05.38 +STEP: referencing matching pods with named port 07/27/23 01:57:10.405 +STEP: creating empty Endpoints and EndpointSlices for no matching Pods 07/27/23 01:57:15.432 +STEP: recreating EndpointSlices after they've been deleted 07/27/23 01:57:20.457 +Jul 27 01:57:20.514: INFO: EndpointSlice for Service endpointslice-8972/example-named-port not found +[AfterEach] [sig-network] EndpointSlice test/e2e/framework/node/init/init.go:32 -Jun 12 21:15:40.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected secret +Jul 27 01:57:30.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-network] EndpointSlice dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-network] EndpointSlice tear down framework | framework.go:193 -STEP: Destroying namespace "projected-8647" for this suite. 06/12/23 21:15:40.442 +STEP: Destroying namespace "endpointslice-8972" for this suite. 
07/27/23 01:57:30.581 ------------------------------ -• [SLOW TEST] [8.537 seconds] -[sig-storage] Projected secret -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:67 +• [SLOW TEST] [30.542 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:205 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected secret + [BeforeEach] [sig-network] EndpointSlice set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:15:31.956 - Jun 12 21:15:31.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:15:31.959 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:31.999 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:32.006 - [BeforeEach] [sig-storage] Projected secret + STEP: Creating a kubernetes client 07/27/23 01:57:00.084 + Jul 27 01:57:00.084: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename endpointslice 07/27/23 01:57:00.085 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:00.152 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:00.165 + [BeforeEach] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:67 - STEP: Creating projection with secret that has name projected-secret-test-7af11f9e-2c06-44c5-8be5-d1e15df46083 06/12/23 21:15:32.017 - STEP: Creating a pod to test consume secrets 06/12/23 21:15:32.043 - Jun 12 21:15:32.082: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322" in namespace "projected-8647" to be "Succeeded or Failed" - Jun 12 21:15:32.091: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322": Phase="Pending", Reason="", readiness=false. Elapsed: 9.643578ms - Jun 12 21:15:34.102: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020281883s - Jun 12 21:15:36.101: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322": Phase="Running", Reason="", readiness=false. Elapsed: 4.01954334s - Jun 12 21:15:38.103: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322": Phase="Running", Reason="", readiness=false. Elapsed: 6.020853341s - Jun 12 21:15:40.176: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.093922592s - STEP: Saw pod success 06/12/23 21:15:40.176 - Jun 12 21:15:40.176: INFO: Pod "pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322" satisfied condition "Succeeded or Failed" - Jun 12 21:15:40.190: INFO: Trying to get logs from node 10.138.75.112 pod pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322 container projected-secret-volume-test: - STEP: delete the pod 06/12/23 21:15:40.304 - Jun 12 21:15:40.358: INFO: Waiting for pod pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322 to disappear - Jun 12 21:15:40.374: INFO: Pod pod-projected-secrets-382d5074-e18d-45da-a57b-3c3fd297b322 no longer exists - [AfterEach] [sig-storage] Projected secret + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:205 + W0727 01:57:00.204050 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "container1" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container1" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "container1" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "container1" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: referencing a single matching pod 07/27/23 01:57:05.38 + STEP: referencing matching pods with named port 07/27/23 01:57:10.405 + STEP: creating empty Endpoints and EndpointSlices for no matching Pods 07/27/23 01:57:15.432 + STEP: recreating EndpointSlices after they've been deleted 07/27/23 01:57:20.457 + Jul 27 01:57:20.514: INFO: EndpointSlice for Service endpointslice-8972/example-named-port not found + [AfterEach] [sig-network] EndpointSlice test/e2e/framework/node/init/init.go:32 - Jun 12 21:15:40.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected secret + Jul 27 01:57:30.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-network] EndpointSlice dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-network] EndpointSlice tear down framework | framework.go:193 - STEP: Destroying namespace "projected-8647" for this suite. 06/12/23 21:15:40.442 + STEP: Destroying namespace "endpointslice-8972" for this suite. 
07/27/23 01:57:30.581 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSS ------------------------------ -[sig-network] DNS - should provide DNS for ExternalName services [Conformance] - test/e2e/network/dns.go:333 -[BeforeEach] [sig-network] DNS +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:15:40.497 -Jun 12 21:15:40.497: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename dns 06/12/23 21:15:40.502 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:40.598 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:40.721 -[BeforeEach] [sig-network] DNS +STEP: Creating a kubernetes client 07/27/23 01:57:30.628 +Jul 27 01:57:30.628: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 01:57:30.633 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:30.676 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:30.701 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[It] should provide DNS for ExternalName services [Conformance] - test/e2e/network/dns.go:333 -STEP: Creating a test externalName service 06/12/23 21:15:40.784 -STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:40.806 -STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:40.806 -STEP: creating a pod to probe DNS 06/12/23 21:15:40.807 -STEP: submitting the pod to kubernetes 06/12/23 21:15:40.808 -Jun 12 21:15:40.836: INFO: Waiting up to 15m0s for pod "dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93" in namespace "dns-9737" to be "running" -Jun 12 21:15:40.851: INFO: Pod "dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93": Phase="Pending", Reason="", readiness=false. Elapsed: 11.694682ms -Jun 12 21:15:42.861: INFO: Pod "dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022564869s -Jun 12 21:15:44.862: INFO: Pod "dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.022696018s -Jun 12 21:15:44.862: INFO: Pod "dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93" satisfied condition "running" -STEP: retrieving the pod 06/12/23 21:15:44.862 -STEP: looking for the results for each expected name from probers 06/12/23 21:15:44.872 -Jun 12 21:15:44.905: INFO: DNS probes using dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93 succeeded - -STEP: deleting the pod 06/12/23 21:15:44.905 -STEP: changing the externalName to bar.example.com 06/12/23 21:15:44.939 -STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:44.977 -STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:44.978 -STEP: creating a second pod to probe DNS 06/12/23 21:15:44.978 -STEP: submitting the pod to kubernetes 06/12/23 21:15:44.978 -Jun 12 21:15:45.002: INFO: Waiting up to 15m0s for pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8" in namespace "dns-9737" to be "running" -Jun 12 21:15:45.014: INFO: Pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.540043ms -Jun 12 21:15:47.027: INFO: Pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024966818s -Jun 12 21:15:49.025: INFO: Pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023188781s -Jun 12 21:15:51.031: INFO: Pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8": Phase="Running", Reason="", readiness=true. Elapsed: 6.029227176s -Jun 12 21:15:51.031: INFO: Pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8" satisfied condition "running" -STEP: retrieving the pod 06/12/23 21:15:51.032 -STEP: looking for the results for each expected name from probers 06/12/23 21:15:51.042 -Jun 12 21:15:51.115: INFO: DNS probes using dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8 succeeded - -STEP: deleting the pod 06/12/23 21:15:51.115 -STEP: changing the service to type=ClusterIP 06/12/23 21:15:51.141 -STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:51.191 -STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:51.191 -STEP: creating a third pod to probe DNS 06/12/23 21:15:51.192 -STEP: submitting the pod to kubernetes 06/12/23 21:15:51.204 -Jun 12 21:15:51.225: INFO: Waiting up to 15m0s for pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643" in namespace "dns-9737" to be "running" -Jun 12 21:15:51.235: INFO: Pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643": Phase="Pending", Reason="", readiness=false. Elapsed: 9.195293ms -Jun 12 21:15:53.344: INFO: Pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118021009s -Jun 12 21:15:55.246: INFO: Pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.020527204s -Jun 12 21:15:57.245: INFO: Pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643": Phase="Running", Reason="", readiness=true. Elapsed: 6.019375754s -Jun 12 21:15:57.245: INFO: Pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643" satisfied condition "running" -STEP: retrieving the pod 06/12/23 21:15:57.245 -STEP: looking for the results for each expected name from probers 06/12/23 21:15:57.254 -Jun 12 21:15:57.284: INFO: DNS probes using dns-test-351f3c2f-9280-40d1-a69c-16ec12712643 succeeded - -STEP: deleting the pod 06/12/23 21:15:57.284 -STEP: deleting the test externalName service 06/12/23 21:15:57.307 -[AfterEach] [sig-network] DNS +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1395 07/27/23 01:57:30.729 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 07/27/23 01:57:30.802 +STEP: creating service externalsvc in namespace services-1395 07/27/23 01:57:30.802 +STEP: creating replication controller externalsvc in namespace services-1395 07/27/23 01:57:30.848 +I0727 01:57:30.922745 20 runners.go:193] Created replication controller with name: externalsvc, namespace: services-1395, replica count: 2 +I0727 01:57:34.011093 20 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName 07/27/23 01:57:34.034 +Jul 27 01:57:34.145: INFO: Creating new exec pod +Jul 27 01:57:34.175: INFO: Waiting up to 5m0s for pod "execpodql4sf" in namespace "services-1395" to be "running" +Jul 27 01:57:34.220: INFO: Pod "execpodql4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 27.244322ms +Jul 27 01:57:36.307: INFO: Pod "execpodql4sf": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.114627355s +Jul 27 01:57:36.307: INFO: Pod "execpodql4sf" satisfied condition "running" +Jul 27 01:57:36.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-1395 exec execpodql4sf -- /bin/sh -x -c nslookup clusterip-service.services-1395.svc.cluster.local' +Jul 27 01:57:36.685: INFO: stderr: "+ nslookup clusterip-service.services-1395.svc.cluster.local\n" +Jul 27 01:57:36.685: INFO: stdout: "Server:\t\t172.21.0.10\nAddress:\t172.21.0.10#53\n\nclusterip-service.services-1395.svc.cluster.local\tcanonical name = externalsvc.services-1395.svc.cluster.local.\nName:\texternalsvc.services-1395.svc.cluster.local\nAddress: 172.21.39.30\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-1395, will wait for the garbage collector to delete the pods 07/27/23 01:57:36.685 +Jul 27 01:57:36.778: INFO: Deleting ReplicationController externalsvc took: 24.326793ms +Jul 27 01:57:36.880: INFO: Terminating ReplicationController externalsvc pods took: 101.49253ms +Jul 27 01:57:39.736: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:15:57.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] DNS +Jul 27 01:57:39.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "dns-9737" for this suite. 06/12/23 21:15:57.37 +STEP: Destroying namespace "services-1395" for this suite. 
07/27/23 01:57:39.789 ------------------------------ -• [SLOW TEST] [16.888 seconds] -[sig-network] DNS +• [SLOW TEST] [9.186 seconds] +[sig-network] Services test/e2e/network/common/framework.go:23 - should provide DNS for ExternalName services [Conformance] - test/e2e/network/dns.go:333 + should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] DNS + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:15:40.497 - Jun 12 21:15:40.497: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename dns 06/12/23 21:15:40.502 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:40.598 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:40.721 - [BeforeEach] [sig-network] DNS + STEP: Creating a kubernetes client 07/27/23 01:57:30.628 + Jul 27 01:57:30.628: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 01:57:30.633 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:30.676 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:30.701 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [It] should provide DNS for ExternalName services [Conformance] - test/e2e/network/dns.go:333 - STEP: Creating a test externalName service 06/12/23 21:15:40.784 - STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:40.806 - STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:40.806 - STEP: creating a pod to probe DNS 06/12/23 21:15:40.807 - STEP: submitting the pod to kubernetes 06/12/23 21:15:40.808 - Jun 12 21:15:40.836: INFO: Waiting up to 15m0s for pod "dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93" in namespace "dns-9737" to be "running" - Jun 12 21:15:40.851: INFO: Pod "dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93": Phase="Pending", Reason="", readiness=false. Elapsed: 11.694682ms - Jun 12 21:15:42.861: INFO: Pod "dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022564869s - Jun 12 21:15:44.862: INFO: Pod "dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.022696018s - Jun 12 21:15:44.862: INFO: Pod "dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93" satisfied condition "running" - STEP: retrieving the pod 06/12/23 21:15:44.862 - STEP: looking for the results for each expected name from probers 06/12/23 21:15:44.872 - Jun 12 21:15:44.905: INFO: DNS probes using dns-test-5cf7d2b8-9a68-49da-a18a-f81f906e5f93 succeeded - - STEP: deleting the pod 06/12/23 21:15:44.905 - STEP: changing the externalName to bar.example.com 06/12/23 21:15:44.939 - STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:44.977 - STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:44.978 - STEP: creating a second pod to probe DNS 06/12/23 21:15:44.978 - STEP: submitting the pod to kubernetes 06/12/23 21:15:44.978 - Jun 12 21:15:45.002: INFO: Waiting up to 15m0s for pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8" in namespace "dns-9737" to be "running" - Jun 12 21:15:45.014: INFO: Pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.540043ms - Jun 12 21:15:47.027: INFO: Pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024966818s - Jun 12 21:15:49.025: INFO: Pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023188781s - Jun 12 21:15:51.031: INFO: Pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8": Phase="Running", Reason="", readiness=true. Elapsed: 6.029227176s - Jun 12 21:15:51.031: INFO: Pod "dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8" satisfied condition "running" - STEP: retrieving the pod 06/12/23 21:15:51.032 - STEP: looking for the results for each expected name from probers 06/12/23 21:15:51.042 - Jun 12 21:15:51.115: INFO: DNS probes using dns-test-8bc0fea2-002c-46f5-b5c1-740446d176b8 succeeded - - STEP: deleting the pod 06/12/23 21:15:51.115 - STEP: changing the service to type=ClusterIP 06/12/23 21:15:51.141 - STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:51.191 - STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9737.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9737.svc.cluster.local; sleep 1; done - 06/12/23 21:15:51.191 - STEP: creating a third pod to probe DNS 06/12/23 21:15:51.192 - STEP: submitting the pod to kubernetes 06/12/23 21:15:51.204 - Jun 12 21:15:51.225: INFO: Waiting up to 15m0s for pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643" in namespace "dns-9737" to be "running" - Jun 12 21:15:51.235: INFO: Pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643": Phase="Pending", Reason="", readiness=false. Elapsed: 9.195293ms - Jun 12 21:15:53.344: INFO: Pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118021009s - Jun 12 21:15:55.246: INFO: Pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.020527204s - Jun 12 21:15:57.245: INFO: Pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643": Phase="Running", Reason="", readiness=true. Elapsed: 6.019375754s - Jun 12 21:15:57.245: INFO: Pod "dns-test-351f3c2f-9280-40d1-a69c-16ec12712643" satisfied condition "running" - STEP: retrieving the pod 06/12/23 21:15:57.245 - STEP: looking for the results for each expected name from probers 06/12/23 21:15:57.254 - Jun 12 21:15:57.284: INFO: DNS probes using dns-test-351f3c2f-9280-40d1-a69c-16ec12712643 succeeded - - STEP: deleting the pod 06/12/23 21:15:57.284 - STEP: deleting the test externalName service 06/12/23 21:15:57.307 - [AfterEach] [sig-network] DNS + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 + STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1395 07/27/23 01:57:30.729 + STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 07/27/23 01:57:30.802 + STEP: creating service externalsvc in namespace services-1395 07/27/23 01:57:30.802 + STEP: creating replication controller externalsvc in namespace services-1395 07/27/23 01:57:30.848 + I0727 01:57:30.922745 20 runners.go:193] Created replication controller with name: externalsvc, namespace: services-1395, replica count: 2 + I0727 01:57:34.011093 20 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + STEP: changing the ClusterIP service to type=ExternalName 07/27/23 01:57:34.034 + Jul 27 01:57:34.145: INFO: Creating new exec pod + Jul 27 01:57:34.175: INFO: Waiting up to 5m0s for pod "execpodql4sf" in namespace "services-1395" to be "running" + Jul 27 01:57:34.220: INFO: Pod "execpodql4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 27.244322ms + Jul 27 01:57:36.307: INFO: Pod "execpodql4sf": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.114627355s + Jul 27 01:57:36.307: INFO: Pod "execpodql4sf" satisfied condition "running" + Jul 27 01:57:36.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-1395 exec execpodql4sf -- /bin/sh -x -c nslookup clusterip-service.services-1395.svc.cluster.local' + Jul 27 01:57:36.685: INFO: stderr: "+ nslookup clusterip-service.services-1395.svc.cluster.local\n" + Jul 27 01:57:36.685: INFO: stdout: "Server:\t\t172.21.0.10\nAddress:\t172.21.0.10#53\n\nclusterip-service.services-1395.svc.cluster.local\tcanonical name = externalsvc.services-1395.svc.cluster.local.\nName:\texternalsvc.services-1395.svc.cluster.local\nAddress: 172.21.39.30\n\n" + STEP: deleting ReplicationController externalsvc in namespace services-1395, will wait for the garbage collector to delete the pods 07/27/23 01:57:36.685 + Jul 27 01:57:36.778: INFO: Deleting ReplicationController externalsvc took: 24.326793ms + Jul 27 01:57:36.880: INFO: Terminating ReplicationController externalsvc pods took: 101.49253ms + Jul 27 01:57:39.736: INFO: Cleaning up the ClusterIP to ExternalName test service + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:15:57.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] DNS + Jul 27 01:57:39.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "dns-9737" for this suite. 06/12/23 21:15:57.37 + STEP: Destroying namespace "services-1395" for this suite. 
07/27/23 01:57:39.789 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Projected downwardAPI - should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:249 -[BeforeEach] [sig-storage] Projected downwardAPI +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:99 +[BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:15:57.391 -Jun 12 21:15:57.391: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:15:57.394 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:57.439 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:57.451 -[BeforeEach] [sig-storage] Projected downwardAPI +STEP: Creating a kubernetes client 07/27/23 01:57:39.818 +Jul 27 01:57:39.818: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 01:57:39.819 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:39.865 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:39.879 +[BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 -[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:249 -STEP: Creating a pod to test downward API volume plugin 06/12/23 21:15:57.463 -Jun 12 21:15:57.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0" in namespace "projected-4328" to be "Succeeded or Failed" -Jun 12 21:15:57.520: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.299986ms -Jun 12 21:15:59.531: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030428026s -Jun 12 21:16:01.541: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040198864s -Jun 12 21:16:03.532: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031371971s -Jun 12 21:16:05.533: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.032155763s -STEP: Saw pod success 06/12/23 21:16:05.533 -Jun 12 21:16:05.533: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0" satisfied condition "Succeeded or Failed" -Jun 12 21:16:05.543: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0 container client-container: -STEP: delete the pod 06/12/23 21:16:05.565 -Jun 12 21:16:05.589: INFO: Waiting for pod downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0 to disappear -Jun 12 21:16:05.598: INFO: Pod downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0 no longer exists -[AfterEach] [sig-storage] Projected downwardAPI +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:99 +STEP: Creating configMap with name projected-configmap-test-volume-map-c848f922-095b-4328-bdc6-969c2c576ea8 07/27/23 01:57:39.9 +STEP: Creating a pod to test consume configMaps 07/27/23 01:57:39.919 +Jul 27 01:57:39.945: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31" in namespace "projected-5029" to be "Succeeded or Failed" +Jul 27 01:57:39.957: INFO: Pod "pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31": Phase="Pending", Reason="", readiness=false. Elapsed: 11.534988ms +Jul 27 01:57:41.967: INFO: Pod "pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021681355s +Jul 27 01:57:43.967: INFO: Pod "pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022002274s +STEP: Saw pod success 07/27/23 01:57:43.967 +Jul 27 01:57:43.968: INFO: Pod "pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31" satisfied condition "Succeeded or Failed" +Jul 27 01:57:43.976: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31 container agnhost-container: +STEP: delete the pod 07/27/23 01:57:44 +Jul 27 01:57:44.020: INFO: Waiting for pod pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31 to disappear +Jul 27 01:57:44.036: INFO: Pod pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31 no longer exists +[AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 -Jun 12 21:16:05.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +Jul 27 01:57:44.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 -STEP: Destroying namespace "projected-4328" for this suite. 06/12/23 21:16:05.611 +STEP: Destroying namespace "projected-5029" for this suite. 
07/27/23 01:57:44.048 ------------------------------ -• [SLOW TEST] [8.236 seconds] -[sig-storage] Projected downwardAPI +• [4.253 seconds] +[sig-storage] Projected configMap test/e2e/common/storage/framework.go:23 - should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:249 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected downwardAPI - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:15:57.391 - Jun 12 21:15:57.391: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:15:57.394 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:15:57.439 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:15:57.451 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 - [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:249 - STEP: Creating a pod to test downward API volume plugin 06/12/23 21:15:57.463 - Jun 12 21:15:57.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0" in namespace "projected-4328" to be "Succeeded or Failed" - Jun 12 21:15:57.520: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.299986ms - Jun 12 21:15:59.531: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030428026s - Jun 12 21:16:01.541: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040198864s - Jun 12 21:16:03.532: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031371971s - Jun 12 21:16:05.533: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.032155763s - STEP: Saw pod success 06/12/23 21:16:05.533 - Jun 12 21:16:05.533: INFO: Pod "downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0" satisfied condition "Succeeded or Failed" - Jun 12 21:16:05.543: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0 container client-container: - STEP: delete the pod 06/12/23 21:16:05.565 - Jun 12 21:16:05.589: INFO: Waiting for pod downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0 to disappear - Jun 12 21:16:05.598: INFO: Pod downwardapi-volume-3f9e3ebc-0510-43f3-a737-63f01f0174f0 no longer exists - [AfterEach] [sig-storage] Projected downwardAPI - test/e2e/framework/node/init/init.go:32 - Jun 12 21:16:05.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI - tear down framework | framework.go:193 - STEP: Destroying namespace "projected-4328" for this suite. 
06/12/23 21:16:05.611 - << End Captured GinkgoWriter Output ------------------------------- -SS ------------------------------- -[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch - watch on custom resource definition objects [Conformance] - test/e2e/apimachinery/crd_watch.go:51 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:16:05.627 -Jun 12 21:16:05.628: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-watch 06/12/23 21:16:05.63 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:16:05.678 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:16:05.689 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] - test/e2e/framework/metrics/init/init.go:31 -[It] watch on custom resource definition objects [Conformance] - test/e2e/apimachinery/crd_watch.go:51 -Jun 12 21:16:05.711: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Creating first CR 06/12/23 21:16:08.367 -Jun 12 21:16:08.385: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:08Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:08Z]] name:name1 resourceVersion:96649 uid:7ea4e8f1-25df-48ec-a9ec-03684d572415] num:map[num1:9223372036854775807 num2:1000000]]} -STEP: Creating second CR 06/12/23 21:16:18.386 -Jun 12 21:16:18.406: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:18Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:18Z]] name:name2 resourceVersion:96717 uid:21c3451c-13ce-4fb7-b452-83f06ade366f] num:map[num1:9223372036854775807 num2:1000000]]} -STEP: Modifying first CR 06/12/23 21:16:28.409 -Jun 12 21:16:28.458: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:08Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:28Z]] name:name1 resourceVersion:96760 uid:7ea4e8f1-25df-48ec-a9ec-03684d572415] num:map[num1:9223372036854775807 num2:1000000]]} -STEP: Modifying second CR 06/12/23 21:16:38.459 -Jun 12 21:16:38.476: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:18Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:38Z]] name:name2 resourceVersion:96811 
uid:21c3451c-13ce-4fb7-b452-83f06ade366f] num:map[num1:9223372036854775807 num2:1000000]]} -STEP: Deleting first CR 06/12/23 21:16:48.484 -Jun 12 21:16:48.582: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:08Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:28Z]] name:name1 resourceVersion:96869 uid:7ea4e8f1-25df-48ec-a9ec-03684d572415] num:map[num1:9223372036854775807 num2:1000000]]} -STEP: Deleting second CR 06/12/23 21:16:58.586 -Jun 12 21:16:58.644: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:18Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:38Z]] name:name2 resourceVersion:96919 uid:21c3451c-13ce-4fb7-b452-83f06ade366f] num:map[num1:9223372036854775807 num2:1000000]]} -[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] - test/e2e/framework/node/init/init.go:32 -Jun 12 21:17:09.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] - tear down framework | framework.go:193 -STEP: Destroying namespace "crd-watch-9454" for this suite. 
06/12/23 21:17:09.215 ------------------------------- -• [SLOW TEST] [63.625 seconds] -[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - CustomResourceDefinition Watch - test/e2e/apimachinery/crd_watch.go:44 - watch on custom resource definition objects [Conformance] - test/e2e/apimachinery/crd_watch.go:51 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:99 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + [BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:16:05.627 - Jun 12 21:16:05.628: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-watch 06/12/23 21:16:05.63 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:16:05.678 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:16:05.689 - [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:57:39.818 + Jul 27 01:57:39.818: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 01:57:39.819 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:39.865 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:39.879 + [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 - [It] watch on custom resource definition objects [Conformance] - test/e2e/apimachinery/crd_watch.go:51 - Jun 12 21:16:05.711: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Creating first CR 06/12/23 21:16:08.367 - Jun 12 21:16:08.385: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:08Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:08Z]] name:name1 resourceVersion:96649 uid:7ea4e8f1-25df-48ec-a9ec-03684d572415] num:map[num1:9223372036854775807 num2:1000000]]} - STEP: Creating second CR 06/12/23 21:16:18.386 - Jun 12 21:16:18.406: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:18Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:18Z]] name:name2 resourceVersion:96717 uid:21c3451c-13ce-4fb7-b452-83f06ade366f] num:map[num1:9223372036854775807 num2:1000000]]} - STEP: Modifying first CR 06/12/23 21:16:28.409 - Jun 12 21:16:28.458: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:08Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] 
f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:28Z]] name:name1 resourceVersion:96760 uid:7ea4e8f1-25df-48ec-a9ec-03684d572415] num:map[num1:9223372036854775807 num2:1000000]]} - STEP: Modifying second CR 06/12/23 21:16:38.459 - Jun 12 21:16:38.476: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:18Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:38Z]] name:name2 resourceVersion:96811 uid:21c3451c-13ce-4fb7-b452-83f06ade366f] num:map[num1:9223372036854775807 num2:1000000]]} - STEP: Deleting first CR 06/12/23 21:16:48.484 - Jun 12 21:16:48.582: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:08Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:28Z]] name:name1 resourceVersion:96869 uid:7ea4e8f1-25df-48ec-a9ec-03684d572415] num:map[num1:9223372036854775807 num2:1000000]]} - STEP: Deleting second CR 06/12/23 21:16:58.586 - Jun 12 21:16:58.644: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-06-12T21:16:18Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-06-12T21:16:38Z]] name:name2 resourceVersion:96919 uid:21c3451c-13ce-4fb7-b452-83f06ade366f] num:map[num1:9223372036854775807 num2:1000000]]} - [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:99 + STEP: Creating configMap with name projected-configmap-test-volume-map-c848f922-095b-4328-bdc6-969c2c576ea8 07/27/23 01:57:39.9 + STEP: Creating a pod to test consume configMaps 07/27/23 01:57:39.919 + Jul 27 01:57:39.945: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31" in namespace "projected-5029" to be "Succeeded or Failed" + Jul 27 01:57:39.957: INFO: Pod "pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31": Phase="Pending", Reason="", readiness=false. Elapsed: 11.534988ms + Jul 27 01:57:41.967: INFO: Pod "pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021681355s + Jul 27 01:57:43.967: INFO: Pod "pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022002274s + STEP: Saw pod success 07/27/23 01:57:43.967 + Jul 27 01:57:43.968: INFO: Pod "pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31" satisfied condition "Succeeded or Failed" + Jul 27 01:57:43.976: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31 container agnhost-container: + STEP: delete the pod 07/27/23 01:57:44 + Jul 27 01:57:44.020: INFO: Waiting for pod pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31 to disappear + Jul 27 01:57:44.036: INFO: Pod pod-projected-configmaps-7a7f9c30-189e-4453-b63f-9a4a101a3c31 no longer exists + [AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 - Jun 12 21:17:09.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + Jul 27 01:57:44.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 - STEP: Destroying namespace "crd-watch-9454" for this suite. 06/12/23 21:17:09.215 + STEP: Destroying namespace "projected-5029" for this suite. 07/27/23 01:57:44.048 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume - should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:249 + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:17:09.254 -Jun 12 21:17:09.254: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 21:17:09.256 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:17:09.3 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:17:09.312 +STEP: Creating a kubernetes client 07/27/23 01:57:44.071 +Jul 27 01:57:44.072: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 01:57:44.073 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:44.114 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:44.123 [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:44 -[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:249 -STEP: Creating a pod to test downward API volume plugin 06/12/23 21:17:09.33 -Jun 12 21:17:09.372: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333" in 
namespace "downward-api-6550" to be "Succeeded or Failed" -Jun 12 21:17:09.430: INFO: Pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333": Phase="Pending", Reason="", readiness=false. Elapsed: 57.720834ms -Jun 12 21:17:11.439: INFO: Pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067352915s -Jun 12 21:17:13.440: INFO: Pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06776556s -Jun 12 21:17:15.440: INFO: Pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068335724s -STEP: Saw pod success 06/12/23 21:17:15.441 -Jun 12 21:17:15.441: INFO: Pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333" satisfied condition "Succeeded or Failed" -Jun 12 21:17:15.449: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333 container client-container: -STEP: delete the pod 06/12/23 21:17:15.467 -Jun 12 21:17:15.496: INFO: Waiting for pod downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333 to disappear -Jun 12 21:17:15.505: INFO: Pod downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333 no longer exists +[It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 +STEP: Creating a pod to test downward API volume plugin 07/27/23 01:57:44.132 +W0727 01:57:44.160899 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 01:57:44.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf" in namespace "downward-api-3961" to be "Succeeded or Failed" +Jul 27 01:57:44.172: INFO: Pod "downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.793766ms +Jul 27 01:57:46.188: INFO: Pod "downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026948781s +Jul 27 01:57:48.185: INFO: Pod "downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024250233s +STEP: Saw pod success 07/27/23 01:57:48.185 +Jul 27 01:57:48.185: INFO: Pod "downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf" satisfied condition "Succeeded or Failed" +Jul 27 01:57:48.194: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf container client-container: +STEP: delete the pod 07/27/23 01:57:48.215 +Jul 27 01:57:48.240: INFO: Waiting for pod downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf to disappear +Jul 27 01:57:48.247: INFO: Pod downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf no longer exists [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 21:17:15.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 01:57:48.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-6550" for this suite. 06/12/23 21:17:15.518 +STEP: Destroying namespace "downward-api-3961" for this suite. 07/27/23 01:57:48.259 ------------------------------ -• [SLOW TEST] [6.279 seconds] +• [4.211 seconds] [sig-storage] Downward API volume test/e2e/common/storage/framework.go:23 - should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:249 + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:17:09.254 - Jun 12 21:17:09.254: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 21:17:09.256 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:17:09.3 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:17:09.312 + STEP: Creating a kubernetes client 07/27/23 01:57:44.071 + Jul 27 01:57:44.072: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 01:57:44.073 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:44.114 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:44.123 [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:44 - [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:249 - STEP: Creating a pod to test downward API volume plugin 06/12/23 21:17:09.33 - Jun 12 21:17:09.372: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333" in namespace "downward-api-6550" to be "Succeeded or Failed" - Jun 12 21:17:09.430: INFO: Pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.720834ms - Jun 12 21:17:11.439: INFO: Pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067352915s - Jun 12 21:17:13.440: INFO: Pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06776556s - Jun 12 21:17:15.440: INFO: Pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068335724s - STEP: Saw pod success 06/12/23 21:17:15.441 - Jun 12 21:17:15.441: INFO: Pod "downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333" satisfied condition "Succeeded or Failed" - Jun 12 21:17:15.449: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333 container client-container: - STEP: delete the pod 06/12/23 21:17:15.467 - Jun 12 21:17:15.496: INFO: Waiting for pod downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333 to disappear - Jun 12 21:17:15.505: INFO: Pod downwardapi-volume-6304c4c2-7b36-4077-b3dc-d3ee8ee47333 no longer exists + [It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 + STEP: Creating a pod to test downward API volume plugin 07/27/23 01:57:44.132 + W0727 01:57:44.160899 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 01:57:44.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf" in namespace "downward-api-3961" to be "Succeeded or Failed" + Jul 27 01:57:44.172: INFO: Pod "downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.793766ms + Jul 27 01:57:46.188: INFO: Pod "downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026948781s + Jul 27 01:57:48.185: INFO: Pod "downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024250233s + STEP: Saw pod success 07/27/23 01:57:48.185 + Jul 27 01:57:48.185: INFO: Pod "downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf" satisfied condition "Succeeded or Failed" + Jul 27 01:57:48.194: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf container client-container: + STEP: delete the pod 07/27/23 01:57:48.215 + Jul 27 01:57:48.240: INFO: Waiting for pod downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf to disappear + Jul 27 01:57:48.247: INFO: Pod downwardapi-volume-df8ed24e-8d66-4093-9d34-c99e626b90cf no longer exists [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 - Jun 12 21:17:15.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 01:57:48.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-6550" for this suite. 06/12/23 21:17:15.518 + STEP: Destroying namespace "downward-api-3961" for this suite. 07/27/23 01:57:48.259 << End Captured GinkgoWriter Output ------------------------------ -[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] - should perform canary updates and phased rolling updates of template modifications [Conformance] - test/e2e/apps/statefulset.go:317 -[BeforeEach] [sig-apps] StatefulSet +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 +[BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:17:15.537 -Jun 12 21:17:15.538: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename statefulset 06/12/23 21:17:15.54 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:17:15.584 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:17:15.595 -[BeforeEach] [sig-apps] StatefulSet +STEP: Creating a kubernetes client 07/27/23 01:57:48.282 +Jul 27 01:57:48.283: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename gc 07/27/23 01:57:48.284 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:48.331 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:48.342 +[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 -[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 -STEP: Creating service test in namespace statefulset-5786 06/12/23 21:17:15.61 -[It] should perform canary updates and phased rolling updates of template modifications [Conformance] - test/e2e/apps/statefulset.go:317 -STEP: Creating a new StatefulSet 06/12/23 21:17:15.633 -Jun 12 21:17:15.664: INFO: Found 0 stateful pods, waiting for 3 -Jun 12 21:17:25.675: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 21:17:25.675: INFO: Waiting for pod ss2-1 to enter Running - 
Ready=true, currently Running - Ready=true -Jun 12 21:17:25.675: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true -STEP: Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 06/12/23 21:17:25.703 -Jun 12 21:17:25.744: INFO: Updating stateful set ss2 -STEP: Creating a new revision 06/12/23 21:17:25.744 -STEP: Not applying an update when the partition is greater than the number of replicas 06/12/23 21:17:35.786 -STEP: Performing a canary update 06/12/23 21:17:35.786 -Jun 12 21:17:35.826: INFO: Updating stateful set ss2 -Jun 12 21:17:35.853: INFO: Waiting for Pod statefulset-5786/ss2-2 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 -STEP: Restoring Pods to the correct revision when they are deleted 06/12/23 21:17:45.878 -Jun 12 21:17:46.018: INFO: Found 2 stateful pods, waiting for 3 -Jun 12 21:17:56.042: INFO: Found 2 stateful pods, waiting for 3 -Jun 12 21:18:06.039: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 21:18:06.039: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 21:18:06.039: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false -Jun 12 21:18:16.029: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 21:18:16.029: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 21:18:16.029: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true -STEP: Performing a phased rolling update 06/12/23 21:18:16.048 -Jun 12 21:18:16.088: INFO: Updating stateful set ss2 -Jun 12 21:18:16.115: INFO: Waiting for Pod statefulset-5786/ss2-1 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 -Jun 12 21:18:26.180: INFO: Updating stateful set ss2 -Jun 12 21:18:26.202: INFO: Waiting for StatefulSet statefulset-5786/ss2 to complete update -Jun 12 21:18:26.202: INFO: Waiting for Pod statefulset-5786/ss2-0 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 -Jun 12 21:18:36.244: INFO: Waiting for StatefulSet statefulset-5786/ss2 to complete update -Jun 12 21:18:36.244: INFO: Waiting for Pod statefulset-5786/ss2-0 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 -Jun 12 21:18:46.226: INFO: Waiting for StatefulSet statefulset-5786/ss2 to complete update -Jun 12 21:18:56.227: INFO: Waiting for StatefulSet statefulset-5786/ss2 to complete update -[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 -Jun 12 21:19:06.228: INFO: Deleting all statefulset in ns statefulset-5786 -Jun 12 21:19:06.239: INFO: Scaling statefulset ss2 to 0 -Jun 12 21:19:16.299: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 21:19:16.311: INFO: Deleting statefulset ss2 -[AfterEach] [sig-apps] StatefulSet +[It] should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 +STEP: create the deployment 07/27/23 01:57:48.351 +STEP: Wait for the Deployment to create new ReplicaSet 07/27/23 01:57:48.368 +STEP: delete the deployment 07/27/23 01:57:48.887 +STEP: wait for all rs to be garbage collected 07/27/23 01:57:48.906 +STEP: expected 0 rs, got 1 rs 07/27/23 01:57:48.923 +STEP: expected 0 pods, got 2 pods 07/27/23 01:57:48.931 +STEP: Gathering metrics 07/27/23 01:57:49.458 +W0727 
01:57:49.485976 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +Jul 27 01:57:49.486: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 -Jun 12 21:19:16.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] StatefulSet +Jul 27 01:57:49.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 -STEP: Destroying namespace "statefulset-5786" for this suite. 06/12/23 21:19:16.382 +STEP: Destroying namespace "gc-6340" for this suite. 
07/27/23 01:57:49.503 ------------------------------ -• [SLOW TEST] [120.864 seconds] -[sig-apps] StatefulSet -test/e2e/apps/framework.go:23 - Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:103 - should perform canary updates and phased rolling updates of template modifications [Conformance] - test/e2e/apps/statefulset.go:317 +• [1.247 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] StatefulSet + [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:17:15.537 - Jun 12 21:17:15.538: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename statefulset 06/12/23 21:17:15.54 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:17:15.584 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:17:15.595 - [BeforeEach] [sig-apps] StatefulSet + STEP: Creating a kubernetes client 07/27/23 01:57:48.282 + Jul 27 01:57:48.283: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename gc 07/27/23 01:57:48.284 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:48.331 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:48.342 + [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 - [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 - STEP: Creating service test in namespace statefulset-5786 06/12/23 21:17:15.61 - [It] should perform canary updates and phased rolling updates of template modifications [Conformance] - test/e2e/apps/statefulset.go:317 - STEP: Creating a new StatefulSet 06/12/23 21:17:15.633 - Jun 12 21:17:15.664: INFO: Found 0 stateful pods, waiting for 3 - Jun 12 21:17:25.675: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:17:25.675: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:17:25.675: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true - STEP: Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 06/12/23 21:17:25.703 - Jun 12 21:17:25.744: INFO: Updating stateful set ss2 - STEP: Creating a new revision 06/12/23 21:17:25.744 - STEP: Not applying an update when the partition is greater than the number of replicas 06/12/23 21:17:35.786 - STEP: Performing a canary update 06/12/23 21:17:35.786 - Jun 12 21:17:35.826: INFO: Updating stateful set ss2 - Jun 12 21:17:35.853: INFO: Waiting for Pod statefulset-5786/ss2-2 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 - STEP: Restoring Pods to the correct revision when they are deleted 06/12/23 21:17:45.878 - Jun 12 21:17:46.018: INFO: Found 2 stateful pods, waiting for 3 - Jun 12 21:17:56.042: INFO: Found 2 stateful pods, waiting for 3 - Jun 12 21:18:06.039: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:18:06.039: INFO: Waiting for 
pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:18:06.039: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false - Jun 12 21:18:16.029: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:18:16.029: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:18:16.029: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true - STEP: Performing a phased rolling update 06/12/23 21:18:16.048 - Jun 12 21:18:16.088: INFO: Updating stateful set ss2 - Jun 12 21:18:16.115: INFO: Waiting for Pod statefulset-5786/ss2-1 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 - Jun 12 21:18:26.180: INFO: Updating stateful set ss2 - Jun 12 21:18:26.202: INFO: Waiting for StatefulSet statefulset-5786/ss2 to complete update - Jun 12 21:18:26.202: INFO: Waiting for Pod statefulset-5786/ss2-0 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 - Jun 12 21:18:36.244: INFO: Waiting for StatefulSet statefulset-5786/ss2 to complete update - Jun 12 21:18:36.244: INFO: Waiting for Pod statefulset-5786/ss2-0 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 - Jun 12 21:18:46.226: INFO: Waiting for StatefulSet statefulset-5786/ss2 to complete update - Jun 12 21:18:56.227: INFO: Waiting for StatefulSet statefulset-5786/ss2 to complete update - [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 - Jun 12 21:19:06.228: INFO: Deleting all statefulset in ns statefulset-5786 - Jun 12 21:19:06.239: INFO: Scaling statefulset ss2 to 0 - Jun 12 21:19:16.299: INFO: Waiting for statefulset status.replicas updated to 0 - Jun 12 21:19:16.311: INFO: Deleting statefulset ss2 - [AfterEach] [sig-apps] StatefulSet - test/e2e/framework/node/init/init.go:32 - Jun 12 21:19:16.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] StatefulSet - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] StatefulSet - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] StatefulSet - tear down framework | framework.go:193 - STEP: Destroying namespace "statefulset-5786" for this suite. 06/12/23 21:19:16.382 - << End Captured GinkgoWriter Output ------------------------------- -SSS ------------------------------- -[sig-apps] Daemon set [Serial] - should retry creating failed daemon pods [Conformance] - test/e2e/apps/daemon_set.go:294 -[BeforeEach] [sig-apps] Daemon set [Serial] - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:19:16.402 -Jun 12 21:19:16.402: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename daemonsets 06/12/23 21:19:16.404 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:19:16.461 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:19:16.476 -[BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 -[It] should retry creating failed daemon pods [Conformance] - test/e2e/apps/daemon_set.go:294 -STEP: Creating a simple DaemonSet "daemon-set" 06/12/23 21:19:16.573 -STEP: Check that daemon pods launch on every node of the cluster. 
06/12/23 21:19:16.59 -Jun 12 21:19:16.611: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:19:16.611: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:19:17.673: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:19:17.673: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:19:18.634: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:19:18.634: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:19:19.633: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 -Jun 12 21:19:19.633: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set -STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 06/12/23 21:19:19.645 -Jun 12 21:19:19.700: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 21:19:19.700: INFO: Node 10.138.75.70 is running 0 daemon pod, expected 1 -Jun 12 21:19:20.723: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 21:19:20.723: INFO: Node 10.138.75.70 is running 0 daemon pod, expected 1 -Jun 12 21:19:21.722: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 21:19:21.722: INFO: Node 10.138.75.70 is running 0 daemon pod, expected 1 -Jun 12 21:19:22.725: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 -Jun 12 21:19:22.725: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set -STEP: Wait for the failed daemon pod to be completely deleted. 06/12/23 21:19:22.725 -[AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 -STEP: Deleting DaemonSet "daemon-set" 06/12/23 21:19:22.742 -STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2228, will wait for the garbage collector to delete the pods 06/12/23 21:19:22.743 -Jun 12 21:19:22.821: INFO: Deleting DaemonSet.extensions daemon-set took: 17.598032ms -Jun 12 21:19:22.921: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.67123ms -Jun 12 21:19:26.830: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:19:26.830: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set -Jun 12 21:19:26.839: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"98291"},"items":null} - -Jun 12 21:19:26.847: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"98291"},"items":null} - -[AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/framework/node/init/init.go:32 -Jun 12 21:19:26.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] - tear down framework | framework.go:193 -STEP: Destroying namespace "daemonsets-2228" for this suite. 
06/12/23 21:19:26.898 ------------------------------- -• [SLOW TEST] [10.509 seconds] -[sig-apps] Daemon set [Serial] -test/e2e/apps/framework.go:23 - should retry creating failed daemon pods [Conformance] - test/e2e/apps/daemon_set.go:294 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Daemon set [Serial] - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:19:16.402 - Jun 12 21:19:16.402: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename daemonsets 06/12/23 21:19:16.404 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:19:16.461 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:19:16.476 - [BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 - [It] should retry creating failed daemon pods [Conformance] - test/e2e/apps/daemon_set.go:294 - STEP: Creating a simple DaemonSet "daemon-set" 06/12/23 21:19:16.573 - STEP: Check that daemon pods launch on every node of the cluster. 06/12/23 21:19:16.59 - Jun 12 21:19:16.611: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:19:16.611: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:19:17.673: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:19:17.673: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:19:18.634: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:19:18.634: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:19:19.633: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 - Jun 12 21:19:19.633: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set - STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 06/12/23 21:19:19.645 - Jun 12 21:19:19.700: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 21:19:19.700: INFO: Node 10.138.75.70 is running 0 daemon pod, expected 1 - Jun 12 21:19:20.723: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 21:19:20.723: INFO: Node 10.138.75.70 is running 0 daemon pod, expected 1 - Jun 12 21:19:21.722: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 21:19:21.722: INFO: Node 10.138.75.70 is running 0 daemon pod, expected 1 - Jun 12 21:19:22.725: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 - Jun 12 21:19:22.725: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set - STEP: Wait for the failed daemon pod to be completely deleted. 
06/12/23 21:19:22.725 - [AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 - STEP: Deleting DaemonSet "daemon-set" 06/12/23 21:19:22.742 - STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2228, will wait for the garbage collector to delete the pods 06/12/23 21:19:22.743 - Jun 12 21:19:22.821: INFO: Deleting DaemonSet.extensions daemon-set took: 17.598032ms - Jun 12 21:19:22.921: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.67123ms - Jun 12 21:19:26.830: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:19:26.830: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set - Jun 12 21:19:26.839: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"98291"},"items":null} - - Jun 12 21:19:26.847: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"98291"},"items":null} + [It] should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 + STEP: create the deployment 07/27/23 01:57:48.351 + STEP: Wait for the Deployment to create new ReplicaSet 07/27/23 01:57:48.368 + STEP: delete the deployment 07/27/23 01:57:48.887 + STEP: wait for all rs to be garbage collected 07/27/23 01:57:48.906 + STEP: expected 0 rs, got 1 rs 07/27/23 01:57:48.923 + STEP: expected 0 pods, got 2 pods 07/27/23 01:57:48.931 + STEP: Gathering metrics 07/27/23 01:57:49.458 + W0727 01:57:49.485976 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. + Jul 27 01:57:49.486: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: - [AfterEach] [sig-apps] Daemon set [Serial] + [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 - Jun 12 21:19:26.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + Jul 27 01:57:49.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 - STEP: Destroying namespace "daemonsets-2228" for this suite. 06/12/23 21:19:26.898 + STEP: Destroying namespace "gc-6340" for this suite. 
07/27/23 01:57:49.503 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - patching/updating a mutating webhook should work [Conformance] - test/e2e/apimachinery/webhook.go:508 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-apps] Job + should delete a job [Conformance] + test/e2e/apps/job.go:481 +[BeforeEach] [sig-apps] Job set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:19:26.915 -Jun 12 21:19:26.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 21:19:26.919 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:19:26.961 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:19:26.972 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 01:57:49.53 +Jul 27 01:57:49.530: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename job 07/27/23 01:57:49.531 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:49.584 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:49.594 +[BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 21:19:27.046 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:19:27.707 -STEP: Deploying the webhook pod 06/12/23 21:19:27.745 -STEP: Wait for the deployment to be ready 06/12/23 21:19:27.774 -Jun 12 21:19:27.796: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 21:19:29.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 19, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 19, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 19, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 19, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 21:19:31.865 -STEP: Verifying the service has paired with the endpoint 06/12/23 21:19:31.904 -Jun 12 21:19:32.906: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] patching/updating a mutating webhook should work [Conformance] - test/e2e/apimachinery/webhook.go:508 -STEP: Creating a mutating webhook configuration 06/12/23 21:19:32.915 -STEP: Updating a mutating webhook configuration's rules to not include the create operation 06/12/23 21:19:32.973 -STEP: Creating a configMap that should not be mutated 06/12/23 21:19:32.989 -STEP: Patching a mutating webhook configuration's rules to include the create operation 06/12/23 
21:19:33.021 -STEP: Creating a configMap that should be mutated 06/12/23 21:19:33.046 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[It] should delete a job [Conformance] + test/e2e/apps/job.go:481 +STEP: Creating a job 07/27/23 01:57:49.608 +W0727 01:57:49.627994 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Ensuring active pods == parallelism 07/27/23 01:57:49.628 +STEP: delete a job 07/27/23 01:57:53.64 +STEP: deleting Job.batch foo in namespace job-7775, will wait for the garbage collector to delete the pods 07/27/23 01:57:53.64 +Jul 27 01:57:53.728: INFO: Deleting Job.batch foo took: 27.22267ms +Jul 27 01:57:53.828: INFO: Terminating Job.batch foo pods took: 100.81943ms +STEP: Ensuring job was deleted 07/27/23 01:58:25.029 +[AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 -Jun 12 21:19:33.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 01:58:25.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-6735" for this suite. 06/12/23 21:19:33.326 -STEP: Destroying namespace "webhook-6735-markers" for this suite. 06/12/23 21:19:33.341 +STEP: Destroying namespace "job-7775" for this suite. 
07/27/23 01:58:25.055 ------------------------------ -• [SLOW TEST] [6.448 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - patching/updating a mutating webhook should work [Conformance] - test/e2e/apimachinery/webhook.go:508 +• [SLOW TEST] [35.548 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should delete a job [Conformance] + test/e2e/apps/job.go:481 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-apps] Job set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:19:26.915 - Jun 12 21:19:26.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 21:19:26.919 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:19:26.961 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:19:26.972 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 01:57:49.53 + Jul 27 01:57:49.530: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename job 07/27/23 01:57:49.531 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:57:49.584 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:57:49.594 + [BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 21:19:27.046 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:19:27.707 - STEP: Deploying the webhook pod 06/12/23 21:19:27.745 - STEP: Wait for the deployment to be ready 06/12/23 21:19:27.774 - Jun 12 21:19:27.796: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 21:19:29.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 19, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 19, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 19, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 19, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 21:19:31.865 - STEP: Verifying the service has paired with the endpoint 06/12/23 21:19:31.904 - Jun 12 21:19:32.906: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] patching/updating a mutating webhook should work [Conformance] - test/e2e/apimachinery/webhook.go:508 - STEP: Creating a mutating webhook configuration 06/12/23 21:19:32.915 - STEP: Updating a mutating webhook configuration's rules to not include the create operation 06/12/23 21:19:32.973 - STEP: Creating a configMap that should not be mutated 06/12/23 21:19:32.989 - STEP: 
Patching a mutating webhook configuration's rules to include the create operation 06/12/23 21:19:33.021 - STEP: Creating a configMap that should be mutated 06/12/23 21:19:33.046 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [It] should delete a job [Conformance] + test/e2e/apps/job.go:481 + STEP: Creating a job 07/27/23 01:57:49.608 + W0727 01:57:49.627994 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Ensuring active pods == parallelism 07/27/23 01:57:49.628 + STEP: delete a job 07/27/23 01:57:53.64 + STEP: deleting Job.batch foo in namespace job-7775, will wait for the garbage collector to delete the pods 07/27/23 01:57:53.64 + Jul 27 01:57:53.728: INFO: Deleting Job.batch foo took: 27.22267ms + Jul 27 01:57:53.828: INFO: Terminating Job.batch foo pods took: 100.81943ms + STEP: Ensuring job was deleted 07/27/23 01:58:25.029 + [AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 - Jun 12 21:19:33.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 01:58:25.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-6735" for this suite. 06/12/23 21:19:33.326 - STEP: Destroying namespace "webhook-6735-markers" for this suite. 06/12/23 21:19:33.341 + STEP: Destroying namespace "job-7775" for this suite. 
07/27/23 01:58:25.055 << End Captured GinkgoWriter Output ------------------------------ -SSSSS ------------------------------- -[sig-node] Probing container - should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:169 -[BeforeEach] [sig-node] Probing container +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 +[BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:19:33.365 -Jun 12 21:19:33.366: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-probe 06/12/23 21:19:33.367 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:19:33.446 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:19:33.457 -[BeforeEach] [sig-node] Probing container +STEP: Creating a kubernetes client 07/27/23 01:58:25.078 +Jul 27 01:58:25.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename deployment 07/27/23 01:58:25.079 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:58:25.122 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:58:25.132 +[BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 -[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:169 -STEP: Creating pod liveness-19b52461-39ea-483d-9c88-98999e91169e in namespace container-probe-8802 06/12/23 21:19:33.468 -Jun 12 21:19:33.498: INFO: Waiting up to 5m0s for pod "liveness-19b52461-39ea-483d-9c88-98999e91169e" in namespace "container-probe-8802" to be "not pending" -Jun 12 21:19:33.514: INFO: Pod "liveness-19b52461-39ea-483d-9c88-98999e91169e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.050084ms -Jun 12 21:19:35.534: INFO: Pod "liveness-19b52461-39ea-483d-9c88-98999e91169e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035188239s -Jun 12 21:19:37.543: INFO: Pod "liveness-19b52461-39ea-483d-9c88-98999e91169e": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.04489372s -Jun 12 21:19:37.543: INFO: Pod "liveness-19b52461-39ea-483d-9c88-98999e91169e" satisfied condition "not pending" -Jun 12 21:19:37.544: INFO: Started pod liveness-19b52461-39ea-483d-9c88-98999e91169e in namespace container-probe-8802 -STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 21:19:37.544 -Jun 12 21:19:37.609: INFO: Initial restart count of pod liveness-19b52461-39ea-483d-9c88-98999e91169e is 0 -Jun 12 21:19:55.872: INFO: Restart count of pod container-probe-8802/liveness-19b52461-39ea-483d-9c88-98999e91169e is now 1 (18.263010026s elapsed) -STEP: deleting the pod 06/12/23 21:19:55.872 -[AfterEach] [sig-node] Probing container +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 +Jul 27 01:58:25.145: INFO: Creating simple deployment test-new-deployment +W0727 01:58:25.161808 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 01:58:25.194: INFO: deployment "test-new-deployment" doesn't have the required revision set +STEP: getting scale subresource 07/27/23 01:58:27.224 +STEP: updating a scale subresource 07/27/23 01:58:27.231 +STEP: verifying the deployment Spec.Replicas was modified 07/27/23 01:58:27.245 +STEP: Patch a scale subresource 07/27/23 01:58:27.254 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jul 27 01:58:27.337: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-2732 d415b18d-f84e-440a-8950-0286fe058862 85155 3 2023-07-27 01:58:25 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-07-27 01:58:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:58:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} 
status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00066f668 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-7f5969cbc7" has successfully progressed.,LastUpdateTime:2023-07-27 01:58:26 +0000 UTC,LastTransitionTime:2023-07-27 01:58:25 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-07-27 01:58:27 +0000 UTC,LastTransitionTime:2023-07-27 01:58:27 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Jul 27 01:58:27.347: INFO: New ReplicaSet "test-new-deployment-7f5969cbc7" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-7f5969cbc7 deployment-2732 78ffbc18-c98d-4b75-8a87-1cdbfa9dd0f0 85158 3 2023-07-27 01:58:25 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment d415b18d-f84e-440a-8950-0286fe058862 0xc005639967 0xc005639968}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:58:26 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-07-27 01:58:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d415b18d-f84e-440a-8950-0286fe058862\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} 
}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0056399f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jul 27 01:58:27.357: INFO: Pod "test-new-deployment-7f5969cbc7-dlqk4" is not available: +&Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-dlqk4 test-new-deployment-7f5969cbc7- deployment-2732 eb168fa6-98f1-4d89-a0d0-366b2942f69c 85159 0 2023-07-27 01:58:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 78ffbc18-c98d-4b75-8a87-1cdbfa9dd0f0 0xc005639da7 0xc005639da8}] [] [{kube-controller-manager Update v1 2023-07-27 01:58:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78ffbc18-c98d-4b75-8a87-1cdbfa9dd0f0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x9wbr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x9wbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c45,c35,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-f4gnx,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:58:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 01:58:27.357: INFO: Pod "test-new-deployment-7f5969cbc7-z5586" is available: +&Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-z5586 test-new-deployment-7f5969cbc7- deployment-2732 de9c09ad-c2a6-446b-8cbf-fa2b021294f5 85148 0 2023-07-27 01:58:25 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:c34aa7e6e452ba014ca738c4cf98341a6e2dea4c4cf8fb0f43600aba041e5928 cni.projectcalico.org/podIP:172.17.225.50/32 cni.projectcalico.org/podIPs:172.17.225.50/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.50" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 78ffbc18-c98d-4b75-8a87-1cdbfa9dd0f0 0xc005639f87 0xc005639f88}] [] [{calico Update v1 2023-07-27 01:58:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-07-27 01:58:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78ffbc18-c98d-4b75-8a87-1cdbfa9dd0f0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-07-27 01:58:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 01:58:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.50\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6pjr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pjr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c45,c35,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:58:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:58:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:58:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:58:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.50,StartTime:2023-07-27 01:58:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 01:58:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://198f8fd45ca13eb98ef5946f9af6415a78879ad52d27d4101a85d904287c71ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 -Jun 12 21:19:55.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Probing container +Jul 27 01:58:27.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 -STEP: Destroying namespace "container-probe-8802" for this suite. 06/12/23 21:19:55.923 +STEP: Destroying namespace "deployment-2732" for this suite. 
07/27/23 01:58:27.37 ------------------------------ -• [SLOW TEST] [22.579 seconds] -[sig-node] Probing container -test/e2e/common/node/framework.go:23 - should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:169 +• [2.317 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Probing container + [BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:19:33.365 - Jun 12 21:19:33.366: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-probe 06/12/23 21:19:33.367 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:19:33.446 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:19:33.457 - [BeforeEach] [sig-node] Probing container + STEP: Creating a kubernetes client 07/27/23 01:58:25.078 + Jul 27 01:58:25.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename deployment 07/27/23 01:58:25.079 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:58:25.122 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:58:25.132 + [BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 - [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:169 - STEP: Creating pod liveness-19b52461-39ea-483d-9c88-98999e91169e in namespace container-probe-8802 06/12/23 21:19:33.468 - Jun 12 21:19:33.498: INFO: Waiting up to 5m0s for pod "liveness-19b52461-39ea-483d-9c88-98999e91169e" in namespace "container-probe-8802" to be "not pending" - Jun 12 21:19:33.514: INFO: Pod "liveness-19b52461-39ea-483d-9c88-98999e91169e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.050084ms - Jun 12 21:19:35.534: INFO: Pod "liveness-19b52461-39ea-483d-9c88-98999e91169e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035188239s - Jun 12 21:19:37.543: INFO: Pod "liveness-19b52461-39ea-483d-9c88-98999e91169e": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.04489372s - Jun 12 21:19:37.543: INFO: Pod "liveness-19b52461-39ea-483d-9c88-98999e91169e" satisfied condition "not pending" - Jun 12 21:19:37.544: INFO: Started pod liveness-19b52461-39ea-483d-9c88-98999e91169e in namespace container-probe-8802 - STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 21:19:37.544 - Jun 12 21:19:37.609: INFO: Initial restart count of pod liveness-19b52461-39ea-483d-9c88-98999e91169e is 0 - Jun 12 21:19:55.872: INFO: Restart count of pod container-probe-8802/liveness-19b52461-39ea-483d-9c88-98999e91169e is now 1 (18.263010026s elapsed) - STEP: deleting the pod 06/12/23 21:19:55.872 - [AfterEach] [sig-node] Probing container + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 + Jul 27 01:58:25.145: INFO: Creating simple deployment test-new-deployment + W0727 01:58:25.161808 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 01:58:25.194: INFO: deployment "test-new-deployment" doesn't have the required revision set + STEP: getting scale subresource 07/27/23 01:58:27.224 + STEP: updating a scale subresource 07/27/23 01:58:27.231 + STEP: verifying the deployment Spec.Replicas was modified 07/27/23 01:58:27.245 + STEP: Patch a scale subresource 07/27/23 01:58:27.254 + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jul 27 01:58:27.337: INFO: Deployment "test-new-deployment": + &Deployment{ObjectMeta:{test-new-deployment deployment-2732 d415b18d-f84e-440a-8950-0286fe058862 85155 3 2023-07-27 01:58:25 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-07-27 01:58:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 01:58:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} 
status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00066f668 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-7f5969cbc7" has successfully progressed.,LastUpdateTime:2023-07-27 01:58:26 +0000 UTC,LastTransitionTime:2023-07-27 01:58:25 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-07-27 01:58:27 +0000 UTC,LastTransitionTime:2023-07-27 01:58:27 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Jul 27 01:58:27.347: INFO: New ReplicaSet "test-new-deployment-7f5969cbc7" of Deployment "test-new-deployment": + &ReplicaSet{ObjectMeta:{test-new-deployment-7f5969cbc7 deployment-2732 78ffbc18-c98d-4b75-8a87-1cdbfa9dd0f0 85158 3 2023-07-27 01:58:25 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment d415b18d-f84e-440a-8950-0286fe058862 0xc005639967 0xc005639968}] [] [{kube-controller-manager Update apps/v1 2023-07-27 01:58:26 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-07-27 01:58:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d415b18d-f84e-440a-8950-0286fe058862\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} 
}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0056399f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Jul 27 01:58:27.357: INFO: Pod "test-new-deployment-7f5969cbc7-dlqk4" is not available: + &Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-dlqk4 test-new-deployment-7f5969cbc7- deployment-2732 eb168fa6-98f1-4d89-a0d0-366b2942f69c 85159 0 2023-07-27 01:58:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 78ffbc18-c98d-4b75-8a87-1cdbfa9dd0f0 0xc005639da7 0xc005639da8}] [] [{kube-controller-manager Update v1 2023-07-27 01:58:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78ffbc18-c98d-4b75-8a87-1cdbfa9dd0f0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x9wbr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x9wbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c45,c35,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-f4gnx,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:58:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 01:58:27.357: INFO: Pod "test-new-deployment-7f5969cbc7-z5586" is available: + &Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-z5586 test-new-deployment-7f5969cbc7- deployment-2732 de9c09ad-c2a6-446b-8cbf-fa2b021294f5 85148 0 2023-07-27 01:58:25 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:c34aa7e6e452ba014ca738c4cf98341a6e2dea4c4cf8fb0f43600aba041e5928 cni.projectcalico.org/podIP:172.17.225.50/32 cni.projectcalico.org/podIPs:172.17.225.50/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.50" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 78ffbc18-c98d-4b75-8a87-1cdbfa9dd0f0 0xc005639f87 0xc005639f88}] [] [{calico Update v1 2023-07-27 01:58:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-07-27 01:58:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78ffbc18-c98d-4b75-8a87-1cdbfa9dd0f0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-07-27 01:58:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 01:58:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.50\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6pjr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pjr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c45,c35,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:58:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:58:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:58:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 01:58:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.50,StartTime:2023-07-27 01:58:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 01:58:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://198f8fd45ca13eb98ef5946f9af6415a78879ad52d27d4101a85d904287c71ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 - Jun 12 21:19:55.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Probing container + Jul 27 01:58:27.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 - STEP: Destroying namespace "container-probe-8802" for this suite. 06/12/23 21:19:55.923 + STEP: Destroying namespace "deployment-2732" for this suite. 
07/27/23 01:58:27.37 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +SSSSS ------------------------------ -[sig-apps] Daemon set [Serial] - should verify changes to a daemon set status [Conformance] - test/e2e/apps/daemon_set.go:862 -[BeforeEach] [sig-apps] Daemon set [Serial] +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 +[BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:19:55.946 -Jun 12 21:19:55.947: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename daemonsets 06/12/23 21:19:55.948 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:19:55.988 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:19:56.006 -[BeforeEach] [sig-apps] Daemon set [Serial] +STEP: Creating a kubernetes client 07/27/23 01:58:27.396 +Jul 27 01:58:27.396: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename gc 07/27/23 01:58:27.396 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:58:27.45 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:58:27.459 +[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 -[It] should verify changes to a daemon set status [Conformance] - test/e2e/apps/daemon_set.go:862 -STEP: Creating simple DaemonSet "daemon-set" 06/12/23 21:19:56.112 -STEP: Check that daemon pods launch on every node of the cluster. 
06/12/23 21:19:56.128 -Jun 12 21:19:56.146: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:19:56.146: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:19:57.189: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:19:57.190: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:19:58.171: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:19:58.171: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:19:59.169: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 21:19:59.169: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:20:00.168: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 21:20:00.168: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:20:01.212: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 -Jun 12 21:20:01.212: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set -STEP: Getting /status 06/12/23 21:20:01.221 -Jun 12 21:20:01.233: INFO: Daemon Set daemon-set has Conditions: [] -STEP: updating the DaemonSet Status 06/12/23 21:20:01.233 -Jun 12 21:20:01.258: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} -STEP: watching for the daemon set status to be updated 06/12/23 21:20:01.258 -Jun 12 21:20:01.264: INFO: Observed &DaemonSet event: ADDED -Jun 12 21:20:01.264: INFO: Observed &DaemonSet event: MODIFIED -Jun 12 21:20:01.265: INFO: Observed &DaemonSet event: MODIFIED -Jun 12 21:20:01.266: INFO: Observed &DaemonSet event: MODIFIED -Jun 12 21:20:01.266: INFO: Observed &DaemonSet event: MODIFIED -Jun 12 21:20:01.266: INFO: Found daemon set daemon-set in namespace daemonsets-7481 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] -Jun 12 21:20:01.266: INFO: Daemon set daemon-set has an updated status -STEP: patching the DaemonSet Status 06/12/23 21:20:01.266 -STEP: watching for the daemon set status to be patched 06/12/23 21:20:01.285 -Jun 12 21:20:01.291: INFO: Observed &DaemonSet event: ADDED -Jun 12 21:20:01.292: INFO: Observed &DaemonSet event: MODIFIED -Jun 12 21:20:01.293: INFO: Observed &DaemonSet event: MODIFIED -Jun 12 21:20:01.293: INFO: Observed &DaemonSet event: MODIFIED -Jun 12 21:20:01.294: INFO: Observed &DaemonSet event: MODIFIED -Jun 12 21:20:01.294: INFO: Observed daemon set daemon-set in namespace daemonsets-7481 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] -Jun 12 21:20:01.295: INFO: Observed &DaemonSet event: MODIFIED -Jun 12 21:20:01.295: INFO: Found daemon set daemon-set in namespace daemonsets-7481 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] -Jun 12 21:20:01.295: INFO: Daemon set daemon-set has a patched status -[AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 -STEP: Deleting DaemonSet "daemon-set" 06/12/23 21:20:01.304 
-STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7481, will wait for the garbage collector to delete the pods 06/12/23 21:20:01.304 -Jun 12 21:20:01.384: INFO: Deleting DaemonSet.extensions daemon-set took: 15.798593ms -Jun 12 21:20:01.484: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.200837ms -Jun 12 21:20:04.293: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:20:04.293: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set -Jun 12 21:20:04.302: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"98865"},"items":null} - -Jun 12 21:20:04.311: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"98865"},"items":null} +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 +STEP: create the deployment 07/27/23 01:58:27.468 +W0727 01:58:27.489696 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Wait for the Deployment to create new ReplicaSet 07/27/23 01:58:27.489 +STEP: delete the deployment 07/27/23 01:58:28.014 +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 07/27/23 01:58:28.034 +STEP: Gathering metrics 07/27/23 01:58:28.581 +W0727 01:58:28.598459 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+Jul 27 01:58:28.598: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: -[AfterEach] [sig-apps] Daemon set [Serial] +[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 -Jun 12 21:20:04.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +Jul 27 01:58:28.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 -STEP: Destroying namespace "daemonsets-7481" for this suite. 06/12/23 21:20:04.36 +STEP: Destroying namespace "gc-8684" for this suite. 
07/27/23 01:58:28.608 ------------------------------ -• [SLOW TEST] [8.432 seconds] -[sig-apps] Daemon set [Serial] -test/e2e/apps/framework.go:23 - should verify changes to a daemon set status [Conformance] - test/e2e/apps/daemon_set.go:862 +• [1.273 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Daemon set [Serial] + [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:19:55.946 - Jun 12 21:19:55.947: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename daemonsets 06/12/23 21:19:55.948 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:19:55.988 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:19:56.006 - [BeforeEach] [sig-apps] Daemon set [Serial] + STEP: Creating a kubernetes client 07/27/23 01:58:27.396 + Jul 27 01:58:27.396: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename gc 07/27/23 01:58:27.396 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:58:27.45 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:58:27.459 + [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 - [It] should verify changes to a daemon set status [Conformance] - test/e2e/apps/daemon_set.go:862 - STEP: Creating simple DaemonSet "daemon-set" 06/12/23 21:19:56.112 - STEP: Check that daemon pods launch on every node of the cluster. 
06/12/23 21:19:56.128 - Jun 12 21:19:56.146: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:19:56.146: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:19:57.189: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:19:57.190: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:19:58.171: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:19:58.171: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:19:59.169: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 21:19:59.169: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:20:00.168: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 21:20:00.168: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:20:01.212: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 - Jun 12 21:20:01.212: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set - STEP: Getting /status 06/12/23 21:20:01.221 - Jun 12 21:20:01.233: INFO: Daemon Set daemon-set has Conditions: [] - STEP: updating the DaemonSet Status 06/12/23 21:20:01.233 - Jun 12 21:20:01.258: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} - STEP: watching for the daemon set status to be updated 06/12/23 21:20:01.258 - Jun 12 21:20:01.264: INFO: Observed &DaemonSet event: ADDED - Jun 12 21:20:01.264: INFO: Observed &DaemonSet event: MODIFIED - Jun 12 21:20:01.265: INFO: Observed &DaemonSet event: MODIFIED - Jun 12 21:20:01.266: INFO: Observed &DaemonSet event: MODIFIED - Jun 12 21:20:01.266: INFO: Observed &DaemonSet event: MODIFIED - Jun 12 21:20:01.266: INFO: Found daemon set daemon-set in namespace daemonsets-7481 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] - Jun 12 21:20:01.266: INFO: Daemon set daemon-set has an updated status - STEP: patching the DaemonSet Status 06/12/23 21:20:01.266 - STEP: watching for the daemon set status to be patched 06/12/23 21:20:01.285 - Jun 12 21:20:01.291: INFO: Observed &DaemonSet event: ADDED - Jun 12 21:20:01.292: INFO: Observed &DaemonSet event: MODIFIED - Jun 12 21:20:01.293: INFO: Observed &DaemonSet event: MODIFIED - Jun 12 21:20:01.293: INFO: Observed &DaemonSet event: MODIFIED - Jun 12 21:20:01.294: INFO: Observed &DaemonSet event: MODIFIED - Jun 12 21:20:01.294: INFO: Observed daemon set daemon-set in namespace daemonsets-7481 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] - Jun 12 21:20:01.295: INFO: Observed &DaemonSet event: MODIFIED - Jun 12 21:20:01.295: INFO: Found daemon set daemon-set in namespace daemonsets-7481 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] - Jun 12 21:20:01.295: INFO: Daemon set daemon-set has a patched status - [AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 - STEP: Deleting 
DaemonSet "daemon-set" 06/12/23 21:20:01.304 - STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7481, will wait for the garbage collector to delete the pods 06/12/23 21:20:01.304 - Jun 12 21:20:01.384: INFO: Deleting DaemonSet.extensions daemon-set took: 15.798593ms - Jun 12 21:20:01.484: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.200837ms - Jun 12 21:20:04.293: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:20:04.293: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set - Jun 12 21:20:04.302: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"98865"},"items":null} - - Jun 12 21:20:04.311: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"98865"},"items":null} + [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 + STEP: create the deployment 07/27/23 01:58:27.468 + W0727 01:58:27.489696 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Wait for the Deployment to create new ReplicaSet 07/27/23 01:58:27.489 + STEP: delete the deployment 07/27/23 01:58:28.014 + STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 07/27/23 01:58:28.034 + STEP: Gathering metrics 07/27/23 01:58:28.581 + W0727 01:58:28.598459 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+ Jul 27 01:58:28.598: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: - [AfterEach] [sig-apps] Daemon set [Serial] + [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 - Jun 12 21:20:04.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + Jul 27 01:58:28.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 - STEP: Destroying namespace "daemonsets-7481" for this suite. 06/12/23 21:20:04.36 + STEP: Destroying namespace "gc-8684" for this suite. 
07/27/23 01:58:28.608 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSS +SSSSSS ------------------------------ -[sig-storage] Downward API volume - should provide podname only [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:53 -[BeforeEach] [sig-storage] Downward API volume +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1276 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:20:04.379 -Jun 12 21:20:04.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 21:20:04.384 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:20:04.424 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:20:04.436 -[BeforeEach] [sig-storage] Downward API volume +STEP: Creating a kubernetes client 07/27/23 01:58:28.669 +Jul 27 01:58:28.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 01:58:28.67 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:58:28.731 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:58:28.739 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 -[It] should provide podname only [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:53 -STEP: Creating a pod to test downward API volume plugin 06/12/23 21:20:04.455 -Jun 12 21:20:04.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a" in namespace "downward-api-1726" to be "Succeeded or Failed" -Jun 12 21:20:04.494: INFO: Pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.669917ms -Jun 12 21:20:06.505: INFO: Pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024151604s -Jun 12 21:20:08.504: INFO: Pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02350568s -Jun 12 21:20:10.521: INFO: Pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.040306986s -STEP: Saw pod success 06/12/23 21:20:10.521 -Jun 12 21:20:10.522: INFO: Pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a" satisfied condition "Succeeded or Failed" -Jun 12 21:20:10.533: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a container client-container: -STEP: delete the pod 06/12/23 21:20:10.649 -Jun 12 21:20:10.684: INFO: Waiting for pod downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a to disappear -Jun 12 21:20:10.697: INFO: Pod downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a no longer exists -[AfterEach] [sig-storage] Downward API volume +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1276 +Jul 27 01:58:28.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 create -f -' +Jul 27 01:58:32.975: INFO: stderr: "" +Jul 27 01:58:32.975: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Jul 27 01:58:32.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 create -f -' +Jul 27 01:58:33.456: INFO: stderr: "" +Jul 27 01:58:33.456: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 07/27/23 01:58:33.456 +Jul 27 01:58:34.465: INFO: Selector matched 1 pods for map[app:agnhost] +Jul 27 01:58:34.465: INFO: Found 0 / 1 +Jul 27 01:58:35.466: INFO: Selector matched 1 pods for map[app:agnhost] +Jul 27 01:58:35.466: INFO: Found 1 / 1 +Jul 27 01:58:35.466: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jul 27 01:58:35.474: INFO: Selector matched 1 pods for map[app:agnhost] +Jul 27 01:58:35.474: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Jul 27 01:58:35.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 describe pod agnhost-primary-wp2z8' +Jul 27 01:58:35.582: INFO: stderr: "" +Jul 27 01:58:35.582: INFO: stdout: "Name: agnhost-primary-wp2z8\nNamespace: kubectl-379\nPriority: 0\nService Account: default\nNode: 10.245.128.19/10.245.128.19\nStart Time: Thu, 27 Jul 2023 01:58:33 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: 4313f81c99e5b7c03208c52da4edb82f16f2264da3995baadf8ea72b40c12cea\n cni.projectcalico.org/podIP: 172.17.225.37/32\n cni.projectcalico.org/podIPs: 172.17.225.37/32\n k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.17.225.37\"\n ],\n \"default\": true,\n \"dns\": {}\n }]\n openshift.io/scc: anyuid\nStatus: Running\nIP: 172.17.225.37\nIPs:\n IP: 172.17.225.37\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: cri-o://c84d4bdfec73be0638653a5e4e53ecdc772053f88e5d70fb0a2ed6e9ad3cd801\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Image ID: registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 27 Jul 2023 01:58:34 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8q949 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-8q949:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-379/agnhost-primary-wp2z8 to 10.245.128.19\n Normal AddedInterface 1s multus Add eth0 [172.17.225.37/32] from k8s-pod-network\n Normal Pulled 1s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.43\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" +Jul 27 01:58:35.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 describe rc agnhost-primary' +Jul 27 01:58:35.687: INFO: stderr: "" +Jul 27 01:58:35.687: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-379\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-wp2z8\n" +Jul 27 01:58:35.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 describe service agnhost-primary' 
+Jul 27 01:58:35.899: INFO: stderr: "" +Jul 27 01:58:35.899: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-379\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 172.21.154.133\nIPs: 172.21.154.133\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 172.17.225.37:6379\nSession Affinity: None\nEvents: \n" +Jul 27 01:58:35.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 describe node 10.245.128.17' +Jul 27 01:58:36.387: INFO: stderr: "" +Jul 27 01:58:36.387: INFO: stdout: "Name: 10.245.128.17\nRoles: master,worker\nLabels: arch=amd64\n beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=bx2.4x16\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=au-syd\n failure-domain.beta.kubernetes.io/zone=au-syd-3\n ibm-cloud.kubernetes.io/iaas-provider=g2\n ibm-cloud.kubernetes.io/instance-id=02j7_ca6791eb-413f-4147-9b54-5e3c4b4992a2\n ibm-cloud.kubernetes.io/internal-ip=10.245.128.17\n ibm-cloud.kubernetes.io/machine-type=bx2.4x16\n ibm-cloud.kubernetes.io/os=REDHAT_8_64\n ibm-cloud.kubernetes.io/region=au-syd\n ibm-cloud.kubernetes.io/sgx-enabled=false\n ibm-cloud.kubernetes.io/subnet-id=02j7-9af2c7d9-0b70-4ed4-87d6-2a2c87b14d32\n ibm-cloud.kubernetes.io/worker-id=kube-cj0q6c8s0ufjsdor0f4g-kubee2epvgo-default-00000176\n ibm-cloud.kubernetes.io/worker-pool-id=cj0q6c8s0ufjsdor0f4g-9ed60cd\n ibm-cloud.kubernetes.io/worker-pool-name=default\n ibm-cloud.kubernetes.io/worker-version=4.13.4_1526_openshift\n ibm-cloud.kubernetes.io/zone=au-syd-3\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=10.245.128.17\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\n node-role.kubernetes.io/worker=\n node.kubernetes.io/instance-type=bx2.4x16\n node.openshift.io/os_id=rhel\n topology.kubernetes.io/region=au-syd\n topology.kubernetes.io/zone=au-syd-3\nAnnotations: csi.volume.kubernetes.io/nodeid: {\"vpc.block.csi.ibm.io\":\"kube-cj0q6c8s0ufjsdor0f4g-kubee2epvgo-default-00000176\"}\n projectcalico.org/IPv4Address: 10.245.128.17/24\n projectcalico.org/IPv4IPIPTunnelAddr: 172.17.218.0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 26 Jul 2023 23:12:10 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: 10.245.128.17\n AcquireTime: \n RenewTime: Thu, 27 Jul 2023 01:58:35 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Wed, 26 Jul 2023 23:20:01 +0000 Wed, 26 Jul 2023 23:20:01 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Thu, 27 Jul 2023 01:55:38 +0000 Wed, 26 Jul 2023 23:12:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 27 Jul 2023 01:55:38 +0000 Wed, 26 Jul 2023 23:12:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 27 Jul 2023 01:55:38 +0000 Wed, 26 Jul 2023 23:12:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 27 Jul 2023 01:55:38 +0000 Wed, 26 Jul 2023 23:21:42 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.245.128.17\n ExternalIP: 10.245.128.17\n Hostname: 10.245.128.17\nCapacity:\n cpu: 4\n ephemeral-storage: 102152044Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 16401480Ki\n pods: 110\n scheduling.k8s.io/foo: 
5\nAllocatable:\n cpu: 3910m\n ephemeral-storage: 99373508326\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 13611080Ki\n pods: 110\n scheduling.k8s.io/foo: 5\nSystem Info:\n Machine ID: ca6791eb413f41479b545e3c4b4992a2\n System UUID: ca6791eb-413f-4147-9b54-5e3c4b4992a2\n Boot ID: 2f106fd2-5635-47ba-8727-598557009b38\n Kernel Version: 4.18.0-477.15.1.el8_8.x86_64\n OS Image: Red Hat Enterprise Linux 8.8 (Ootpa)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: cri-o://1.26.3-9.rhaos4.13.git994242a.el8\n Kubelet Version: v1.26.5+7d22122\n Kube-Proxy Version: v1.26.5+7d22122\nPodCIDR: 172.17.192.0/24\nPodCIDRs: 172.17.192.0/24\nProviderID: ibm://68010fd8df4f467681ddec1e065d7a48///cj0q6c8s0ufjsdor0f4g/kube-cj0q6c8s0ufjsdor0f4g-kubee2epvgo-default-00000176\nNon-terminated Pods: (43 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n calico-system calico-node-6gb7d 250m (6%) 0 (0%) 80Mi (0%) 0 (0%) 158m\n kube-system ibm-keepalived-watcher-krnnt 5m (0%) 0 (0%) 10Mi (0%) 0 (0%) 166m\n kube-system ibm-master-proxy-static-10.245.128.17 26m (0%) 300m (7%) 32001024 (0%) 512M (3%) 165m\n kube-system ibm-vpc-block-csi-controller-0 165m (4%) 660m (16%) 330Mi (2%) 1320Mi (9%) 152m\n kube-system ibm-vpc-block-csi-node-pb2sj 55m (1%) 220m (5%) 125Mi (0%) 500Mi (3%) 166m\n kube-system vpn-7d8b749c64-87d9s 5m (0%) 0 (0%) 5Mi (0%) 0 (0%) 152m\n openshift-cluster-node-tuning-operator tuned-wnh5v 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 152m\n openshift-cluster-storage-operator csi-snapshot-controller-5b77984679-frszr 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 156m\n openshift-cluster-storage-operator csi-snapshot-webhook-78b8c8d77c-2pk6s 10m (0%) 0 (0%) 20Mi (0%) 0 (0%) 156m\n openshift-console console-7fd48bd95f-wksvb 10m (0%) 0 (0%) 100Mi (0%) 0 (0%) 150m\n openshift-console downloads-6874b45df6-w7xkq 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 156m\n openshift-dns dns-default-5mw2g 60m (1%) 0 (0%) 110Mi (0%) 0 (0%) 152m\n openshift-dns node-resolver-2kt92 5m (0%) 0 (0%) 21Mi (0%) 0 (0%) 152m\n openshift-image-registry image-registry-69fbbd6d88-6xgnp 100m (2%) 0 (0%) 256Mi (1%) 0 (0%) 8m29s\n openshift-image-registry node-ca-pmxp9 10m (0%) 0 (0%) 10Mi (0%) 0 (0%) 152m\n openshift-ingress-canary ingress-canary-wh5qj 10m (0%) 0 (0%) 20Mi (0%) 0 (0%) 152m\n openshift-ingress router-default-865b575f54-qjwfv 100m (2%) 0 (0%) 256Mi (1%) 0 (0%) 152m\n openshift-kube-proxy openshift-kube-proxy-r7t77 110m (2%) 0 (0%) 220Mi (1%) 0 (0%) 161m\n openshift-kube-storage-version-migrator migrator-77d7ddf546-9g7xm 10m (0%) 0 (0%) 200Mi (1%) 0 (0%) 152m\n openshift-marketplace certified-operators-qlqcc 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 155m\n openshift-marketplace community-operators-dtgmg 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 155m\n openshift-marketplace redhat-marketplace-vnvdb 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 155m\n openshift-marketplace redhat-operators-9qw52 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 28m\n openshift-monitoring alertmanager-main-1 9m (0%) 0 (0%) 120Mi (0%) 0 (0%) 150m\n openshift-monitoring kube-state-metrics-575bd9d6b6-2wk6g 4m (0%) 0 (0%) 110Mi (0%) 0 (0%) 8m29s\n openshift-monitoring node-exporter-2tscc 9m (0%) 0 (0%) 47Mi (0%) 0 (0%) 152m\n openshift-monitoring openshift-state-metrics-99754b784-vdbrs 3m (0%) 0 (0%) 72Mi (0%) 0 (0%) 8m29s\n openshift-monitoring prometheus-adapter-657855c676-qlc95 1m (0%) 0 (0%) 40Mi (0%) 0 (0%) 152m\n openshift-monitoring prometheus-k8s-1 75m (1%) 0 (0%) 1104Mi (8%) 0 (0%) 
150m\n openshift-monitoring prometheus-operator-765bbdfd45-twq98 6m (0%) 0 (0%) 165Mi (1%) 0 (0%) 8m29s\n openshift-monitoring prometheus-operator-admission-webhook-84c7bbc8cc-hct4l 5m (0%) 0 (0%) 30Mi (0%) 0 (0%) 152m\n openshift-monitoring telemeter-client-c964ff8c9-xszvz 3m (0%) 0 (0%) 70Mi (0%) 0 (0%) 8m29s\n openshift-monitoring thanos-querier-7f9c896d7f-xqld6 15m (0%) 0 (0%) 92Mi (0%) 0 (0%) 152m\n openshift-multus multus-5x56j 10m (0%) 0 (0%) 65Mi (0%) 0 (0%) 161m\n openshift-multus multus-additional-cni-plugins-p7gf5 10m (0%) 0 (0%) 10Mi (0%) 0 (0%) 161m\n openshift-multus multus-admission-controller-8ccd764f4-j68g7 20m (0%) 0 (0%) 70Mi (0%) 0 (0%) 152m\n openshift-multus network-metrics-daemon-djvdx 20m (0%) 0 (0%) 120Mi (0%) 0 (0%) 161m\n openshift-network-diagnostics network-check-target-2j7hq 10m (0%) 0 (0%) 15Mi (0%) 0 (0%) 161m\n openshift-operator-lifecycle-manager packageserver-b9964c68-p2fd4 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 152m\n openshift-service-ca service-ca-665db46585-9cprv 10m (0%) 0 (0%) 120Mi (0%) 0 (0%) 156m\n sonobuoy sonobuoy-e2e-job-17fd703895604ed7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m\n sonobuoy sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m\n tigera-operator tigera-operator-5b48cf996b-5zb5v 100m (2%) 0 (0%) 40Mi (0%) 0 (0%) 177m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1321m (33%) 1180m (30%)\n memory 4591123Ki (33%) 2420408320 (17%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n scheduling.k8s.io/foo 0 0\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 161m kube-proxy \n Normal Starting 166m kubelet Starting kubelet.\n Normal NodeAllocatableEnforced 166m kubelet Updated Node Allocatable limit across pods\n Normal RegisteredNode 166m node-controller Node 10.245.128.17 event: Registered Node 10.245.128.17 in Controller\n Normal NodeHasSufficientMemory 166m (x8 over 166m) kubelet Node 10.245.128.17 status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 166m (x8 over 166m) kubelet Node 10.245.128.17 status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 166m (x7 over 166m) kubelet Node 10.245.128.17 status is now: NodeHasSufficientPID\n Normal Synced 166m cloud-node-controller Node synced successfully\n Normal NodeReady 156m kubelet Node 10.245.128.17 status is now: NodeReady\n Normal RegisteredNode 152m node-controller Node 10.245.128.17 event: Registered Node 10.245.128.17 in Controller\n" +Jul 27 01:58:36.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 describe namespace kubectl-379' +Jul 27 01:58:36.493: INFO: stderr: "" +Jul 27 01:58:36.493: INFO: stdout: "Name: kubectl-379\nLabels: e2e-framework=kubectl\n e2e-run=d18faff3-626a-4ea4-87bd-253935adf598\n kubernetes.io/metadata.name=kubectl-379\n pod-security.kubernetes.io/audit=privileged\n pod-security.kubernetes.io/audit-version=v1.24\n pod-security.kubernetes.io/enforce=baseline\n pod-security.kubernetes.io/warn=privileged\n pod-security.kubernetes.io/warn-version=v1.24\nAnnotations: openshift.io/sa.scc.mcs: s0:c46,c0\n openshift.io/sa.scc.supplemental-groups: 1002070000/10000\n openshift.io/sa.scc.uid-range: 1002070000/10000\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 21:20:10.697: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Downward API volume +Jul 27 01:58:36.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-1726" for this suite. 06/12/23 21:20:10.722 +STEP: Destroying namespace "kubectl-379" for this suite. 07/27/23 01:58:36.535 ------------------------------ -• [SLOW TEST] [6.363 seconds] -[sig-storage] Downward API volume -test/e2e/common/storage/framework.go:23 - should provide podname only [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:53 +• [SLOW TEST] [7.888 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl describe + test/e2e/kubectl/kubectl.go:1270 + should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1276 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Downward API volume + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:20:04.379 - Jun 12 21:20:04.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 21:20:04.384 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:20:04.424 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:20:04.436 - [BeforeEach] [sig-storage] Downward API volume + STEP: Creating a kubernetes client 07/27/23 01:58:28.669 + Jul 27 01:58:28.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 01:58:28.67 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:58:28.731 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:58:28.739 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 - [It] should provide podname only [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:53 - STEP: Creating a pod to test downward API volume plugin 06/12/23 21:20:04.455 - Jun 12 21:20:04.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a" in namespace "downward-api-1726" to be "Succeeded or Failed" - Jun 12 21:20:04.494: INFO: Pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.669917ms - Jun 12 21:20:06.505: INFO: Pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024151604s - Jun 12 21:20:08.504: INFO: Pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02350568s - Jun 12 21:20:10.521: INFO: Pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.040306986s - STEP: Saw pod success 06/12/23 21:20:10.521 - Jun 12 21:20:10.522: INFO: Pod "downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a" satisfied condition "Succeeded or Failed" - Jun 12 21:20:10.533: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a container client-container: - STEP: delete the pod 06/12/23 21:20:10.649 - Jun 12 21:20:10.684: INFO: Waiting for pod downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a to disappear - Jun 12 21:20:10.697: INFO: Pod downwardapi-volume-f3574b40-1d57-4ed1-9db8-515cd9d6400a no longer exists - [AfterEach] [sig-storage] Downward API volume + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1276 + Jul 27 01:58:28.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 create -f -' + Jul 27 01:58:32.975: INFO: stderr: "" + Jul 27 01:58:32.975: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + Jul 27 01:58:32.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 create -f -' + Jul 27 01:58:33.456: INFO: stderr: "" + Jul 27 01:58:33.456: INFO: stdout: "service/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 07/27/23 01:58:33.456 + Jul 27 01:58:34.465: INFO: Selector matched 1 pods for map[app:agnhost] + Jul 27 01:58:34.465: INFO: Found 0 / 1 + Jul 27 01:58:35.466: INFO: Selector matched 1 pods for map[app:agnhost] + Jul 27 01:58:35.466: INFO: Found 1 / 1 + Jul 27 01:58:35.466: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + Jul 27 01:58:35.474: INFO: Selector matched 1 pods for map[app:agnhost] + Jul 27 01:58:35.474: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+ Jul 27 01:58:35.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 describe pod agnhost-primary-wp2z8' + Jul 27 01:58:35.582: INFO: stderr: "" + Jul 27 01:58:35.582: INFO: stdout: "Name: agnhost-primary-wp2z8\nNamespace: kubectl-379\nPriority: 0\nService Account: default\nNode: 10.245.128.19/10.245.128.19\nStart Time: Thu, 27 Jul 2023 01:58:33 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: 4313f81c99e5b7c03208c52da4edb82f16f2264da3995baadf8ea72b40c12cea\n cni.projectcalico.org/podIP: 172.17.225.37/32\n cni.projectcalico.org/podIPs: 172.17.225.37/32\n k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.17.225.37\"\n ],\n \"default\": true,\n \"dns\": {}\n }]\n openshift.io/scc: anyuid\nStatus: Running\nIP: 172.17.225.37\nIPs:\n IP: 172.17.225.37\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: cri-o://c84d4bdfec73be0638653a5e4e53ecdc772053f88e5d70fb0a2ed6e9ad3cd801\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Image ID: registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 27 Jul 2023 01:58:34 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8q949 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-8q949:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-379/agnhost-primary-wp2z8 to 10.245.128.19\n Normal AddedInterface 1s multus Add eth0 [172.17.225.37/32] from k8s-pod-network\n Normal Pulled 1s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.43\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" + Jul 27 01:58:35.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 describe rc agnhost-primary' + Jul 27 01:58:35.687: INFO: stderr: "" + Jul 27 01:58:35.687: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-379\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-wp2z8\n" + Jul 27 01:58:35.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 describe service 
agnhost-primary' + Jul 27 01:58:35.899: INFO: stderr: "" + Jul 27 01:58:35.899: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-379\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 172.21.154.133\nIPs: 172.21.154.133\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 172.17.225.37:6379\nSession Affinity: None\nEvents: \n" + Jul 27 01:58:35.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 describe node 10.245.128.17' + Jul 27 01:58:36.387: INFO: stderr: "" + Jul 27 01:58:36.387: INFO: stdout: "Name: 10.245.128.17\nRoles: master,worker\nLabels: arch=amd64\n beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=bx2.4x16\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=au-syd\n failure-domain.beta.kubernetes.io/zone=au-syd-3\n ibm-cloud.kubernetes.io/iaas-provider=g2\n ibm-cloud.kubernetes.io/instance-id=02j7_ca6791eb-413f-4147-9b54-5e3c4b4992a2\n ibm-cloud.kubernetes.io/internal-ip=10.245.128.17\n ibm-cloud.kubernetes.io/machine-type=bx2.4x16\n ibm-cloud.kubernetes.io/os=REDHAT_8_64\n ibm-cloud.kubernetes.io/region=au-syd\n ibm-cloud.kubernetes.io/sgx-enabled=false\n ibm-cloud.kubernetes.io/subnet-id=02j7-9af2c7d9-0b70-4ed4-87d6-2a2c87b14d32\n ibm-cloud.kubernetes.io/worker-id=kube-cj0q6c8s0ufjsdor0f4g-kubee2epvgo-default-00000176\n ibm-cloud.kubernetes.io/worker-pool-id=cj0q6c8s0ufjsdor0f4g-9ed60cd\n ibm-cloud.kubernetes.io/worker-pool-name=default\n ibm-cloud.kubernetes.io/worker-version=4.13.4_1526_openshift\n ibm-cloud.kubernetes.io/zone=au-syd-3\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=10.245.128.17\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\n node-role.kubernetes.io/worker=\n node.kubernetes.io/instance-type=bx2.4x16\n node.openshift.io/os_id=rhel\n topology.kubernetes.io/region=au-syd\n topology.kubernetes.io/zone=au-syd-3\nAnnotations: csi.volume.kubernetes.io/nodeid: {\"vpc.block.csi.ibm.io\":\"kube-cj0q6c8s0ufjsdor0f4g-kubee2epvgo-default-00000176\"}\n projectcalico.org/IPv4Address: 10.245.128.17/24\n projectcalico.org/IPv4IPIPTunnelAddr: 172.17.218.0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 26 Jul 2023 23:12:10 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: 10.245.128.17\n AcquireTime: \n RenewTime: Thu, 27 Jul 2023 01:58:35 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Wed, 26 Jul 2023 23:20:01 +0000 Wed, 26 Jul 2023 23:20:01 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Thu, 27 Jul 2023 01:55:38 +0000 Wed, 26 Jul 2023 23:12:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 27 Jul 2023 01:55:38 +0000 Wed, 26 Jul 2023 23:12:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 27 Jul 2023 01:55:38 +0000 Wed, 26 Jul 2023 23:12:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 27 Jul 2023 01:55:38 +0000 Wed, 26 Jul 2023 23:21:42 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.245.128.17\n ExternalIP: 10.245.128.17\n Hostname: 10.245.128.17\nCapacity:\n cpu: 4\n ephemeral-storage: 102152044Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 16401480Ki\n pods: 110\n 
scheduling.k8s.io/foo: 5\nAllocatable:\n cpu: 3910m\n ephemeral-storage: 99373508326\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 13611080Ki\n pods: 110\n scheduling.k8s.io/foo: 5\nSystem Info:\n Machine ID: ca6791eb413f41479b545e3c4b4992a2\n System UUID: ca6791eb-413f-4147-9b54-5e3c4b4992a2\n Boot ID: 2f106fd2-5635-47ba-8727-598557009b38\n Kernel Version: 4.18.0-477.15.1.el8_8.x86_64\n OS Image: Red Hat Enterprise Linux 8.8 (Ootpa)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: cri-o://1.26.3-9.rhaos4.13.git994242a.el8\n Kubelet Version: v1.26.5+7d22122\n Kube-Proxy Version: v1.26.5+7d22122\nPodCIDR: 172.17.192.0/24\nPodCIDRs: 172.17.192.0/24\nProviderID: ibm://68010fd8df4f467681ddec1e065d7a48///cj0q6c8s0ufjsdor0f4g/kube-cj0q6c8s0ufjsdor0f4g-kubee2epvgo-default-00000176\nNon-terminated Pods: (43 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n calico-system calico-node-6gb7d 250m (6%) 0 (0%) 80Mi (0%) 0 (0%) 158m\n kube-system ibm-keepalived-watcher-krnnt 5m (0%) 0 (0%) 10Mi (0%) 0 (0%) 166m\n kube-system ibm-master-proxy-static-10.245.128.17 26m (0%) 300m (7%) 32001024 (0%) 512M (3%) 165m\n kube-system ibm-vpc-block-csi-controller-0 165m (4%) 660m (16%) 330Mi (2%) 1320Mi (9%) 152m\n kube-system ibm-vpc-block-csi-node-pb2sj 55m (1%) 220m (5%) 125Mi (0%) 500Mi (3%) 166m\n kube-system vpn-7d8b749c64-87d9s 5m (0%) 0 (0%) 5Mi (0%) 0 (0%) 152m\n openshift-cluster-node-tuning-operator tuned-wnh5v 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 152m\n openshift-cluster-storage-operator csi-snapshot-controller-5b77984679-frszr 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 156m\n openshift-cluster-storage-operator csi-snapshot-webhook-78b8c8d77c-2pk6s 10m (0%) 0 (0%) 20Mi (0%) 0 (0%) 156m\n openshift-console console-7fd48bd95f-wksvb 10m (0%) 0 (0%) 100Mi (0%) 0 (0%) 150m\n openshift-console downloads-6874b45df6-w7xkq 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 156m\n openshift-dns dns-default-5mw2g 60m (1%) 0 (0%) 110Mi (0%) 0 (0%) 152m\n openshift-dns node-resolver-2kt92 5m (0%) 0 (0%) 21Mi (0%) 0 (0%) 152m\n openshift-image-registry image-registry-69fbbd6d88-6xgnp 100m (2%) 0 (0%) 256Mi (1%) 0 (0%) 8m29s\n openshift-image-registry node-ca-pmxp9 10m (0%) 0 (0%) 10Mi (0%) 0 (0%) 152m\n openshift-ingress-canary ingress-canary-wh5qj 10m (0%) 0 (0%) 20Mi (0%) 0 (0%) 152m\n openshift-ingress router-default-865b575f54-qjwfv 100m (2%) 0 (0%) 256Mi (1%) 0 (0%) 152m\n openshift-kube-proxy openshift-kube-proxy-r7t77 110m (2%) 0 (0%) 220Mi (1%) 0 (0%) 161m\n openshift-kube-storage-version-migrator migrator-77d7ddf546-9g7xm 10m (0%) 0 (0%) 200Mi (1%) 0 (0%) 152m\n openshift-marketplace certified-operators-qlqcc 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 155m\n openshift-marketplace community-operators-dtgmg 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 155m\n openshift-marketplace redhat-marketplace-vnvdb 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 155m\n openshift-marketplace redhat-operators-9qw52 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 28m\n openshift-monitoring alertmanager-main-1 9m (0%) 0 (0%) 120Mi (0%) 0 (0%) 150m\n openshift-monitoring kube-state-metrics-575bd9d6b6-2wk6g 4m (0%) 0 (0%) 110Mi (0%) 0 (0%) 8m29s\n openshift-monitoring node-exporter-2tscc 9m (0%) 0 (0%) 47Mi (0%) 0 (0%) 152m\n openshift-monitoring openshift-state-metrics-99754b784-vdbrs 3m (0%) 0 (0%) 72Mi (0%) 0 (0%) 8m29s\n openshift-monitoring prometheus-adapter-657855c676-qlc95 1m (0%) 0 (0%) 40Mi (0%) 0 (0%) 152m\n openshift-monitoring prometheus-k8s-1 75m (1%) 0 
(0%) 1104Mi (8%) 0 (0%) 150m\n openshift-monitoring prometheus-operator-765bbdfd45-twq98 6m (0%) 0 (0%) 165Mi (1%) 0 (0%) 8m29s\n openshift-monitoring prometheus-operator-admission-webhook-84c7bbc8cc-hct4l 5m (0%) 0 (0%) 30Mi (0%) 0 (0%) 152m\n openshift-monitoring telemeter-client-c964ff8c9-xszvz 3m (0%) 0 (0%) 70Mi (0%) 0 (0%) 8m29s\n openshift-monitoring thanos-querier-7f9c896d7f-xqld6 15m (0%) 0 (0%) 92Mi (0%) 0 (0%) 152m\n openshift-multus multus-5x56j 10m (0%) 0 (0%) 65Mi (0%) 0 (0%) 161m\n openshift-multus multus-additional-cni-plugins-p7gf5 10m (0%) 0 (0%) 10Mi (0%) 0 (0%) 161m\n openshift-multus multus-admission-controller-8ccd764f4-j68g7 20m (0%) 0 (0%) 70Mi (0%) 0 (0%) 152m\n openshift-multus network-metrics-daemon-djvdx 20m (0%) 0 (0%) 120Mi (0%) 0 (0%) 161m\n openshift-network-diagnostics network-check-target-2j7hq 10m (0%) 0 (0%) 15Mi (0%) 0 (0%) 161m\n openshift-operator-lifecycle-manager packageserver-b9964c68-p2fd4 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 152m\n openshift-service-ca service-ca-665db46585-9cprv 10m (0%) 0 (0%) 120Mi (0%) 0 (0%) 156m\n sonobuoy sonobuoy-e2e-job-17fd703895604ed7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m\n sonobuoy sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m\n tigera-operator tigera-operator-5b48cf996b-5zb5v 100m (2%) 0 (0%) 40Mi (0%) 0 (0%) 177m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1321m (33%) 1180m (30%)\n memory 4591123Ki (33%) 2420408320 (17%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n scheduling.k8s.io/foo 0 0\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 161m kube-proxy \n Normal Starting 166m kubelet Starting kubelet.\n Normal NodeAllocatableEnforced 166m kubelet Updated Node Allocatable limit across pods\n Normal RegisteredNode 166m node-controller Node 10.245.128.17 event: Registered Node 10.245.128.17 in Controller\n Normal NodeHasSufficientMemory 166m (x8 over 166m) kubelet Node 10.245.128.17 status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 166m (x8 over 166m) kubelet Node 10.245.128.17 status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 166m (x7 over 166m) kubelet Node 10.245.128.17 status is now: NodeHasSufficientPID\n Normal Synced 166m cloud-node-controller Node synced successfully\n Normal NodeReady 156m kubelet Node 10.245.128.17 status is now: NodeReady\n Normal RegisteredNode 152m node-controller Node 10.245.128.17 event: Registered Node 10.245.128.17 in Controller\n" + Jul 27 01:58:36.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-379 describe namespace kubectl-379' + Jul 27 01:58:36.493: INFO: stderr: "" + Jul 27 01:58:36.493: INFO: stdout: "Name: kubectl-379\nLabels: e2e-framework=kubectl\n e2e-run=d18faff3-626a-4ea4-87bd-253935adf598\n kubernetes.io/metadata.name=kubectl-379\n pod-security.kubernetes.io/audit=privileged\n pod-security.kubernetes.io/audit-version=v1.24\n pod-security.kubernetes.io/enforce=baseline\n pod-security.kubernetes.io/warn=privileged\n pod-security.kubernetes.io/warn-version=v1.24\nAnnotations: openshift.io/sa.scc.mcs: s0:c46,c0\n openshift.io/sa.scc.supplemental-groups: 1002070000/10000\n openshift.io/sa.scc.uid-range: 1002070000/10000\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 
- Jun 12 21:20:10.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Downward API volume + Jul 27 01:58:36.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-1726" for this suite. 06/12/23 21:20:10.722 + STEP: Destroying namespace "kubectl-379" for this suite. 07/27/23 01:58:36.535 << End Captured GinkgoWriter Output ------------------------------ -SSS +SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts - should allow opting out of API token automount [Conformance] - test/e2e/auth/service_accounts.go:161 + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 [BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:20:10.743 -Jun 12 21:20:10.744: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename svcaccounts 06/12/23 21:20:10.757 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:20:10.854 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:20:10.905 +STEP: Creating a kubernetes client 07/27/23 01:58:36.557 +Jul 27 01:58:36.557: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename svcaccounts 07/27/23 01:58:36.558 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:58:36.654 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:58:36.671 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 -[It] should allow opting out of API token automount [Conformance] - test/e2e/auth/service_accounts.go:161 -Jun 12 21:20:11.070: INFO: created pod pod-service-account-defaultsa -Jun 12 21:20:11.070: INFO: pod pod-service-account-defaultsa service account token volume mount: true -Jun 12 21:20:11.101: INFO: created pod pod-service-account-mountsa -Jun 12 21:20:11.101: INFO: pod pod-service-account-mountsa service account token volume mount: true -Jun 12 21:20:11.151: INFO: created pod pod-service-account-nomountsa -Jun 12 21:20:11.151: INFO: pod pod-service-account-nomountsa service account token volume mount: false -Jun 12 21:20:11.172: INFO: created pod pod-service-account-defaultsa-mountspec -Jun 12 21:20:11.172: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true -Jun 12 21:20:11.221: INFO: created pod pod-service-account-mountsa-mountspec -Jun 12 21:20:11.221: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true -Jun 12 21:20:11.240: INFO: created pod pod-service-account-nomountsa-mountspec -Jun 12 21:20:11.240: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true -Jun 12 21:20:11.267: INFO: created pod pod-service-account-defaultsa-nomountspec -Jun 12 21:20:11.267: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false -Jun 12 21:20:11.299: INFO: created pod 
pod-service-account-mountsa-nomountspec -Jun 12 21:20:11.300: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false -Jun 12 21:20:11.319: INFO: created pod pod-service-account-nomountsa-nomountspec -Jun 12 21:20:11.320: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 +Jul 27 01:58:36.738: INFO: created pod +Jul 27 01:58:36.738: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-1343" to be "Succeeded or Failed" +Jul 27 01:58:36.746: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.387816ms +Jul 27 01:58:38.756: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018317376s +Jul 27 01:58:40.755: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017141039s +STEP: Saw pod success 07/27/23 01:58:40.755 +Jul 27 01:58:40.755: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Jul 27 01:59:10.758: INFO: polling logs +Jul 27 01:59:10.780: INFO: Pod logs: +I0727 01:58:37.812706 1 log.go:198] OK: Got token +I0727 01:58:37.812752 1 log.go:198] validating with in-cluster discovery +I0727 01:58:37.813258 1 log.go:198] OK: got issuer https://kubernetes.default.svc +I0727 01:58:37.813292 1 log.go:198] Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-1343:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1690423717, NotBefore:1690423117, IssuedAt:1690423117, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1343", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"5f85e435-54b1-4cd3-a9c2-ed437befa5c0"}}} +I0727 01:58:37.837666 1 log.go:198] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc +I0727 01:58:37.858750 1 log.go:198] OK: Validated signature on JWT +I0727 01:58:37.858841 1 log.go:198] OK: Got valid claims from token! +I0727 01:58:37.858873 1 log.go:198] Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-1343:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1690423717, NotBefore:1690423117, IssuedAt:1690423117, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1343", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"5f85e435-54b1-4cd3-a9c2-ed437befa5c0"}}} + +Jul 27 01:59:10.781: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 -Jun 12 21:20:11.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 01:59:10.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 -STEP: Destroying namespace "svcaccounts-1558" for this suite. 06/12/23 21:20:11.385 +STEP: Destroying namespace "svcaccounts-1343" for this suite. 
07/27/23 01:59:10.818 ------------------------------ -• [0.690 seconds] +• [SLOW TEST] [34.285 seconds] [sig-auth] ServiceAccounts test/e2e/auth/framework.go:23 - should allow opting out of API token automount [Conformance] - test/e2e/auth/service_accounts.go:161 + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:20:10.743 - Jun 12 21:20:10.744: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename svcaccounts 06/12/23 21:20:10.757 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:20:10.854 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:20:10.905 + STEP: Creating a kubernetes client 07/27/23 01:58:36.557 + Jul 27 01:58:36.557: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename svcaccounts 07/27/23 01:58:36.558 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:58:36.654 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:58:36.671 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 - [It] should allow opting out of API token automount [Conformance] - test/e2e/auth/service_accounts.go:161 - Jun 12 21:20:11.070: INFO: created pod pod-service-account-defaultsa - Jun 12 21:20:11.070: INFO: pod pod-service-account-defaultsa service account token volume mount: true - Jun 12 21:20:11.101: INFO: created pod pod-service-account-mountsa - Jun 12 21:20:11.101: INFO: pod pod-service-account-mountsa service account token volume mount: true - Jun 12 21:20:11.151: INFO: created pod pod-service-account-nomountsa - Jun 12 21:20:11.151: INFO: pod pod-service-account-nomountsa service account token volume mount: false - Jun 12 21:20:11.172: INFO: created pod pod-service-account-defaultsa-mountspec - Jun 12 21:20:11.172: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true - Jun 12 21:20:11.221: INFO: created pod pod-service-account-mountsa-mountspec - Jun 12 21:20:11.221: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true - Jun 12 21:20:11.240: INFO: created pod pod-service-account-nomountsa-mountspec - Jun 12 21:20:11.240: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true - Jun 12 21:20:11.267: INFO: created pod pod-service-account-defaultsa-nomountspec - Jun 12 21:20:11.267: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false - Jun 12 21:20:11.299: INFO: created pod pod-service-account-mountsa-nomountspec - Jun 12 21:20:11.300: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false - Jun 12 21:20:11.319: INFO: created pod pod-service-account-nomountsa-nomountspec - Jun 12 21:20:11.320: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false + [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 + Jul 27 01:58:36.738: INFO: created pod + Jul 27 01:58:36.738: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-1343" to be "Succeeded or Failed" + Jul 
27 01:58:36.746: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.387816ms + Jul 27 01:58:38.756: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018317376s + Jul 27 01:58:40.755: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017141039s + STEP: Saw pod success 07/27/23 01:58:40.755 + Jul 27 01:58:40.755: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" + Jul 27 01:59:10.758: INFO: polling logs + Jul 27 01:59:10.780: INFO: Pod logs: + I0727 01:58:37.812706 1 log.go:198] OK: Got token + I0727 01:58:37.812752 1 log.go:198] validating with in-cluster discovery + I0727 01:58:37.813258 1 log.go:198] OK: got issuer https://kubernetes.default.svc + I0727 01:58:37.813292 1 log.go:198] Full, not-validated claims: + openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-1343:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1690423717, NotBefore:1690423117, IssuedAt:1690423117, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1343", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"5f85e435-54b1-4cd3-a9c2-ed437befa5c0"}}} + I0727 01:58:37.837666 1 log.go:198] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc + I0727 01:58:37.858750 1 log.go:198] OK: Validated signature on JWT + I0727 01:58:37.858841 1 log.go:198] OK: Got valid claims from token! + I0727 01:58:37.858873 1 log.go:198] Full, validated claims: + &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-1343:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1690423717, NotBefore:1690423117, IssuedAt:1690423117, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1343", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"5f85e435-54b1-4cd3-a9c2-ed437befa5c0"}}} + + Jul 27 01:59:10.781: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 - Jun 12 21:20:11.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 01:59:10.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 - STEP: Destroying namespace "svcaccounts-1558" for this suite. 06/12/23 21:20:11.385 + STEP: Destroying namespace "svcaccounts-1343" for this suite. 
07/27/23 01:59:10.818 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] EmptyDir volumes - should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:187 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:20:11.434 -Jun 12 21:20:11.434: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 21:20:11.436 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:20:11.564 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:20:11.573 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 01:59:10.847 +Jul 27 01:59:10.847: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename namespaces 07/27/23 01:59:10.848 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:10.904 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:10.912 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:187 -STEP: Creating a pod to test emptydir 0777 on node default medium 06/12/23 21:20:11.627 -Jun 12 21:20:11.671: INFO: Waiting up to 5m0s for pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87" in namespace "emptydir-3877" to be "Succeeded or Failed" -Jun 12 21:20:11.703: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87": Phase="Pending", Reason="", readiness=false. Elapsed: 31.734622ms -Jun 12 21:20:13.729: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057745083s -Jun 12 21:20:15.713: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041856679s -Jun 12 21:20:17.716: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044666147s -Jun 12 21:20:19.716: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.044849011s -STEP: Saw pod success 06/12/23 21:20:19.716 -Jun 12 21:20:19.716: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87" satisfied condition "Succeeded or Failed" -Jun 12 21:20:19.726: INFO: Trying to get logs from node 10.138.75.70 pod pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87 container test-container: -STEP: delete the pod 06/12/23 21:20:19.79 -Jun 12 21:20:19.822: INFO: Waiting for pod pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87 to disappear -Jun 12 21:20:19.831: INFO: Pod pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87 no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 +STEP: Creating a test namespace 07/27/23 01:59:10.921 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:11.18 +STEP: Creating a service in the namespace 07/27/23 01:59:11.189 +STEP: Deleting the namespace 07/27/23 01:59:11.361 +STEP: Waiting for the namespace to be removed. 07/27/23 01:59:11.397 +STEP: Recreating the namespace 07/27/23 01:59:18.411 +STEP: Verifying there is no service in the namespace 07/27/23 01:59:18.453 +[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 21:20:19.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 01:59:18.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-3877" for this suite. 06/12/23 21:20:19.857 +STEP: Destroying namespace "namespaces-4586" for this suite. 07/27/23 01:59:18.483 +STEP: Destroying namespace "nsdeletetest-2157" for this suite. 07/27/23 01:59:18.511 +Jul 27 01:59:18.533: INFO: Namespace nsdeletetest-2157 was already deleted +STEP: Destroying namespace "nsdeletetest-4797" for this suite. 
07/27/23 01:59:18.533 ------------------------------ -• [SLOW TEST] [8.441 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:187 +• [SLOW TEST] [7.716 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:20:11.434 - Jun 12 21:20:11.434: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 21:20:11.436 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:20:11.564 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:20:11.573 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 01:59:10.847 + Jul 27 01:59:10.847: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename namespaces 07/27/23 01:59:10.848 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:10.904 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:10.912 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:187 - STEP: Creating a pod to test emptydir 0777 on node default medium 06/12/23 21:20:11.627 - Jun 12 21:20:11.671: INFO: Waiting up to 5m0s for pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87" in namespace "emptydir-3877" to be "Succeeded or Failed" - Jun 12 21:20:11.703: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87": Phase="Pending", Reason="", readiness=false. Elapsed: 31.734622ms - Jun 12 21:20:13.729: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057745083s - Jun 12 21:20:15.713: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041856679s - Jun 12 21:20:17.716: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044666147s - Jun 12 21:20:19.716: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.044849011s - STEP: Saw pod success 06/12/23 21:20:19.716 - Jun 12 21:20:19.716: INFO: Pod "pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87" satisfied condition "Succeeded or Failed" - Jun 12 21:20:19.726: INFO: Trying to get logs from node 10.138.75.70 pod pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87 container test-container: - STEP: delete the pod 06/12/23 21:20:19.79 - Jun 12 21:20:19.822: INFO: Waiting for pod pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87 to disappear - Jun 12 21:20:19.831: INFO: Pod pod-e07e696d-82b9-4851-8eee-a3a78a0a2d87 no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [It] should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 + STEP: Creating a test namespace 07/27/23 01:59:10.921 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:11.18 + STEP: Creating a service in the namespace 07/27/23 01:59:11.189 + STEP: Deleting the namespace 07/27/23 01:59:11.361 + STEP: Waiting for the namespace to be removed. 07/27/23 01:59:11.397 + STEP: Recreating the namespace 07/27/23 01:59:18.411 + STEP: Verifying there is no service in the namespace 07/27/23 01:59:18.453 + [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 21:20:19.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 01:59:18.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-3877" for this suite. 06/12/23 21:20:19.857 + STEP: Destroying namespace "namespaces-4586" for this suite. 07/27/23 01:59:18.483 + STEP: Destroying namespace "nsdeletetest-2157" for this suite. 07/27/23 01:59:18.511 + Jul 27 01:59:18.533: INFO: Namespace nsdeletetest-2157 was already deleted + STEP: Destroying namespace "nsdeletetest-4797" for this suite. 
07/27/23 01:59:18.533 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSS +SSSSS ------------------------------ -[sig-node] Probing container - should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:184 -[BeforeEach] [sig-node] Probing container +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 +[BeforeEach] [sig-node] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:20:19.879 -Jun 12 21:20:19.879: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-probe 06/12/23 21:20:19.88 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:20:19.923 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:20:19.932 -[BeforeEach] [sig-node] Probing container +STEP: Creating a kubernetes client 07/27/23 01:59:18.563 +Jul 27 01:59:18.563: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 01:59:18.565 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:18.606 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:18.615 +[BeforeEach] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 -[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:184 -STEP: Creating pod liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c in namespace container-probe-6921 06/12/23 21:20:19.947 -Jun 12 21:20:19.972: INFO: Waiting up to 5m0s for pod "liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c" in namespace "container-probe-6921" to be "not pending" -Jun 12 21:20:20.030: INFO: Pod "liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 57.052104ms -Jun 12 21:20:22.043: INFO: Pod "liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070503542s -Jun 12 21:20:24.041: INFO: Pod "liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.068736051s -Jun 12 21:20:24.041: INFO: Pod "liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c" satisfied condition "not pending" -Jun 12 21:20:24.041: INFO: Started pod liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c in namespace container-probe-6921 -STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 21:20:24.041 -Jun 12 21:20:24.053: INFO: Initial restart count of pod liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c is 0 -STEP: deleting the pod 06/12/23 21:24:25.972 -[AfterEach] [sig-node] Probing container +[It] should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 +STEP: Creating projection with secret that has name secret-emptykey-test-44348b36-9bd0-4ea0-a7d9-6164fe3c92bc 07/27/23 01:59:18.625 +[AfterEach] [sig-node] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 21:24:25.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Probing container +Jul 27 01:59:18.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-node] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-node] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "container-probe-6921" for this suite. 06/12/23 21:24:26.012 +STEP: Destroying namespace "secrets-9920" for this suite. 07/27/23 01:59:18.646 ------------------------------ -• [SLOW TEST] [246.153 seconds] -[sig-node] Probing container +• [0.108 seconds] +[sig-node] Secrets test/e2e/common/node/framework.go:23 - should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:184 + should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Probing container + [BeforeEach] [sig-node] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:20:19.879 - Jun 12 21:20:19.879: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-probe 06/12/23 21:20:19.88 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:20:19.923 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:20:19.932 - [BeforeEach] [sig-node] Probing container + STEP: Creating a kubernetes client 07/27/23 01:59:18.563 + Jul 27 01:59:18.563: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 01:59:18.565 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:18.606 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:18.615 + [BeforeEach] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 - [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:184 - STEP: Creating pod liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c in namespace container-probe-6921 06/12/23 21:20:19.947 - Jun 12 21:20:19.972: INFO: Waiting up to 5m0s for pod 
"liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c" in namespace "container-probe-6921" to be "not pending" - Jun 12 21:20:20.030: INFO: Pod "liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 57.052104ms - Jun 12 21:20:22.043: INFO: Pod "liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070503542s - Jun 12 21:20:24.041: INFO: Pod "liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c": Phase="Running", Reason="", readiness=true. Elapsed: 4.068736051s - Jun 12 21:20:24.041: INFO: Pod "liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c" satisfied condition "not pending" - Jun 12 21:20:24.041: INFO: Started pod liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c in namespace container-probe-6921 - STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 21:20:24.041 - Jun 12 21:20:24.053: INFO: Initial restart count of pod liveness-ec4b96ee-2451-4dd9-81de-c87910f1f62c is 0 - STEP: deleting the pod 06/12/23 21:24:25.972 - [AfterEach] [sig-node] Probing container + [It] should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 + STEP: Creating projection with secret that has name secret-emptykey-test-44348b36-9bd0-4ea0-a7d9-6164fe3c92bc 07/27/23 01:59:18.625 + [AfterEach] [sig-node] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 21:24:25.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Probing container + Jul 27 01:59:18.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-node] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-node] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "container-probe-6921" for this suite. 06/12/23 21:24:26.012 + STEP: Destroying namespace "secrets-9920" for this suite. 
07/27/23 01:59:18.646 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSS +SS ------------------------------ -[sig-api-machinery] Namespaces [Serial] - should apply a finalizer to a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:394 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +[sig-network] Services + should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:24:26.037 -Jun 12 21:24:26.037: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename namespaces 06/12/23 21:24:26.04 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:24:26.088 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:24:26.105 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +STEP: Creating a kubernetes client 07/27/23 01:59:18.672 +Jul 27 01:59:18.672: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 01:59:18.673 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:18.721 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:18.73 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[It] should apply a finalizer to a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:394 -STEP: Creating namespace "e2e-ns-n8wnj" 06/12/23 21:24:26.116 -Jun 12 21:24:26.161: INFO: Namespace "e2e-ns-n8wnj-4888" has []v1.FinalizerName{"kubernetes"} -STEP: Adding e2e finalizer to namespace "e2e-ns-n8wnj-4888" 06/12/23 21:24:26.161 -Jun 12 21:24:26.228: INFO: Namespace "e2e-ns-n8wnj-4888" has []v1.FinalizerName{"kubernetes", "e2e.example.com/fakeFinalizer"} -STEP: Removing e2e finalizer from namespace "e2e-ns-n8wnj-4888" 06/12/23 21:24:26.229 -Jun 12 21:24:26.253: INFO: Namespace "e2e-ns-n8wnj-4888" has []v1.FinalizerName{"kubernetes"} -[AfterEach] [sig-api-machinery] Namespaces [Serial] +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 +STEP: creating a Service 07/27/23 01:59:18.786 +STEP: watching for the Service to be added 07/27/23 01:59:18.856 +Jul 27 01:59:18.861: INFO: Found Service test-service-8cjbd in namespace services-7033 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Jul 27 01:59:18.861: INFO: Service test-service-8cjbd created +STEP: Getting /status 07/27/23 01:59:18.861 +Jul 27 01:59:18.886: INFO: Service test-service-8cjbd has LoadBalancer: {[]} +STEP: patching the ServiceStatus 07/27/23 01:59:18.886 +STEP: watching for the Service to be patched 07/27/23 01:59:18.921 +Jul 27 01:59:18.934: INFO: observed Service test-service-8cjbd in namespace services-7033 with annotations: map[] & LoadBalancer: {[]} +Jul 27 01:59:18.934: INFO: Found Service test-service-8cjbd in namespace services-7033 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Jul 27 01:59:18.934: INFO: Service test-service-8cjbd has service status patched +STEP: updating the ServiceStatus 07/27/23 01:59:18.934 +Jul 27 01:59:18.972: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated 07/27/23 01:59:18.979 +Jul 27 01:59:18.989: INFO: Observed Service test-service-8cjbd in namespace services-7033 with annotations: map[] & Conditions: {[]} +Jul 27 01:59:18.989: INFO: Observed event: &Service{ObjectMeta:{test-service-8cjbd services-7033 d96ed6fa-bed2-40f0-a333-237661ba3dd2 85960 0 2023-07-27 01:59:18 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-07-27 01:59:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-07-27 01:59:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:172.21.23.194,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[172.21.23.194],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Jul 27 01:59:18.989: INFO: Found Service test-service-8cjbd in namespace services-7033 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Jul 27 01:59:18.989: INFO: Service test-service-8cjbd has service status updated +STEP: patching the service 07/27/23 01:59:18.989 +STEP: watching for the Service to be patched 07/27/23 01:59:19.007 +Jul 27 01:59:19.012: INFO: observed Service test-service-8cjbd in namespace services-7033 with labels: map[test-service-static:true] +Jul 27 01:59:19.012: INFO: observed Service test-service-8cjbd in namespace services-7033 with labels: map[test-service-static:true] +Jul 27 01:59:19.012: INFO: observed Service test-service-8cjbd in namespace services-7033 with labels: map[test-service-static:true] +Jul 27 01:59:19.012: INFO: Found Service test-service-8cjbd in namespace services-7033 with labels: map[test-service:patched test-service-static:true] +Jul 27 01:59:19.012: INFO: Service test-service-8cjbd patched +STEP: deleting the service 07/27/23 01:59:19.012 +STEP: watching for the Service to be deleted 07/27/23 01:59:19.066 +Jul 27 01:59:19.070: INFO: Observed event: ADDED +Jul 27 01:59:19.070: INFO: Observed event: MODIFIED +Jul 27 01:59:19.071: INFO: Observed event: MODIFIED +Jul 27 01:59:19.071: INFO: Observed event: MODIFIED +Jul 27 01:59:19.071: INFO: Found Service test-service-8cjbd in namespace services-7033 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Jul 27 01:59:19.071: INFO: Service test-service-8cjbd deleted +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:24:26.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] 
Namespaces [Serial] +Jul 27 01:59:19.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "namespaces-4561" for this suite. 06/12/23 21:24:26.268 -STEP: Destroying namespace "e2e-ns-n8wnj-4888" for this suite. 06/12/23 21:24:26.284 +STEP: Destroying namespace "services-7033" for this suite. 07/27/23 01:59:19.08 ------------------------------ -• [0.286 seconds] -[sig-api-machinery] Namespaces [Serial] -test/e2e/apimachinery/framework.go:23 - should apply a finalizer to a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:394 +• [0.430 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:24:26.037 - Jun 12 21:24:26.037: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename namespaces 06/12/23 21:24:26.04 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:24:26.088 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:24:26.105 - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + STEP: Creating a kubernetes client 07/27/23 01:59:18.672 + Jul 27 01:59:18.672: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 01:59:18.673 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:18.721 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:18.73 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [It] should apply a finalizer to a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:394 - STEP: Creating namespace "e2e-ns-n8wnj" 06/12/23 21:24:26.116 - Jun 12 21:24:26.161: INFO: Namespace "e2e-ns-n8wnj-4888" has []v1.FinalizerName{"kubernetes"} - STEP: Adding e2e finalizer to namespace "e2e-ns-n8wnj-4888" 06/12/23 21:24:26.161 - Jun 12 21:24:26.228: INFO: Namespace "e2e-ns-n8wnj-4888" has []v1.FinalizerName{"kubernetes", "e2e.example.com/fakeFinalizer"} - STEP: Removing e2e finalizer from namespace "e2e-ns-n8wnj-4888" 06/12/23 21:24:26.229 - Jun 12 21:24:26.253: INFO: Namespace "e2e-ns-n8wnj-4888" has []v1.FinalizerName{"kubernetes"} - [AfterEach] [sig-api-machinery] Namespaces [Serial] + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 + STEP: creating a Service 07/27/23 01:59:18.786 + STEP: watching for the Service to be added 07/27/23 01:59:18.856 + Jul 27 01:59:18.861: INFO: Found Service test-service-8cjbd in namespace services-7033 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] + Jul 27 01:59:18.861: INFO: Service test-service-8cjbd created + STEP: Getting /status 07/27/23 01:59:18.861 + Jul 27 01:59:18.886: INFO: 
Service test-service-8cjbd has LoadBalancer: {[]} + STEP: patching the ServiceStatus 07/27/23 01:59:18.886 + STEP: watching for the Service to be patched 07/27/23 01:59:18.921 + Jul 27 01:59:18.934: INFO: observed Service test-service-8cjbd in namespace services-7033 with annotations: map[] & LoadBalancer: {[]} + Jul 27 01:59:18.934: INFO: Found Service test-service-8cjbd in namespace services-7033 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} + Jul 27 01:59:18.934: INFO: Service test-service-8cjbd has service status patched + STEP: updating the ServiceStatus 07/27/23 01:59:18.934 + Jul 27 01:59:18.972: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the Service to be updated 07/27/23 01:59:18.979 + Jul 27 01:59:18.989: INFO: Observed Service test-service-8cjbd in namespace services-7033 with annotations: map[] & Conditions: {[]} + Jul 27 01:59:18.989: INFO: Observed event: &Service{ObjectMeta:{test-service-8cjbd services-7033 d96ed6fa-bed2-40f0-a333-237661ba3dd2 85960 0 2023-07-27 01:59:18 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-07-27 01:59:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-07-27 01:59:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:172.21.23.194,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[172.21.23.194],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} + Jul 27 01:59:18.989: INFO: Found Service test-service-8cjbd in namespace services-7033 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Jul 27 01:59:18.989: INFO: Service test-service-8cjbd has service status updated + STEP: patching the service 07/27/23 01:59:18.989 + STEP: watching for the Service to be patched 07/27/23 01:59:19.007 + Jul 27 01:59:19.012: INFO: observed Service test-service-8cjbd in namespace services-7033 with labels: map[test-service-static:true] + Jul 27 01:59:19.012: INFO: observed Service test-service-8cjbd in namespace services-7033 with labels: map[test-service-static:true] + Jul 27 01:59:19.012: INFO: observed Service test-service-8cjbd in namespace services-7033 with labels: map[test-service-static:true] + Jul 27 01:59:19.012: INFO: Found Service test-service-8cjbd in namespace services-7033 with labels: map[test-service:patched test-service-static:true] + Jul 27 
01:59:19.012: INFO: Service test-service-8cjbd patched + STEP: deleting the service 07/27/23 01:59:19.012 + STEP: watching for the Service to be deleted 07/27/23 01:59:19.066 + Jul 27 01:59:19.070: INFO: Observed event: ADDED + Jul 27 01:59:19.070: INFO: Observed event: MODIFIED + Jul 27 01:59:19.071: INFO: Observed event: MODIFIED + Jul 27 01:59:19.071: INFO: Observed event: MODIFIED + Jul 27 01:59:19.071: INFO: Found Service test-service-8cjbd in namespace services-7033 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] + Jul 27 01:59:19.071: INFO: Service test-service-8cjbd deleted + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:24:26.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + Jul 27 01:59:19.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "namespaces-4561" for this suite. 06/12/23 21:24:26.268 - STEP: Destroying namespace "e2e-ns-n8wnj-4888" for this suite. 06/12/23 21:24:26.284 + STEP: Destroying namespace "services-7033" for this suite. 07/27/23 01:59:19.08 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-network] Services - should serve a basic endpoint from pods [Conformance] - test/e2e/network/service.go:787 -[BeforeEach] [sig-network] Services +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 +[BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:24:26.328 -Jun 12 21:24:26.328: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:24:26.331 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:24:26.373 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:24:26.384 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 01:59:19.102 +Jul 27 01:59:19.102: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 01:59:19.103 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:19.159 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:19.169 +[BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should serve a basic endpoint from pods [Conformance] - test/e2e/network/service.go:787 -STEP: creating service endpoint-test2 in namespace services-8372 06/12/23 21:24:26.4 -STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8372 to expose endpoints map[] 06/12/23 21:24:26.444 -Jun 12 21:24:26.454: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found -Jun 12 
21:24:27.489: INFO: successfully validated that service endpoint-test2 in namespace services-8372 exposes endpoints map[] -STEP: Creating pod pod1 in namespace services-8372 06/12/23 21:24:27.489 -Jun 12 21:24:27.525: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-8372" to be "running and ready" -Jun 12 21:24:27.537: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.732641ms -Jun 12 21:24:27.537: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:24:29.552: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026742978s -Jun 12 21:24:29.552: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:24:31.561: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 4.035634765s -Jun 12 21:24:31.561: INFO: The phase of Pod pod1 is Running (Ready = true) -Jun 12 21:24:31.561: INFO: Pod "pod1" satisfied condition "running and ready" -STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8372 to expose endpoints map[pod1:[80]] 06/12/23 21:24:31.592 -Jun 12 21:24:31.645: INFO: successfully validated that service endpoint-test2 in namespace services-8372 exposes endpoints map[pod1:[80]] -STEP: Checking if the Service forwards traffic to pod1 06/12/23 21:24:31.646 -Jun 12 21:24:31.646: INFO: Creating new exec pod -Jun 12 21:24:31.674: INFO: Waiting up to 5m0s for pod "execpod89fjc" in namespace "services-8372" to be "running" -Jun 12 21:24:31.704: INFO: Pod "execpod89fjc": Phase="Pending", Reason="", readiness=false. Elapsed: 29.530729ms -Jun 12 21:24:33.715: INFO: Pod "execpod89fjc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040727406s -Jun 12 21:24:35.716: INFO: Pod "execpod89fjc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042128s -Jun 12 21:24:37.728: INFO: Pod "execpod89fjc": Phase="Running", Reason="", readiness=true. Elapsed: 6.054239425s -Jun 12 21:24:37.729: INFO: Pod "execpod89fjc" satisfied condition "running" -Jun 12 21:24:38.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' -Jun 12 21:24:39.950: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" -Jun 12 21:24:39.950: INFO: stdout: "" -Jun 12 21:24:39.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 172.21.142.154 80' -Jun 12 21:24:40.882: INFO: stderr: "+ nc -v -z -w 2 172.21.142.154 80\nConnection to 172.21.142.154 80 port [tcp/http] succeeded!\n" -Jun 12 21:24:40.882: INFO: stdout: "" -STEP: Creating pod pod2 in namespace services-8372 06/12/23 21:24:40.882 -Jun 12 21:24:40.903: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-8372" to be "running and ready" -Jun 12 21:24:40.911: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.733817ms -Jun 12 21:24:40.911: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:24:42.923: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019710083s -Jun 12 21:24:42.923: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:24:44.923: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.019438381s -Jun 12 21:24:44.923: INFO: The phase of Pod pod2 is Running (Ready = true) -Jun 12 21:24:44.923: INFO: Pod "pod2" satisfied condition "running and ready" -STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8372 to expose endpoints map[pod1:[80] pod2:[80]] 06/12/23 21:24:44.931 -Jun 12 21:24:44.969: INFO: successfully validated that service endpoint-test2 in namespace services-8372 exposes endpoints map[pod1:[80] pod2:[80]] -STEP: Checking if the Service forwards traffic to pod1 and pod2 06/12/23 21:24:44.97 -Jun 12 21:24:45.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' -Jun 12 21:24:46.589: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" -Jun 12 21:24:46.589: INFO: stdout: "" -Jun 12 21:24:46.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 172.21.142.154 80' -Jun 12 21:24:47.035: INFO: stderr: "+ nc -v -z -w 2 172.21.142.154 80\nConnection to 172.21.142.154 80 port [tcp/http] succeeded!\n" -Jun 12 21:24:47.035: INFO: stdout: "" -STEP: Deleting pod pod1 in namespace services-8372 06/12/23 21:24:47.035 -STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8372 to expose endpoints map[pod2:[80]] 06/12/23 21:24:47.063 -Jun 12 21:24:48.123: INFO: successfully validated that service endpoint-test2 in namespace services-8372 exposes endpoints map[pod2:[80]] -STEP: Checking if the Service forwards traffic to pod2 06/12/23 21:24:48.123 -Jun 12 21:24:49.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' -Jun 12 21:24:49.912: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" -Jun 12 21:24:49.912: INFO: stdout: "" -Jun 12 21:24:49.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 172.21.142.154 80' -Jun 12 21:24:50.330: INFO: stderr: "+ nc -v -z -w 2 172.21.142.154 80\nConnection to 172.21.142.154 80 port [tcp/http] succeeded!\n" -Jun 12 21:24:50.330: INFO: stdout: "" -STEP: Deleting pod pod2 in namespace services-8372 06/12/23 21:24:50.33 -STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8372 to expose endpoints map[] 06/12/23 21:24:50.373 -Jun 12 21:24:51.464: INFO: successfully validated that service endpoint-test2 in namespace services-8372 exposes endpoints map[] -[AfterEach] [sig-network] Services +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 +STEP: Creating configMap with name projected-configmap-test-volume-f407a6b6-e980-4604-8c32-d6ffcd129b0f 07/27/23 01:59:19.183 +STEP: Creating a pod to test consume configMaps 07/27/23 01:59:19.211 +Jul 27 01:59:19.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b" in namespace "projected-8255" to be "Succeeded or Failed" +Jul 27 01:59:19.267: INFO: Pod "pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.556125ms +Jul 27 01:59:21.277: INFO: Pod "pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025356379s +Jul 27 01:59:23.277: INFO: Pod "pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025923645s +STEP: Saw pod success 07/27/23 01:59:23.277 +Jul 27 01:59:23.277: INFO: Pod "pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b" satisfied condition "Succeeded or Failed" +Jul 27 01:59:23.286: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b container projected-configmap-volume-test: +STEP: delete the pod 07/27/23 01:59:23.331 +Jul 27 01:59:23.355: INFO: Waiting for pod pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b to disappear +Jul 27 01:59:23.364: INFO: Pod pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b no longer exists +[AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 -Jun 12 21:24:51.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 01:59:23.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 -STEP: Destroying namespace "services-8372" for this suite. 06/12/23 21:24:51.53 +STEP: Destroying namespace "projected-8255" for this suite. 
07/27/23 01:59:23.386 ------------------------------ -• [SLOW TEST] [25.220 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should serve a basic endpoint from pods [Conformance] - test/e2e/network/service.go:787 +• [4.309 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:24:26.328 - Jun 12 21:24:26.328: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:24:26.331 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:24:26.373 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:24:26.384 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 01:59:19.102 + Jul 27 01:59:19.102: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 01:59:19.103 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:19.159 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:19.169 + [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should serve a basic endpoint from pods [Conformance] - test/e2e/network/service.go:787 - STEP: creating service endpoint-test2 in namespace services-8372 06/12/23 21:24:26.4 - STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8372 to expose endpoints map[] 06/12/23 21:24:26.444 - Jun 12 21:24:26.454: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found - Jun 12 21:24:27.489: INFO: successfully validated that service endpoint-test2 in namespace services-8372 exposes endpoints map[] - STEP: Creating pod pod1 in namespace services-8372 06/12/23 21:24:27.489 - Jun 12 21:24:27.525: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-8372" to be "running and ready" - Jun 12 21:24:27.537: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.732641ms - Jun 12 21:24:27.537: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:24:29.552: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026742978s - Jun 12 21:24:29.552: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:24:31.561: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.035634765s - Jun 12 21:24:31.561: INFO: The phase of Pod pod1 is Running (Ready = true) - Jun 12 21:24:31.561: INFO: Pod "pod1" satisfied condition "running and ready" - STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8372 to expose endpoints map[pod1:[80]] 06/12/23 21:24:31.592 - Jun 12 21:24:31.645: INFO: successfully validated that service endpoint-test2 in namespace services-8372 exposes endpoints map[pod1:[80]] - STEP: Checking if the Service forwards traffic to pod1 06/12/23 21:24:31.646 - Jun 12 21:24:31.646: INFO: Creating new exec pod - Jun 12 21:24:31.674: INFO: Waiting up to 5m0s for pod "execpod89fjc" in namespace "services-8372" to be "running" - Jun 12 21:24:31.704: INFO: Pod "execpod89fjc": Phase="Pending", Reason="", readiness=false. Elapsed: 29.530729ms - Jun 12 21:24:33.715: INFO: Pod "execpod89fjc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040727406s - Jun 12 21:24:35.716: INFO: Pod "execpod89fjc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042128s - Jun 12 21:24:37.728: INFO: Pod "execpod89fjc": Phase="Running", Reason="", readiness=true. Elapsed: 6.054239425s - Jun 12 21:24:37.729: INFO: Pod "execpod89fjc" satisfied condition "running" - Jun 12 21:24:38.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' - Jun 12 21:24:39.950: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" - Jun 12 21:24:39.950: INFO: stdout: "" - Jun 12 21:24:39.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 172.21.142.154 80' - Jun 12 21:24:40.882: INFO: stderr: "+ nc -v -z -w 2 172.21.142.154 80\nConnection to 172.21.142.154 80 port [tcp/http] succeeded!\n" - Jun 12 21:24:40.882: INFO: stdout: "" - STEP: Creating pod pod2 in namespace services-8372 06/12/23 21:24:40.882 - Jun 12 21:24:40.903: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-8372" to be "running and ready" - Jun 12 21:24:40.911: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.733817ms - Jun 12 21:24:40.911: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:24:42.923: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019710083s - Jun 12 21:24:42.923: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:24:44.923: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.019438381s - Jun 12 21:24:44.923: INFO: The phase of Pod pod2 is Running (Ready = true) - Jun 12 21:24:44.923: INFO: Pod "pod2" satisfied condition "running and ready" - STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8372 to expose endpoints map[pod1:[80] pod2:[80]] 06/12/23 21:24:44.931 - Jun 12 21:24:44.969: INFO: successfully validated that service endpoint-test2 in namespace services-8372 exposes endpoints map[pod1:[80] pod2:[80]] - STEP: Checking if the Service forwards traffic to pod1 and pod2 06/12/23 21:24:44.97 - Jun 12 21:24:45.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' - Jun 12 21:24:46.589: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" - Jun 12 21:24:46.589: INFO: stdout: "" - Jun 12 21:24:46.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 172.21.142.154 80' - Jun 12 21:24:47.035: INFO: stderr: "+ nc -v -z -w 2 172.21.142.154 80\nConnection to 172.21.142.154 80 port [tcp/http] succeeded!\n" - Jun 12 21:24:47.035: INFO: stdout: "" - STEP: Deleting pod pod1 in namespace services-8372 06/12/23 21:24:47.035 - STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8372 to expose endpoints map[pod2:[80]] 06/12/23 21:24:47.063 - Jun 12 21:24:48.123: INFO: successfully validated that service endpoint-test2 in namespace services-8372 exposes endpoints map[pod2:[80]] - STEP: Checking if the Service forwards traffic to pod2 06/12/23 21:24:48.123 - Jun 12 21:24:49.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' - Jun 12 21:24:49.912: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" - Jun 12 21:24:49.912: INFO: stdout: "" - Jun 12 21:24:49.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8372 exec execpod89fjc -- /bin/sh -x -c nc -v -z -w 2 172.21.142.154 80' - Jun 12 21:24:50.330: INFO: stderr: "+ nc -v -z -w 2 172.21.142.154 80\nConnection to 172.21.142.154 80 port [tcp/http] succeeded!\n" - Jun 12 21:24:50.330: INFO: stdout: "" - STEP: Deleting pod pod2 in namespace services-8372 06/12/23 21:24:50.33 - STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8372 to expose endpoints map[] 06/12/23 21:24:50.373 - Jun 12 21:24:51.464: INFO: successfully validated that service endpoint-test2 in namespace services-8372 exposes endpoints map[] - [AfterEach] [sig-network] Services + [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 + STEP: Creating configMap with name projected-configmap-test-volume-f407a6b6-e980-4604-8c32-d6ffcd129b0f 07/27/23 01:59:19.183 + STEP: Creating a pod to test consume configMaps 07/27/23 01:59:19.211 + Jul 27 01:59:19.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b" in namespace "projected-8255" to be "Succeeded or Failed" + Jul 27 01:59:19.267: INFO: Pod "pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.556125ms + Jul 27 01:59:21.277: INFO: Pod "pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025356379s + Jul 27 01:59:23.277: INFO: Pod "pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025923645s + STEP: Saw pod success 07/27/23 01:59:23.277 + Jul 27 01:59:23.277: INFO: Pod "pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b" satisfied condition "Succeeded or Failed" + Jul 27 01:59:23.286: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b container projected-configmap-volume-test: + STEP: delete the pod 07/27/23 01:59:23.331 + Jul 27 01:59:23.355: INFO: Waiting for pod pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b to disappear + Jul 27 01:59:23.364: INFO: Pod pod-projected-configmaps-1e5b95e6-6382-4492-867b-6cd83385f73b no longer exists + [AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 - Jun 12 21:24:51.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 01:59:23.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 - STEP: Destroying namespace "services-8372" for this suite. 06/12/23 21:24:51.53 + STEP: Destroying namespace "projected-8255" for this suite. 
07/27/23 01:59:23.386 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SSSSSSSSSSSSS ------------------------------ -[sig-node] Secrets - should patch a secret [Conformance] - test/e2e/common/node/secrets.go:154 -[BeforeEach] [sig-node] Secrets +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 +[BeforeEach] [sig-node] Security Context set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:24:51.551 -Jun 12 21:24:51.551: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 21:24:51.554 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:24:51.604 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:24:51.614 -[BeforeEach] [sig-node] Secrets +STEP: Creating a kubernetes client 07/27/23 01:59:23.412 +Jul 27 01:59:23.412: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename security-context 07/27/23 01:59:23.413 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:23.457 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:23.466 +[BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 -[It] should patch a secret [Conformance] - test/e2e/common/node/secrets.go:154 -STEP: creating a secret 06/12/23 21:24:51.635 -STEP: listing secrets in all namespaces to ensure that there are more than zero 06/12/23 21:24:51.651 -STEP: patching the secret 06/12/23 21:24:51.913 -STEP: deleting the secret using a LabelSelector 06/12/23 21:24:51.936 -STEP: listing secrets in all namespaces, searching for label name and value in patch 06/12/23 21:24:51.972 -[AfterEach] [sig-node] Secrets +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 07/27/23 01:59:23.477 +W0727 01:59:23.504664 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 01:59:23.504: INFO: Waiting up to 5m0s for pod "security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08" in namespace "security-context-7740" to be "Succeeded or Failed" +Jul 27 01:59:23.514: INFO: Pod "security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08": Phase="Pending", Reason="", readiness=false. Elapsed: 9.817111ms +Jul 27 01:59:25.542: INFO: Pod "security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03802657s +Jul 27 01:59:27.524: INFO: Pod "security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019855385s +STEP: Saw pod success 07/27/23 01:59:27.524 +Jul 27 01:59:27.524: INFO: Pod "security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08" satisfied condition "Succeeded or Failed" +Jul 27 01:59:27.534: INFO: Trying to get logs from node 10.245.128.19 pod security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08 container test-container: +STEP: delete the pod 07/27/23 01:59:27.559 +Jul 27 01:59:27.580: INFO: Waiting for pod security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08 to disappear +Jul 27 01:59:27.589: INFO: Pod security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08 no longer exists +[AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 -Jun 12 21:24:52.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Secrets +Jul 27 01:59:27.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Secrets +[DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Secrets +[DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-6552" for this suite. 06/12/23 21:24:52.186 +STEP: Destroying namespace "security-context-7740" for this suite. 07/27/23 01:59:27.603 ------------------------------ -• [0.647 seconds] -[sig-node] Secrets -test/e2e/common/node/framework.go:23 - should patch a secret [Conformance] - test/e2e/common/node/secrets.go:154 +• [4.214 seconds] +[sig-node] Security Context +test/e2e/node/framework.go:23 + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Secrets + [BeforeEach] [sig-node] Security Context set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:24:51.551 - Jun 12 21:24:51.551: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 21:24:51.554 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:24:51.604 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:24:51.614 - [BeforeEach] [sig-node] Secrets + STEP: Creating a kubernetes client 07/27/23 01:59:23.412 + Jul 27 01:59:23.412: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename security-context 07/27/23 01:59:23.413 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:23.457 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:23.466 + [BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 - [It] should patch a secret [Conformance] - test/e2e/common/node/secrets.go:154 - STEP: creating a secret 06/12/23 21:24:51.635 - STEP: listing secrets in all namespaces to ensure that there are more than zero 06/12/23 21:24:51.651 - STEP: patching the secret 06/12/23 21:24:51.913 - STEP: deleting the secret using a LabelSelector 06/12/23 21:24:51.936 - STEP: listing secrets in all namespaces, searching for label name and value in patch 06/12/23 21:24:51.972 - [AfterEach] [sig-node] Secrets + [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + 
test/e2e/node/security_context.go:164 + STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 07/27/23 01:59:23.477 + W0727 01:59:23.504664 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 01:59:23.504: INFO: Waiting up to 5m0s for pod "security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08" in namespace "security-context-7740" to be "Succeeded or Failed" + Jul 27 01:59:23.514: INFO: Pod "security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08": Phase="Pending", Reason="", readiness=false. Elapsed: 9.817111ms + Jul 27 01:59:25.542: INFO: Pod "security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03802657s + Jul 27 01:59:27.524: INFO: Pod "security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019855385s + STEP: Saw pod success 07/27/23 01:59:27.524 + Jul 27 01:59:27.524: INFO: Pod "security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08" satisfied condition "Succeeded or Failed" + Jul 27 01:59:27.534: INFO: Trying to get logs from node 10.245.128.19 pod security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08 container test-container: + STEP: delete the pod 07/27/23 01:59:27.559 + Jul 27 01:59:27.580: INFO: Waiting for pod security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08 to disappear + Jul 27 01:59:27.589: INFO: Pod security-context-397c8291-14a2-4ca0-9ee7-30d6cfc8dc08 no longer exists + [AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 - Jun 12 21:24:52.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Secrets + Jul 27 01:59:27.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Secrets + [DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Secrets + [DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-6552" for this suite. 06/12/23 21:24:52.186 + STEP: Destroying namespace "security-context-7740" for this suite. 
07/27/23 01:59:27.603 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSS ------------------------------ -[sig-node] Downward API - should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:217 -[BeforeEach] [sig-node] Downward API +[sig-storage] CSIInlineVolumes + should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 +[BeforeEach] [sig-storage] CSIInlineVolumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:24:52.203 -Jun 12 21:24:52.203: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 21:24:52.205 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:24:52.246 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:24:52.256 -[BeforeEach] [sig-node] Downward API +STEP: Creating a kubernetes client 07/27/23 01:59:27.626 +Jul 27 01:59:27.627: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename csiinlinevolumes 07/27/23 01:59:27.627 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:27.664 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:27.673 +[BeforeEach] [sig-storage] CSIInlineVolumes test/e2e/framework/metrics/init/init.go:31 -[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:217 -STEP: Creating a pod to test downward api env vars 06/12/23 21:24:52.27 -Jun 12 21:24:52.298: INFO: Waiting up to 5m0s for pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a" in namespace "downward-api-5790" to be "Succeeded or Failed" -Jun 12 21:24:52.333: INFO: Pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.98767ms -Jun 12 21:24:54.343: INFO: Pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043979032s -Jun 12 21:24:56.344: INFO: Pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044908558s -Jun 12 21:24:58.343: INFO: Pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.044823213s -STEP: Saw pod success 06/12/23 21:24:58.344 -Jun 12 21:24:58.344: INFO: Pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a" satisfied condition "Succeeded or Failed" -Jun 12 21:24:58.355: INFO: Trying to get logs from node 10.138.75.70 pod downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a container dapi-container: -STEP: delete the pod 06/12/23 21:24:58.443 -Jun 12 21:24:58.491: INFO: Waiting for pod downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a to disappear -Jun 12 21:24:58.504: INFO: Pod downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a no longer exists -[AfterEach] [sig-node] Downward API +[It] should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 +STEP: creating 07/27/23 01:59:27.682 +STEP: getting 07/27/23 01:59:27.729 +STEP: listing 07/27/23 01:59:27.751 +STEP: deleting 07/27/23 01:59:27.769 +[AfterEach] [sig-storage] CSIInlineVolumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:24:58.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Downward API +Jul 27 01:59:27.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Downward API +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Downward API +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-5790" for this suite. 06/12/23 21:24:58.524 +STEP: Destroying namespace "csiinlinevolumes-321" for this suite. 07/27/23 01:59:27.825 ------------------------------ -• [SLOW TEST] [6.340 seconds] -[sig-node] Downward API -test/e2e/common/node/framework.go:23 - should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:217 +• [0.220 seconds] +[sig-storage] CSIInlineVolumes +test/e2e/storage/utils/framework.go:23 + should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Downward API + [BeforeEach] [sig-storage] CSIInlineVolumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:24:52.203 - Jun 12 21:24:52.203: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 21:24:52.205 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:24:52.246 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:24:52.256 - [BeforeEach] [sig-node] Downward API + STEP: Creating a kubernetes client 07/27/23 01:59:27.626 + Jul 27 01:59:27.627: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename csiinlinevolumes 07/27/23 01:59:27.627 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:27.664 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:27.673 + [BeforeEach] [sig-storage] CSIInlineVolumes test/e2e/framework/metrics/init/init.go:31 - [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:217 - STEP: Creating a pod to test downward api env vars 06/12/23 21:24:52.27 - Jun 12 21:24:52.298: 
INFO: Waiting up to 5m0s for pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a" in namespace "downward-api-5790" to be "Succeeded or Failed" - Jun 12 21:24:52.333: INFO: Pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.98767ms - Jun 12 21:24:54.343: INFO: Pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043979032s - Jun 12 21:24:56.344: INFO: Pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044908558s - Jun 12 21:24:58.343: INFO: Pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044823213s - STEP: Saw pod success 06/12/23 21:24:58.344 - Jun 12 21:24:58.344: INFO: Pod "downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a" satisfied condition "Succeeded or Failed" - Jun 12 21:24:58.355: INFO: Trying to get logs from node 10.138.75.70 pod downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a container dapi-container: - STEP: delete the pod 06/12/23 21:24:58.443 - Jun 12 21:24:58.491: INFO: Waiting for pod downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a to disappear - Jun 12 21:24:58.504: INFO: Pod downward-api-34e7e485-3126-425f-8e65-d5fe746e0e2a no longer exists - [AfterEach] [sig-node] Downward API + [It] should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 + STEP: creating 07/27/23 01:59:27.682 + STEP: getting 07/27/23 01:59:27.729 + STEP: listing 07/27/23 01:59:27.751 + STEP: deleting 07/27/23 01:59:27.769 + [AfterEach] [sig-storage] CSIInlineVolumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:24:58.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Downward API + Jul 27 01:59:27.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Downward API + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Downward API + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-5790" for this suite. 06/12/23 21:24:58.524 + STEP: Destroying namespace "csiinlinevolumes-321" for this suite. 
07/27/23 01:59:27.825 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-scheduling] SchedulerPreemption [Serial] - validates basic preemption works [Conformance] - test/e2e/scheduling/preemption.go:130 -[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2191 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:24:58.556 -Jun 12 21:24:58.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sched-preemption 06/12/23 21:24:58.558 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:24:58.6 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:24:58.675 -[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] +STEP: Creating a kubernetes client 07/27/23 01:59:27.848 +Jul 27 01:59:27.848: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 01:59:27.848 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:27.887 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:27.896 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] - test/e2e/scheduling/preemption.go:97 -Jun 12 21:24:58.773: INFO: Waiting up to 1m0s for all nodes to be ready -Jun 12 21:25:58.979: INFO: Waiting for terminating namespaces to be deleted... -[It] validates basic preemption works [Conformance] - test/e2e/scheduling/preemption.go:130 -STEP: Create pods that use 4/5 of node resources. 06/12/23 21:25:58.995 -Jun 12 21:25:59.068: INFO: Created pod: pod0-0-sched-preemption-low-priority -Jun 12 21:25:59.090: INFO: Created pod: pod0-1-sched-preemption-medium-priority -Jun 12 21:25:59.151: INFO: Created pod: pod1-0-sched-preemption-medium-priority -Jun 12 21:25:59.176: INFO: Created pod: pod1-1-sched-preemption-medium-priority -Jun 12 21:25:59.259: INFO: Created pod: pod2-0-sched-preemption-medium-priority -Jun 12 21:25:59.280: INFO: Created pod: pod2-1-sched-preemption-medium-priority -STEP: Wait for pods to be scheduled. 06/12/23 21:25:59.281 -Jun 12 21:25:59.281: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-4498" to be "running" -Jun 12 21:25:59.290: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.292464ms -Jun 12 21:26:01.301: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019863459s -Jun 12 21:26:03.302: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.02003819s -Jun 12 21:26:03.302: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" -Jun 12 21:26:03.302: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-4498" to be "running" -Jun 12 21:26:03.311: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.574535ms -Jun 12 21:26:03.311: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" -Jun 12 21:26:03.311: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-4498" to be "running" -Jun 12 21:26:03.321: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 10.498669ms -Jun 12 21:26:03.321: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" -Jun 12 21:26:03.321: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-4498" to be "running" -Jun 12 21:26:03.330: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.817344ms -Jun 12 21:26:03.330: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" -Jun 12 21:26:03.330: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-4498" to be "running" -Jun 12 21:26:03.342: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 11.89251ms -Jun 12 21:26:03.342: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" -Jun 12 21:26:03.342: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-4498" to be "running" -Jun 12 21:26:03.363: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 20.662603ms -Jun 12 21:26:03.363: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" -STEP: Run a high priority pod that has same requirements as that of lower priority pod 06/12/23 21:26:03.363 -Jun 12 21:26:03.396: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-4498" to be "running" -Jun 12 21:26:03.404: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240988ms -Jun 12 21:26:05.414: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017921695s -Jun 12 21:26:07.416: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019422731s -Jun 12 21:26:09.416: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019756394s -Jun 12 21:26:11.447: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.050726702s -Jun 12 21:26:11.447: INFO: Pod "preemptor-pod" satisfied condition "running" -[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2191 +STEP: creating service in namespace services-7955 07/27/23 01:59:27.906 +STEP: creating service affinity-clusterip in namespace services-7955 07/27/23 01:59:27.906 +STEP: creating replication controller affinity-clusterip in namespace services-7955 07/27/23 01:59:27.958 +I0727 01:59:27.993618 20 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-7955, replica count: 3 +I0727 01:59:31.044616 20 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jul 27 01:59:31.074: INFO: Creating new exec pod +Jul 27 01:59:31.098: INFO: Waiting up to 5m0s for pod "execpod-affinityfssmx" in namespace "services-7955" to be "running" +Jul 27 01:59:31.107: INFO: Pod "execpod-affinityfssmx": Phase="Pending", Reason="", readiness=false. Elapsed: 9.226868ms +Jul 27 01:59:33.121: INFO: Pod "execpod-affinityfssmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023243247s +Jul 27 01:59:35.116: INFO: Pod "execpod-affinityfssmx": Phase="Running", Reason="", readiness=true. Elapsed: 4.018502297s +Jul 27 01:59:35.116: INFO: Pod "execpod-affinityfssmx" satisfied condition "running" +Jul 27 01:59:36.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7955 exec execpod-affinityfssmx -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip 80' +Jul 27 01:59:36.316: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Jul 27 01:59:36.316: INFO: stdout: "" +Jul 27 01:59:36.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7955 exec execpod-affinityfssmx -- /bin/sh -x -c nc -v -z -w 2 172.21.128.78 80' +Jul 27 01:59:36.565: INFO: stderr: "+ nc -v -z -w 2 172.21.128.78 80\nConnection to 172.21.128.78 80 port [tcp/http] succeeded!\n" +Jul 27 01:59:36.565: INFO: stdout: "" +Jul 27 01:59:36.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7955 exec execpod-affinityfssmx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.128.78:80/ ; done' +Jul 27 01:59:36.852: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q 
-s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n" +Jul 27 01:59:36.852: INFO: stdout: "\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr" +Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr +Jul 27 01:59:36.853: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-7955, will wait for the garbage collector to delete the pods 07/27/23 01:59:36.902 +Jul 27 01:59:36.991: INFO: Deleting ReplicationController affinity-clusterip took: 23.014647ms +Jul 27 01:59:37.091: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.743502ms +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:26:11.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] - test/e2e/scheduling/preemption.go:84 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] +Jul 27 01:59:39.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "sched-preemption-4498" for this suite. 06/12/23 21:26:11.838 +STEP: Destroying namespace "services-7955" for this suite. 
07/27/23 01:59:39.663 ------------------------------ -• [SLOW TEST] [73.302 seconds] -[sig-scheduling] SchedulerPreemption [Serial] -test/e2e/scheduling/framework.go:40 - validates basic preemption works [Conformance] - test/e2e/scheduling/preemption.go:130 +• [SLOW TEST] [11.838 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2191 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:24:58.556 - Jun 12 21:24:58.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sched-preemption 06/12/23 21:24:58.558 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:24:58.6 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:24:58.675 - [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + STEP: Creating a kubernetes client 07/27/23 01:59:27.848 + Jul 27 01:59:27.848: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 01:59:27.848 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:27.887 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:27.896 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] - test/e2e/scheduling/preemption.go:97 - Jun 12 21:24:58.773: INFO: Waiting up to 1m0s for all nodes to be ready - Jun 12 21:25:58.979: INFO: Waiting for terminating namespaces to be deleted... - [It] validates basic preemption works [Conformance] - test/e2e/scheduling/preemption.go:130 - STEP: Create pods that use 4/5 of node resources. 06/12/23 21:25:58.995 - Jun 12 21:25:59.068: INFO: Created pod: pod0-0-sched-preemption-low-priority - Jun 12 21:25:59.090: INFO: Created pod: pod0-1-sched-preemption-medium-priority - Jun 12 21:25:59.151: INFO: Created pod: pod1-0-sched-preemption-medium-priority - Jun 12 21:25:59.176: INFO: Created pod: pod1-1-sched-preemption-medium-priority - Jun 12 21:25:59.259: INFO: Created pod: pod2-0-sched-preemption-medium-priority - Jun 12 21:25:59.280: INFO: Created pod: pod2-1-sched-preemption-medium-priority - STEP: Wait for pods to be scheduled. 06/12/23 21:25:59.281 - Jun 12 21:25:59.281: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-4498" to be "running" - Jun 12 21:25:59.290: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.292464ms - Jun 12 21:26:01.301: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019863459s - Jun 12 21:26:03.302: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.02003819s - Jun 12 21:26:03.302: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" - Jun 12 21:26:03.302: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-4498" to be "running" - Jun 12 21:26:03.311: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.574535ms - Jun 12 21:26:03.311: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" - Jun 12 21:26:03.311: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-4498" to be "running" - Jun 12 21:26:03.321: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 10.498669ms - Jun 12 21:26:03.321: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" - Jun 12 21:26:03.321: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-4498" to be "running" - Jun 12 21:26:03.330: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.817344ms - Jun 12 21:26:03.330: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" - Jun 12 21:26:03.330: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-4498" to be "running" - Jun 12 21:26:03.342: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 11.89251ms - Jun 12 21:26:03.342: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" - Jun 12 21:26:03.342: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-4498" to be "running" - Jun 12 21:26:03.363: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 20.662603ms - Jun 12 21:26:03.363: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" - STEP: Run a high priority pod that has same requirements as that of lower priority pod 06/12/23 21:26:03.363 - Jun 12 21:26:03.396: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-4498" to be "running" - Jun 12 21:26:03.404: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240988ms - Jun 12 21:26:05.414: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017921695s - Jun 12 21:26:07.416: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019422731s - Jun 12 21:26:09.416: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019756394s - Jun 12 21:26:11.447: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.050726702s - Jun 12 21:26:11.447: INFO: Pod "preemptor-pod" satisfied condition "running" - [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2191 + STEP: creating service in namespace services-7955 07/27/23 01:59:27.906 + STEP: creating service affinity-clusterip in namespace services-7955 07/27/23 01:59:27.906 + STEP: creating replication controller affinity-clusterip in namespace services-7955 07/27/23 01:59:27.958 + I0727 01:59:27.993618 20 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-7955, replica count: 3 + I0727 01:59:31.044616 20 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jul 27 01:59:31.074: INFO: Creating new exec pod + Jul 27 01:59:31.098: INFO: Waiting up to 5m0s for pod "execpod-affinityfssmx" in namespace "services-7955" to be "running" + Jul 27 01:59:31.107: INFO: Pod "execpod-affinityfssmx": Phase="Pending", Reason="", readiness=false. Elapsed: 9.226868ms + Jul 27 01:59:33.121: INFO: Pod "execpod-affinityfssmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023243247s + Jul 27 01:59:35.116: INFO: Pod "execpod-affinityfssmx": Phase="Running", Reason="", readiness=true. Elapsed: 4.018502297s + Jul 27 01:59:35.116: INFO: Pod "execpod-affinityfssmx" satisfied condition "running" + Jul 27 01:59:36.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7955 exec execpod-affinityfssmx -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip 80' + Jul 27 01:59:36.316: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" + Jul 27 01:59:36.316: INFO: stdout: "" + Jul 27 01:59:36.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7955 exec execpod-affinityfssmx -- /bin/sh -x -c nc -v -z -w 2 172.21.128.78 80' + Jul 27 01:59:36.565: INFO: stderr: "+ nc -v -z -w 2 172.21.128.78 80\nConnection to 172.21.128.78 80 port [tcp/http] succeeded!\n" + Jul 27 01:59:36.565: INFO: stdout: "" + Jul 27 01:59:36.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7955 exec execpod-affinityfssmx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.128.78:80/ ; done' + Jul 27 01:59:36.852: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.128.78:80/\n" + Jul 27 01:59:36.852: INFO: stdout: "\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr\naffinity-clusterip-x57qr" + Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.852: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.853: INFO: Received response from host: affinity-clusterip-x57qr + Jul 27 01:59:36.853: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-clusterip in namespace services-7955, will wait for the garbage collector to delete the pods 07/27/23 01:59:36.902 + Jul 27 01:59:36.991: INFO: Deleting ReplicationController affinity-clusterip took: 23.014647ms + Jul 27 01:59:37.091: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.743502ms + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:26:11.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] - test/e2e/scheduling/preemption.go:84 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + Jul 27 01:59:39.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "sched-preemption-4498" for this suite. 06/12/23 21:26:11.838 + STEP: Destroying namespace "services-7955" for this suite. 
07/27/23 01:59:39.663 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook - should execute poststart http hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:167 -[BeforeEach] [sig-node] Container Lifecycle Hook +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 +[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:26:11.863 -Jun 12 21:26:11.864: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-lifecycle-hook 06/12/23 21:26:11.868 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:26:11.908 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:26:11.925 -[BeforeEach] [sig-node] Container Lifecycle Hook +STEP: Creating a kubernetes client 07/27/23 01:59:39.687 +Jul 27 01:59:39.687: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename cronjob 07/27/23 01:59:39.688 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:39.732 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:39.741 +[BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:77 -STEP: create the container to handle the HTTPGet hook request. 06/12/23 21:26:11.994 -Jun 12 21:26:12.021: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-571" to be "running and ready" -Jun 12 21:26:12.050: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 29.510817ms -Jun 12 21:26:12.051: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:26:14.060: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039250743s -Jun 12 21:26:14.060: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:26:16.062: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 4.040685688s -Jun 12 21:26:16.062: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) -Jun 12 21:26:16.062: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" -[It] should execute poststart http hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:167 -STEP: create the pod with lifecycle hook 06/12/23 21:26:16.079 -Jun 12 21:26:16.099: INFO: Waiting up to 5m0s for pod "pod-with-poststart-http-hook" in namespace "container-lifecycle-hook-571" to be "running and ready" -Jun 12 21:26:16.119: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 20.144811ms -Jun 12 21:26:16.119: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:26:18.134: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034952249s -Jun 12 21:26:18.134: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:26:20.134: INFO: Pod "pod-with-poststart-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 4.034531021s -Jun 12 21:26:20.134: INFO: The phase of Pod pod-with-poststart-http-hook is Running (Ready = true) -Jun 12 21:26:20.134: INFO: Pod "pod-with-poststart-http-hook" satisfied condition "running and ready" -STEP: check poststart hook 06/12/23 21:26:20.173 -STEP: delete the pod with lifecycle hook 06/12/23 21:26:20.241 -Jun 12 21:26:20.287: INFO: Waiting for pod pod-with-poststart-http-hook to disappear -Jun 12 21:26:20.302: INFO: Pod pod-with-poststart-http-hook still exists -Jun 12 21:26:22.306: INFO: Waiting for pod pod-with-poststart-http-hook to disappear -Jun 12 21:26:22.319: INFO: Pod pod-with-poststart-http-hook still exists -Jun 12 21:26:24.303: INFO: Waiting for pod pod-with-poststart-http-hook to disappear -Jun 12 21:26:24.330: INFO: Pod pod-with-poststart-http-hook no longer exists -[AfterEach] [sig-node] Container Lifecycle Hook +[It] should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 +STEP: Creating a ReplaceConcurrent cronjob 07/27/23 01:59:39.751 +W0727 01:59:39.772396 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Ensuring a job is scheduled 07/27/23 01:59:39.772 +STEP: Ensuring exactly one is scheduled 07/27/23 02:00:01.824 +STEP: Ensuring exactly one running job exists by listing jobs explicitly 07/27/23 02:00:01.844 +STEP: Ensuring the job is replaced with a new one 07/27/23 02:00:01.861 +STEP: Removing cronjob 07/27/23 02:01:01.874 +[AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 -Jun 12 21:26:24.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +Jul 27 02:01:01.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +[DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +[DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 -STEP: Destroying namespace "container-lifecycle-hook-571" for this suite. 06/12/23 21:26:24.346 +STEP: Destroying namespace "cronjob-6029" for this suite. 
07/27/23 02:01:01.912 ------------------------------ -• [SLOW TEST] [12.499 seconds] -[sig-node] Container Lifecycle Hook -test/e2e/common/node/framework.go:23 - when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:46 - should execute poststart http hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:167 +• [SLOW TEST] [82.257 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Container Lifecycle Hook + [BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:26:11.863 - Jun 12 21:26:11.864: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-lifecycle-hook 06/12/23 21:26:11.868 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:26:11.908 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:26:11.925 - [BeforeEach] [sig-node] Container Lifecycle Hook + STEP: Creating a kubernetes client 07/27/23 01:59:39.687 + Jul 27 01:59:39.687: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename cronjob 07/27/23 01:59:39.688 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 01:59:39.732 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 01:59:39.741 + [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:77 - STEP: create the container to handle the HTTPGet hook request. 06/12/23 21:26:11.994 - Jun 12 21:26:12.021: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-571" to be "running and ready" - Jun 12 21:26:12.050: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 29.510817ms - Jun 12 21:26:12.051: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:26:14.060: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039250743s - Jun 12 21:26:14.060: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:26:16.062: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 4.040685688s - Jun 12 21:26:16.062: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) - Jun 12 21:26:16.062: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" - [It] should execute poststart http hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:167 - STEP: create the pod with lifecycle hook 06/12/23 21:26:16.079 - Jun 12 21:26:16.099: INFO: Waiting up to 5m0s for pod "pod-with-poststart-http-hook" in namespace "container-lifecycle-hook-571" to be "running and ready" - Jun 12 21:26:16.119: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 20.144811ms - Jun 12 21:26:16.119: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:26:18.134: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034952249s - Jun 12 21:26:18.134: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:26:20.134: INFO: Pod "pod-with-poststart-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 4.034531021s - Jun 12 21:26:20.134: INFO: The phase of Pod pod-with-poststart-http-hook is Running (Ready = true) - Jun 12 21:26:20.134: INFO: Pod "pod-with-poststart-http-hook" satisfied condition "running and ready" - STEP: check poststart hook 06/12/23 21:26:20.173 - STEP: delete the pod with lifecycle hook 06/12/23 21:26:20.241 - Jun 12 21:26:20.287: INFO: Waiting for pod pod-with-poststart-http-hook to disappear - Jun 12 21:26:20.302: INFO: Pod pod-with-poststart-http-hook still exists - Jun 12 21:26:22.306: INFO: Waiting for pod pod-with-poststart-http-hook to disappear - Jun 12 21:26:22.319: INFO: Pod pod-with-poststart-http-hook still exists - Jun 12 21:26:24.303: INFO: Waiting for pod pod-with-poststart-http-hook to disappear - Jun 12 21:26:24.330: INFO: Pod pod-with-poststart-http-hook no longer exists - [AfterEach] [sig-node] Container Lifecycle Hook - test/e2e/framework/node/init/init.go:32 - Jun 12 21:26:24.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + [It] should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 + STEP: Creating a ReplaceConcurrent cronjob 07/27/23 01:59:39.751 + W0727 01:59:39.772396 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Ensuring a job is scheduled 07/27/23 01:59:39.772 + STEP: Ensuring exactly one is scheduled 07/27/23 02:00:01.824 + STEP: Ensuring exactly one running job exists by listing jobs explicitly 07/27/23 02:00:01.844 + STEP: Ensuring the job is replaced with a new one 07/27/23 02:00:01.861 + STEP: Removing cronjob 07/27/23 02:01:01.874 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 + Jul 27 02:01:01.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 - STEP: Destroying namespace "container-lifecycle-hook-571" for this suite. 06/12/23 21:26:24.346 + STEP: Destroying namespace "cronjob-6029" for this suite. 
07/27/23 02:01:01.912 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-scheduling] SchedulerPredicates [Serial] - validates that NodeSelector is respected if matching [Conformance] - test/e2e/scheduling/predicates.go:466 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:26:24.364 -Jun 12 21:26:24.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sched-pred 06/12/23 21:26:24.366 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:26:24.456 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:26:24.466 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] +STEP: Creating a kubernetes client 07/27/23 02:01:01.945 +Jul 27 02:01:01.945: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:01:01.946 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:01.993 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:02.006 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:97 -Jun 12 21:26:24.479: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready -Jun 12 21:26:24.516: INFO: Waiting for terminating namespaces to be deleted... -Jun 12 21:26:24.543: INFO: -Logging pods the apiserver thinks is on node 10.138.75.112 before test -Jun 12 21:26:24.596: INFO: calico-node-b9sdb from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.596: INFO: Container calico-node ready: true, restart count 0 -Jun 12 21:26:24.596: INFO: calico-typha-74d94b74f5-dc6td from calico-system started at 2023-06-12 17:53:09 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.596: INFO: Container calico-typha ready: true, restart count 0 -Jun 12 21:26:24.596: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.596: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 -Jun 12 21:26:24.596: INFO: ibm-keepalived-watcher-5hc6v from kube-system started at 2023-06-12 17:40:13 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: ibm-master-proxy-static-10.138.75.112 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: Container pause ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: ibmcloud-block-storage-driver-5zqmj from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: tuned-phslc from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container 
statuses recorded) -Jun 12 21:26:24.597: INFO: Container tuned ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: csi-snapshot-controller-7f8879b9ff-p456r from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container snapshot-controller ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: csi-snapshot-webhook-7bd9594b6d-bp5dr from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container webhook ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: console-5bf97c7949-w5sn5 from openshift-console started at 2023-06-12 18:01:02 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container console ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: downloads-8b57f44bb-55ss5 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container download-server ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: dns-default-hpnqj from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container dns ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: node-resolver-5st6j from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: image-registry-6c79bcf5c4-p7ss4 from openshift-image-registry started at 2023-06-12 18:00:30 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container registry ready: true, restart count 0 -Jun 12 21:26:24.597: INFO: node-ca-qm7sb from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.597: INFO: Container node-ca ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: ingress-canary-5qpcw from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: router-default-7d454f944c-62qgz from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container router ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: openshift-kube-proxy-b9xs9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: migrator-cfb6c8f7c-vx2tr from openshift-kube-storage-version-migrator started at 2023-06-12 17:55:28 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container migrator ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: community-operators-fm8cx from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container registry-server ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: redhat-operators-pr47d from openshift-marketplace started at 2023-06-12 19:05:36 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container registry-server ready: true, restart count 0 
-Jun 12 21:26:24.598: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-06-12 18:01:06 +0000 UTC (6 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container alertmanager ready: true, restart count 1 -Jun 12 21:26:24.598: INFO: Container alertmanager-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: kube-state-metrics-6ccfb58dc4-rgnnh from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container kube-state-metrics ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: node-exporter-r799t from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: prometheus-adapter-7c58c77c58-xfd55 from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container prometheus-adapter ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: prometheus-k8s-0 from openshift-monitoring started at 2023-06-12 18:01:32 +0000 UTC (6 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container prometheus ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container prometheus-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container thanos-sidecar ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: prometheus-operator-admission-webhook-5d679565bb-66wnf from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: thanos-querier-6497df7b9-djrsc from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container oauth-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 21:26:24.598: INFO: Container thanos-query ready: true, restart count 0 -Jun 12 21:26:24.599: INFO: multus-additional-cni-plugins-zpr6c from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.599: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 21:26:24.599: INFO: multus-q452d from 
openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.599: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 21:26:24.599: INFO: network-metrics-daemon-vx56x from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.599: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.599: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 21:26:24.599: INFO: network-check-target-lfvfw from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.599: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 21:26:24.599: INFO: network-operator-5498bf7dc6-xv8r2 from openshift-network-operator started at 2023-06-12 17:47:21 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.599: INFO: Container network-operator ready: true, restart count 1 -Jun 12 21:26:24.599: INFO: collect-profiles-28110060-nx85j from openshift-operator-lifecycle-manager started at 2023-06-12 21:00:00 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.599: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 21:26:24.599: INFO: collect-profiles-28110075-c42rr from openshift-operator-lifecycle-manager started at 2023-06-12 21:15:00 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.599: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 21:26:24.599: INFO: packageserver-7f8bd8c95b-fgfhz from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.599: INFO: Container packageserver ready: true, restart count 0 -Jun 12 21:26:24.599: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.599: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 21:26:24.599: INFO: Container systemd-logs ready: true, restart count 0 -Jun 12 21:26:24.599: INFO: -Logging pods the apiserver thinks is on node 10.138.75.116 before test -Jun 12 21:26:24.688: INFO: calico-kube-controllers-58944988fc-kv6pq from calico-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.688: INFO: Container calico-kube-controllers ready: true, restart count 0 -Jun 12 21:26:24.688: INFO: calico-node-nhd4m from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.688: INFO: Container calico-node ready: true, restart count 0 -Jun 12 21:26:24.688: INFO: ibm-file-plugin-5f8cc7b66-hc7b9 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.689: INFO: Container ibm-file-plugin-container ready: true, restart count 0 -Jun 12 21:26:24.689: INFO: ibm-keepalived-watcher-zp24l from kube-system started at 2023-06-12 17:40:01 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.689: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 21:26:24.689: INFO: ibm-master-proxy-static-10.138.75.116 from kube-system started at 2023-06-12 17:39:58 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.689: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 21:26:24.689: INFO: Container pause ready: true, restart count 0 -Jun 12 21:26:24.689: INFO: ibm-storage-watcher-f4db746b4-mlm76 from 
kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.689: INFO: Container ibm-storage-watcher-container ready: true, restart count 0 -Jun 12 21:26:24.689: INFO: ibmcloud-block-storage-driver-4wh25 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.689: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 21:26:24.689: INFO: ibmcloud-block-storage-plugin-5f85bc9665-2ltn5 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.689: INFO: Container ibmcloud-block-storage-plugin-container ready: true, restart count 0 -Jun 12 21:26:24.689: INFO: vpn-7bc564c55c-htxd6 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.689: INFO: Container vpn ready: true, restart count 0 -Jun 12 21:26:24.689: INFO: cluster-node-tuning-operator-5f6cff5c99-z22gd from openshift-cluster-node-tuning-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.689: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 -Jun 12 21:26:24.689: INFO: tuned-44pqh from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.689: INFO: Container tuned ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: cluster-samples-operator-597884bb5d-bv9cn from openshift-cluster-samples-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container cluster-samples-operator ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: cluster-storage-operator-75bb97486-7xrgf from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container cluster-storage-operator ready: true, restart count 1 -Jun 12 21:26:24.690: INFO: csi-snapshot-controller-operator-69df8b995f-flpdz from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: console-operator-747447cc44-5hk9p from openshift-console-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container console-operator ready: true, restart count 1 -Jun 12 21:26:24.690: INFO: Container conversion-webhook-server ready: true, restart count 2 -Jun 12 21:26:24.690: INFO: console-5bf97c7949-22prk from openshift-console started at 2023-06-12 18:01:30 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container console ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: dns-operator-65c495d75-cd4fc from openshift-dns-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container dns-operator ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: dns-default-cw4pt from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container dns ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: Container kube-rbac-proxy ready: true, restart 
count 0 -Jun 12 21:26:24.690: INFO: node-resolver-8mss5 from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: cluster-image-registry-operator-f9c46b94f-swtmm from openshift-image-registry started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container cluster-image-registry-operator ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: node-ca-5cs7d from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container node-ca ready: true, restart count 0 -Jun 12 21:26:24.690: INFO: registry-pvc-permissions-j28ls from openshift-image-registry started at 2023-06-12 18:00:38 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.690: INFO: Container pvc-permissions ready: false, restart count 0 -Jun 12 21:26:24.690: INFO: ingress-canary-9xbwx from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: ingress-operator-57d9f78b9c-59cl8 from openshift-ingress-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container ingress-operator ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: insights-operator-7dfcfbc664-j8swm from openshift-insights started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container insights-operator ready: true, restart count 1 -Jun 12 21:26:24.691: INFO: openshift-kube-proxy-5hl4f from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: kube-storage-version-migrator-operator-689b97b878-cqw2l from openshift-kube-storage-version-migrator-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 -Jun 12 21:26:24.691: INFO: marketplace-operator-769ddf547d-mm52g from openshift-marketplace started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container marketplace-operator ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: cluster-monitoring-operator-7df766d4db-cnq44 from openshift-monitoring started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container cluster-monitoring-operator ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: node-exporter-s9sgk from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: multus-additional-cni-plugins-rsr27 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: 
multus-admission-controller-5894dd7875-bfbwp from openshift-multus started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: Container multus-admission-controller ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: multus-ln9rr from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 21:26:24.691: INFO: network-metrics-daemon-75s49 from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.691: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: network-check-source-7f6b75fdb6-8882l from openshift-network-diagnostics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container check-endpoints ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: network-check-target-kjfll from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: catalog-operator-874999f59-jggx9 from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container catalog-operator ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: collect-profiles-28110045-fcbk8 from openshift-operator-lifecycle-manager started at 2023-06-12 20:45:00 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container collect-profiles ready: false, restart count 0 -Jun 12 21:26:24.692: INFO: olm-operator-bdbf4b468-8vj6q from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container olm-operator ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: package-server-manager-5b897cb946-pz59r from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container package-server-manager ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: packageserver-7f8bd8c95b-2zntg from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container packageserver ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: metrics-78c5579cb7-nlfqq from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container metrics ready: true, restart count 3 -Jun 12 21:26:24.692: INFO: push-gateway-85f6799b47-cgtdt from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container push-gateway ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: service-ca-operator-86d6dcd567-8jc2t from openshift-service-ca-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container service-ca-operator ready: true, restart count 1 -Jun 12 21:26:24.692: INFO: service-ca-7c79786568-vhxsl from openshift-service-ca started at 2023-06-12 17:55:23 +0000 UTC (1 container statuses 
recorded) -Jun 12 21:26:24.692: INFO: Container service-ca-controller ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: sonobuoy-e2e-job-9876719f3d1644bf from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.692: INFO: Container e2e ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 21:26:24.692: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-nbw64 from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.693: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 21:26:24.693: INFO: Container systemd-logs ready: true, restart count 0 -Jun 12 21:26:24.693: INFO: tigera-operator-5b48cf996b-z7p6p from tigera-operator started at 2023-06-12 17:40:11 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.693: INFO: Container tigera-operator ready: true, restart count 7 -Jun 12 21:26:24.693: INFO: -Logging pods the apiserver thinks is on node 10.138.75.70 before test -Jun 12 21:26:24.751: INFO: calico-node-v822j from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.751: INFO: Container calico-node ready: true, restart count 0 -Jun 12 21:26:24.751: INFO: calico-typha-74d94b74f5-db4zz from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.751: INFO: Container calico-typha ready: true, restart count 0 -Jun 12 21:26:24.751: INFO: pod-handle-http-request from container-lifecycle-hook-571 started at 2023-06-12 21:26:12 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.751: INFO: Container container-handle-http-request ready: true, restart count 0 -Jun 12 21:26:24.751: INFO: Container container-handle-https-request ready: true, restart count 0 -Jun 12 21:26:24.751: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-9m2wx from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.751: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 -Jun 12 21:26:24.751: INFO: ibm-keepalived-watcher-nl9l9 from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.751: INFO: Container keepalived-watcher ready: true, restart count 0 -Jun 12 21:26:24.751: INFO: ibm-master-proxy-static-10.138.75.70 from kube-system started at 2023-06-12 17:40:17 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.751: INFO: Container ibm-master-proxy-static ready: true, restart count 0 -Jun 12 21:26:24.751: INFO: Container pause ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: ibmcloud-block-storage-driver-jl8fq from kube-system started at 2023-06-12 17:40:28 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: tuned-dmlsr from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container tuned ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: csi-snapshot-controller-7f8879b9ff-lhkmp from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container snapshot-controller ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: csi-snapshot-webhook-7bd9594b6d-9f476 from 
openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container webhook ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: downloads-8b57f44bb-f7r76 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container download-server ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: dns-default-5d2sp from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container dns ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: node-resolver-lf2bx from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container dns-node-resolver ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: node-ca-mwjbd from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container node-ca ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: ingress-canary-xwc5b from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container serve-healthcheck-canary ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: router-default-7d454f944c-s862z from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container router ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: openshift-kube-proxy-rckf9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container kube-proxy ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: certified-operators-9jhxm from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container registry-server ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: redhat-marketplace-n9tcn from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container registry-server ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-06-12 18:01:41 +0000 UTC (6 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container alertmanager ready: true, restart count 1 -Jun 12 21:26:24.752: INFO: Container alertmanager-proxy ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: node-exporter-5vgf6 from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.752: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.752: INFO: Container node-exporter ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: openshift-state-metrics-7d7f8b4cf8-6kdhb from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) -Jun 12 
21:26:24.753: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container openshift-state-metrics ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: prometheus-adapter-7c58c77c58-2j47k from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container prometheus-adapter ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-06-12 18:01:12 +0000 UTC (6 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container config-reloader ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container prometheus ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container prometheus-proxy ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container thanos-sidecar ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: prometheus-operator-5d978dbf9c-zvq6g from openshift-monitoring started at 2023-06-12 17:59:19 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container prometheus-operator ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: prometheus-operator-admission-webhook-5d679565bb-sj42p from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: telemeter-client-55c7b57d84-vh47h from openshift-monitoring started at 2023-06-12 17:59:37 +0000 UTC (3 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container reload ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container telemeter-client ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: thanos-querier-6497df7b9-pg2z9 from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container oauth-proxy ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container prom-label-proxy ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container thanos-query ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: multus-26bfs from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container kube-multus ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: multus-additional-cni-plugins-9vls6 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: multus-admission-controller-5894dd7875-xldt9 from openshift-multus started at 2023-06-12 17:58:44 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, 
restart count 0 -Jun 12 21:26:24.753: INFO: Container multus-admission-controller ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: network-metrics-daemon-g9zzs from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container network-metrics-daemon ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: network-check-target-l622r from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container network-check-target-container ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: sonobuoy from sonobuoy started at 2023-06-12 20:38:54 +0000 UTC (1 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container kube-sonobuoy ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-4dn8s from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) -Jun 12 21:26:24.753: INFO: Container sonobuoy-worker ready: true, restart count 0 -Jun 12 21:26:24.753: INFO: Container systemd-logs ready: true, restart count 0 -[It] validates that NodeSelector is respected if matching [Conformance] - test/e2e/scheduling/predicates.go:466 -STEP: Trying to launch a pod without a label to get a node which can launch it. 06/12/23 21:26:24.754 -Jun 12 21:26:24.778: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-4754" to be "running" -Jun 12 21:26:24.790: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 11.129302ms -Jun 12 21:26:26.804: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025478008s -Jun 12 21:26:28.801: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.023019738s -Jun 12 21:26:28.802: INFO: Pod "without-label" satisfied condition "running" -STEP: Explicitly delete pod here to free the resource it takes. 06/12/23 21:26:28.811 -STEP: Trying to apply a random label on the found node. 06/12/23 21:26:28.834 -STEP: verifying the node has the label kubernetes.io/e2e-dd6360c0-a645-4790-9b17-ee03789319e6 42 06/12/23 21:26:28.87 -STEP: Trying to relaunch the pod, now with labels. 06/12/23 21:26:28.879 -Jun 12 21:26:28.898: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-4754" to be "not pending" -Jun 12 21:26:28.908: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 10.274664ms -Jun 12 21:26:30.920: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021357473s -Jun 12 21:26:32.920: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.022117091s -Jun 12 21:26:32.920: INFO: Pod "with-labels" satisfied condition "not pending" -STEP: removing the label kubernetes.io/e2e-dd6360c0-a645-4790-9b17-ee03789319e6 off the node 10.138.75.112 06/12/23 21:26:32.933 -STEP: verifying the node doesn't have the label kubernetes.io/e2e-dd6360c0-a645-4790-9b17-ee03789319e6 06/12/23 21:26:33.002 -[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 +Jul 27 02:01:02.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-1997 version' +Jul 27 02:01:02.095: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.\n" +Jul 27 02:01:02.095: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.6\", GitCommit:\"11902a838028edef305dfe2f96be929bc4d114d8\", GitTreeState:\"clean\", BuildDate:\"2023-06-14T09:56:58Z\", GoVersion:\"go1.19.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.7\nServer Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.6+f245ced\", GitCommit:\"cbbd0bdba10e0b612e32cdeb6462daa6909df4be\", GitTreeState:\"clean\", BuildDate:\"2023-07-13T17:06:24Z\", GoVersion:\"go1.19.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 21:26:33.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:88 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +Jul 27 02:01:02.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "sched-pred-4754" for this suite. 06/12/23 21:26:33.128 +STEP: Destroying namespace "kubectl-1997" for this suite. 
07/27/23 02:01:02.143 ------------------------------ -• [SLOW TEST] [8.778 seconds] -[sig-scheduling] SchedulerPredicates [Serial] -test/e2e/scheduling/framework.go:40 - validates that NodeSelector is respected if matching [Conformance] - test/e2e/scheduling/predicates.go:466 +• [0.260 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl version + test/e2e/kubectl/kubectl.go:1679 + should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:26:24.364 - Jun 12 21:26:24.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sched-pred 06/12/23 21:26:24.366 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:26:24.456 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:26:24.466 - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + STEP: Creating a kubernetes client 07/27/23 02:01:01.945 + Jul 27 02:01:01.945: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:01:01.946 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:01.993 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:02.006 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:97 - Jun 12 21:26:24.479: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready - Jun 12 21:26:24.516: INFO: Waiting for terminating namespaces to be deleted... 
- Jun 12 21:26:24.543: INFO: - Logging pods the apiserver thinks is on node 10.138.75.112 before test - Jun 12 21:26:24.596: INFO: calico-node-b9sdb from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.596: INFO: Container calico-node ready: true, restart count 0 - Jun 12 21:26:24.596: INFO: calico-typha-74d94b74f5-dc6td from calico-system started at 2023-06-12 17:53:09 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.596: INFO: Container calico-typha ready: true, restart count 0 - Jun 12 21:26:24.596: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.596: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 - Jun 12 21:26:24.596: INFO: ibm-keepalived-watcher-5hc6v from kube-system started at 2023-06-12 17:40:13 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: ibm-master-proxy-static-10.138.75.112 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: Container pause ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: ibmcloud-block-storage-driver-5zqmj from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: tuned-phslc from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container tuned ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: csi-snapshot-controller-7f8879b9ff-p456r from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container snapshot-controller ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: csi-snapshot-webhook-7bd9594b6d-bp5dr from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container webhook ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: console-5bf97c7949-w5sn5 from openshift-console started at 2023-06-12 18:01:02 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container console ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: downloads-8b57f44bb-55ss5 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container download-server ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: dns-default-hpnqj from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container dns ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: node-resolver-5st6j from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: image-registry-6c79bcf5c4-p7ss4 from openshift-image-registry started at 2023-06-12 18:00:30 +0000 UTC (1 container statuses recorded) 
- Jun 12 21:26:24.597: INFO: Container registry ready: true, restart count 0 - Jun 12 21:26:24.597: INFO: node-ca-qm7sb from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.597: INFO: Container node-ca ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: ingress-canary-5qpcw from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: router-default-7d454f944c-62qgz from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container router ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: openshift-kube-proxy-b9xs9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: migrator-cfb6c8f7c-vx2tr from openshift-kube-storage-version-migrator started at 2023-06-12 17:55:28 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container migrator ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: community-operators-fm8cx from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container registry-server ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: redhat-operators-pr47d from openshift-marketplace started at 2023-06-12 19:05:36 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container registry-server ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-06-12 18:01:06 +0000 UTC (6 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container alertmanager ready: true, restart count 1 - Jun 12 21:26:24.598: INFO: Container alertmanager-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: kube-state-metrics-6ccfb58dc4-rgnnh from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container kube-state-metrics ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: node-exporter-r799t from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: prometheus-adapter-7c58c77c58-xfd55 from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container prometheus-adapter ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: prometheus-k8s-0 from openshift-monitoring started 
at 2023-06-12 18:01:32 +0000 UTC (6 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container prometheus ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container prometheus-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container thanos-sidecar ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: prometheus-operator-admission-webhook-5d679565bb-66wnf from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: thanos-querier-6497df7b9-djrsc from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container oauth-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 21:26:24.598: INFO: Container thanos-query ready: true, restart count 0 - Jun 12 21:26:24.599: INFO: multus-additional-cni-plugins-zpr6c from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.599: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 21:26:24.599: INFO: multus-q452d from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.599: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 21:26:24.599: INFO: network-metrics-daemon-vx56x from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.599: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.599: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 21:26:24.599: INFO: network-check-target-lfvfw from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.599: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 21:26:24.599: INFO: network-operator-5498bf7dc6-xv8r2 from openshift-network-operator started at 2023-06-12 17:47:21 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.599: INFO: Container network-operator ready: true, restart count 1 - Jun 12 21:26:24.599: INFO: collect-profiles-28110060-nx85j from openshift-operator-lifecycle-manager started at 2023-06-12 21:00:00 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.599: INFO: Container collect-profiles ready: false, restart count 0 - Jun 12 21:26:24.599: INFO: collect-profiles-28110075-c42rr from openshift-operator-lifecycle-manager started at 2023-06-12 21:15:00 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.599: INFO: Container collect-profiles ready: false, restart count 0 - Jun 12 21:26:24.599: INFO: packageserver-7f8bd8c95b-fgfhz from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) 
- Jun 12 21:26:24.599: INFO: Container packageserver ready: true, restart count 0 - Jun 12 21:26:24.599: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.599: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 21:26:24.599: INFO: Container systemd-logs ready: true, restart count 0 - Jun 12 21:26:24.599: INFO: - Logging pods the apiserver thinks is on node 10.138.75.116 before test - Jun 12 21:26:24.688: INFO: calico-kube-controllers-58944988fc-kv6pq from calico-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.688: INFO: Container calico-kube-controllers ready: true, restart count 0 - Jun 12 21:26:24.688: INFO: calico-node-nhd4m from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.688: INFO: Container calico-node ready: true, restart count 0 - Jun 12 21:26:24.688: INFO: ibm-file-plugin-5f8cc7b66-hc7b9 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.689: INFO: Container ibm-file-plugin-container ready: true, restart count 0 - Jun 12 21:26:24.689: INFO: ibm-keepalived-watcher-zp24l from kube-system started at 2023-06-12 17:40:01 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.689: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 21:26:24.689: INFO: ibm-master-proxy-static-10.138.75.116 from kube-system started at 2023-06-12 17:39:58 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.689: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 21:26:24.689: INFO: Container pause ready: true, restart count 0 - Jun 12 21:26:24.689: INFO: ibm-storage-watcher-f4db746b4-mlm76 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.689: INFO: Container ibm-storage-watcher-container ready: true, restart count 0 - Jun 12 21:26:24.689: INFO: ibmcloud-block-storage-driver-4wh25 from kube-system started at 2023-06-12 17:40:09 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.689: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 21:26:24.689: INFO: ibmcloud-block-storage-plugin-5f85bc9665-2ltn5 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.689: INFO: Container ibmcloud-block-storage-plugin-container ready: true, restart count 0 - Jun 12 21:26:24.689: INFO: vpn-7bc564c55c-htxd6 from kube-system started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.689: INFO: Container vpn ready: true, restart count 0 - Jun 12 21:26:24.689: INFO: cluster-node-tuning-operator-5f6cff5c99-z22gd from openshift-cluster-node-tuning-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.689: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 - Jun 12 21:26:24.689: INFO: tuned-44pqh from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.689: INFO: Container tuned ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: cluster-samples-operator-597884bb5d-bv9cn from openshift-cluster-samples-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container 
cluster-samples-operator ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: cluster-storage-operator-75bb97486-7xrgf from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container cluster-storage-operator ready: true, restart count 1 - Jun 12 21:26:24.690: INFO: csi-snapshot-controller-operator-69df8b995f-flpdz from openshift-cluster-storage-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: console-operator-747447cc44-5hk9p from openshift-console-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container console-operator ready: true, restart count 1 - Jun 12 21:26:24.690: INFO: Container conversion-webhook-server ready: true, restart count 2 - Jun 12 21:26:24.690: INFO: console-5bf97c7949-22prk from openshift-console started at 2023-06-12 18:01:30 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container console ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: dns-operator-65c495d75-cd4fc from openshift-dns-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container dns-operator ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: dns-default-cw4pt from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container dns ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: node-resolver-8mss5 from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: cluster-image-registry-operator-f9c46b94f-swtmm from openshift-image-registry started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container cluster-image-registry-operator ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: node-ca-5cs7d from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container node-ca ready: true, restart count 0 - Jun 12 21:26:24.690: INFO: registry-pvc-permissions-j28ls from openshift-image-registry started at 2023-06-12 18:00:38 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.690: INFO: Container pvc-permissions ready: false, restart count 0 - Jun 12 21:26:24.690: INFO: ingress-canary-9xbwx from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: ingress-operator-57d9f78b9c-59cl8 from openshift-ingress-operator started at 2023-06-12 17:54:08 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container ingress-operator ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: insights-operator-7dfcfbc664-j8swm from 
openshift-insights started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container insights-operator ready: true, restart count 1 - Jun 12 21:26:24.691: INFO: openshift-kube-proxy-5hl4f from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: kube-storage-version-migrator-operator-689b97b878-cqw2l from openshift-kube-storage-version-migrator-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 - Jun 12 21:26:24.691: INFO: marketplace-operator-769ddf547d-mm52g from openshift-marketplace started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container marketplace-operator ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: cluster-monitoring-operator-7df766d4db-cnq44 from openshift-monitoring started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container cluster-monitoring-operator ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: node-exporter-s9sgk from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: multus-additional-cni-plugins-rsr27 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: multus-admission-controller-5894dd7875-bfbwp from openshift-multus started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: Container multus-admission-controller ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: multus-ln9rr from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 21:26:24.691: INFO: network-metrics-daemon-75s49 from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.691: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: network-check-source-7f6b75fdb6-8882l from openshift-network-diagnostics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container check-endpoints ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: network-check-target-kjfll from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: catalog-operator-874999f59-jggx9 from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container 
catalog-operator ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: collect-profiles-28110045-fcbk8 from openshift-operator-lifecycle-manager started at 2023-06-12 20:45:00 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container collect-profiles ready: false, restart count 0 - Jun 12 21:26:24.692: INFO: olm-operator-bdbf4b468-8vj6q from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container olm-operator ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: package-server-manager-5b897cb946-pz59r from openshift-operator-lifecycle-manager started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container package-server-manager ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: packageserver-7f8bd8c95b-2zntg from openshift-operator-lifecycle-manager started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container packageserver ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: metrics-78c5579cb7-nlfqq from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container metrics ready: true, restart count 3 - Jun 12 21:26:24.692: INFO: push-gateway-85f6799b47-cgtdt from openshift-roks-metrics started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container push-gateway ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: service-ca-operator-86d6dcd567-8jc2t from openshift-service-ca-operator started at 2023-06-12 17:54:08 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container service-ca-operator ready: true, restart count 1 - Jun 12 21:26:24.692: INFO: service-ca-7c79786568-vhxsl from openshift-service-ca started at 2023-06-12 17:55:23 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container service-ca-controller ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: sonobuoy-e2e-job-9876719f3d1644bf from sonobuoy started at 2023-06-12 20:39:06 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.692: INFO: Container e2e ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 21:26:24.692: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-nbw64 from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.693: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 21:26:24.693: INFO: Container systemd-logs ready: true, restart count 0 - Jun 12 21:26:24.693: INFO: tigera-operator-5b48cf996b-z7p6p from tigera-operator started at 2023-06-12 17:40:11 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.693: INFO: Container tigera-operator ready: true, restart count 7 - Jun 12 21:26:24.693: INFO: - Logging pods the apiserver thinks is on node 10.138.75.70 before test - Jun 12 21:26:24.751: INFO: calico-node-v822j from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.751: INFO: Container calico-node ready: true, restart count 0 - Jun 12 21:26:24.751: INFO: calico-typha-74d94b74f5-db4zz from calico-system started at 2023-06-12 17:53:02 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.751: INFO: Container calico-typha ready: true, restart count 0 - Jun 12 21:26:24.751: INFO: 
pod-handle-http-request from container-lifecycle-hook-571 started at 2023-06-12 21:26:12 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.751: INFO: Container container-handle-http-request ready: true, restart count 0 - Jun 12 21:26:24.751: INFO: Container container-handle-https-request ready: true, restart count 0 - Jun 12 21:26:24.751: INFO: ibm-cloud-provider-ip-168-1-198-197-75947fc545-9m2wx from ibm-system started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.751: INFO: Container ibm-cloud-provider-ip-168-1-198-197 ready: true, restart count 0 - Jun 12 21:26:24.751: INFO: ibm-keepalived-watcher-nl9l9 from kube-system started at 2023-06-12 17:40:20 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.751: INFO: Container keepalived-watcher ready: true, restart count 0 - Jun 12 21:26:24.751: INFO: ibm-master-proxy-static-10.138.75.70 from kube-system started at 2023-06-12 17:40:17 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.751: INFO: Container ibm-master-proxy-static ready: true, restart count 0 - Jun 12 21:26:24.751: INFO: Container pause ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: ibmcloud-block-storage-driver-jl8fq from kube-system started at 2023-06-12 17:40:28 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container ibmcloud-block-storage-driver-container ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: tuned-dmlsr from openshift-cluster-node-tuning-operator started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container tuned ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: csi-snapshot-controller-7f8879b9ff-lhkmp from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container snapshot-controller ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: csi-snapshot-webhook-7bd9594b6d-9f476 from openshift-cluster-storage-operator started at 2023-06-12 17:55:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container webhook ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: downloads-8b57f44bb-f7r76 from openshift-console started at 2023-06-12 17:55:24 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container download-server ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: dns-default-5d2sp from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container dns ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: node-resolver-lf2bx from openshift-dns started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container dns-node-resolver ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: node-ca-mwjbd from openshift-image-registry started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container node-ca ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: ingress-canary-xwc5b from openshift-ingress-canary started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container serve-healthcheck-canary ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: router-default-7d454f944c-s862z from openshift-ingress started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses 
recorded) - Jun 12 21:26:24.752: INFO: Container router ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: openshift-kube-proxy-rckf9 from openshift-kube-proxy started at 2023-06-12 17:47:53 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container kube-proxy ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: certified-operators-9jhxm from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container registry-server ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: redhat-marketplace-n9tcn from openshift-marketplace started at 2023-06-12 17:57:19 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container registry-server ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-06-12 18:01:41 +0000 UTC (6 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container alertmanager ready: true, restart count 1 - Jun 12 21:26:24.752: INFO: Container alertmanager-proxy ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: node-exporter-5vgf6 from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.752: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.752: INFO: Container node-exporter ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: openshift-state-metrics-7d7f8b4cf8-6kdhb from openshift-monitoring started at 2023-06-12 17:59:31 +0000 UTC (3 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container openshift-state-metrics ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: prometheus-adapter-7c58c77c58-2j47k from openshift-monitoring started at 2023-06-12 17:59:36 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container prometheus-adapter ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-06-12 18:01:12 +0000 UTC (6 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container config-reloader ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container prometheus ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container prometheus-proxy ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container thanos-sidecar ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: prometheus-operator-5d978dbf9c-zvq6g from openshift-monitoring started at 2023-06-12 17:59:19 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container prometheus-operator ready: true, restart count 0 - Jun 12 21:26:24.753: 
INFO: prometheus-operator-admission-webhook-5d679565bb-sj42p from openshift-monitoring started at 2023-06-12 17:58:31 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: telemeter-client-55c7b57d84-vh47h from openshift-monitoring started at 2023-06-12 17:59:37 +0000 UTC (3 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container reload ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container telemeter-client ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: thanos-querier-6497df7b9-pg2z9 from openshift-monitoring started at 2023-06-12 17:59:42 +0000 UTC (6 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container oauth-proxy ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container prom-label-proxy ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container thanos-query ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: multus-26bfs from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container kube-multus ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: multus-additional-cni-plugins-9vls6 from openshift-multus started at 2023-06-12 17:47:48 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: multus-admission-controller-5894dd7875-xldt9 from openshift-multus started at 2023-06-12 17:58:44 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container multus-admission-controller ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: network-metrics-daemon-g9zzs from openshift-multus started at 2023-06-12 17:47:49 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container kube-rbac-proxy ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container network-metrics-daemon ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: network-check-target-l622r from openshift-network-diagnostics started at 2023-06-12 17:47:56 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container network-check-target-container ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: sonobuoy from sonobuoy started at 2023-06-12 20:38:54 +0000 UTC (1 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container kube-sonobuoy ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-4dn8s from sonobuoy started at 2023-06-12 20:39:07 +0000 UTC (2 container statuses recorded) - Jun 12 21:26:24.753: INFO: Container sonobuoy-worker ready: true, restart count 0 - Jun 12 21:26:24.753: INFO: Container systemd-logs ready: true, restart count 0 - [It] validates that NodeSelector is respected if matching [Conformance] - test/e2e/scheduling/predicates.go:466 - STEP: Trying to launch a pod without a label to get a node which can launch it. 
06/12/23 21:26:24.754 - Jun 12 21:26:24.778: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-4754" to be "running" - Jun 12 21:26:24.790: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 11.129302ms - Jun 12 21:26:26.804: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025478008s - Jun 12 21:26:28.801: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.023019738s - Jun 12 21:26:28.802: INFO: Pod "without-label" satisfied condition "running" - STEP: Explicitly delete pod here to free the resource it takes. 06/12/23 21:26:28.811 - STEP: Trying to apply a random label on the found node. 06/12/23 21:26:28.834 - STEP: verifying the node has the label kubernetes.io/e2e-dd6360c0-a645-4790-9b17-ee03789319e6 42 06/12/23 21:26:28.87 - STEP: Trying to relaunch the pod, now with labels. 06/12/23 21:26:28.879 - Jun 12 21:26:28.898: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-4754" to be "not pending" - Jun 12 21:26:28.908: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 10.274664ms - Jun 12 21:26:30.920: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021357473s - Jun 12 21:26:32.920: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 4.022117091s - Jun 12 21:26:32.920: INFO: Pod "with-labels" satisfied condition "not pending" - STEP: removing the label kubernetes.io/e2e-dd6360c0-a645-4790-9b17-ee03789319e6 off the node 10.138.75.112 06/12/23 21:26:32.933 - STEP: verifying the node doesn't have the label kubernetes.io/e2e-dd6360c0-a645-4790-9b17-ee03789319e6 06/12/23 21:26:33.002 - [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 + Jul 27 02:01:02.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-1997 version' + Jul 27 02:01:02.095: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. 
Use --output=yaml|json to get the full version.\n" + Jul 27 02:01:02.095: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.6\", GitCommit:\"11902a838028edef305dfe2f96be929bc4d114d8\", GitTreeState:\"clean\", BuildDate:\"2023-06-14T09:56:58Z\", GoVersion:\"go1.19.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.7\nServer Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.6+f245ced\", GitCommit:\"cbbd0bdba10e0b612e32cdeb6462daa6909df4be\", GitTreeState:\"clean\", BuildDate:\"2023-07-13T17:06:24Z\", GoVersion:\"go1.19.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 21:26:33.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] - test/e2e/scheduling/predicates.go:88 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + Jul 27 02:01:02.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "sched-pred-4754" for this suite. 06/12/23 21:26:33.128 + STEP: Destroying namespace "kubectl-1997" for this suite. 07/27/23 02:01:02.143 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSS +SSSSSSSSSSSSSS ------------------------------ -[sig-node] Kubelet when scheduling a busybox command in a pod - should print the output to logs [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:52 -[BeforeEach] [sig-node] Kubelet +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 +[BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:26:33.147 -Jun 12 21:26:33.147: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubelet-test 06/12/23 21:26:33.155 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:26:33.246 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:26:33.259 -[BeforeEach] [sig-node] Kubelet +STEP: Creating a kubernetes client 07/27/23 02:01:02.206 +Jul 27 02:01:02.206: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename deployment 07/27/23 02:01:02.207 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:02.334 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:02.346 +[BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Kubelet - test/e2e/common/node/kubelet.go:41 -[It] should print the output to logs [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:52 -Jun 12 21:26:33.397: INFO: Waiting up to 5m0s for pod "busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a" in namespace "kubelet-test-1899" to be "running and ready" -Jun 12 21:26:33.448: INFO: Pod "busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 51.076506ms -Jun 12 21:26:33.448: INFO: The phase of Pod busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:26:35.457: INFO: Pod "busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060569139s -Jun 12 21:26:35.458: INFO: The phase of Pod busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:26:37.458: INFO: Pod "busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a": Phase="Running", Reason="", readiness=true. Elapsed: 4.061357661s -Jun 12 21:26:37.458: INFO: The phase of Pod busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a is Running (Ready = true) -Jun 12 21:26:37.459: INFO: Pod "busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a" satisfied condition "running and ready" -[AfterEach] [sig-node] Kubelet +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 +STEP: creating a Deployment 07/27/23 02:01:02.39 +Jul 27 02:01:02.390: INFO: Creating simple deployment test-deployment-5ngkh +Jul 27 02:01:02.473: INFO: deployment "test-deployment-5ngkh" doesn't have the required revision set +STEP: Getting /status 07/27/23 02:01:04.537 +Jul 27 02:01:04.577: INFO: Deployment test-deployment-5ngkh has Conditions: [{Available True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.}] +STEP: updating Deployment Status 07/27/23 02:01:04.577 +Jul 27 02:01:04.605: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 1, 4, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 1, 4, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 1, 4, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 1, 2, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-5ngkh-54bc444df\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated 07/27/23 02:01:04.605 +Jul 27 02:01:04.625: INFO: Observed &Deployment event: ADDED +Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-5ngkh-54bc444df"} +Jul 27 02:01:04.625: INFO: Observed &Deployment event: MODIFIED +Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 
UTC NewReplicaSetCreated Created new replica set "test-deployment-5ngkh-54bc444df"} +Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Jul 27 02:01:04.625: INFO: Observed &Deployment event: MODIFIED +Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-5ngkh-54bc444df" is progressing.} +Jul 27 02:01:04.625: INFO: Observed &Deployment event: MODIFIED +Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.} +Jul 27 02:01:04.626: INFO: Observed &Deployment event: MODIFIED +Jul 27 02:01:04.626: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Jul 27 02:01:04.626: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.} +Jul 27 02:01:04.626: INFO: Found Deployment test-deployment-5ngkh in namespace deployment-6315 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Jul 27 02:01:04.626: INFO: Deployment test-deployment-5ngkh has an updated status +STEP: patching the Statefulset Status 07/27/23 02:01:04.626 +Jul 27 02:01:04.626: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Jul 27 02:01:04.642: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched 07/27/23 02:01:04.642 +Jul 27 02:01:04.659: INFO: 
Observed &Deployment event: ADDED +Jul 27 02:01:04.659: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-5ngkh-54bc444df"} +Jul 27 02:01:04.659: INFO: Observed &Deployment event: MODIFIED +Jul 27 02:01:04.659: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-5ngkh-54bc444df"} +Jul 27 02:01:04.659: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Jul 27 02:01:04.659: INFO: Observed &Deployment event: MODIFIED +Jul 27 02:01:04.659: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Jul 27 02:01:04.659: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-5ngkh-54bc444df" is progressing.} +Jul 27 02:01:04.660: INFO: Observed &Deployment event: MODIFIED +Jul 27 02:01:04.660: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Jul 27 02:01:04.660: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.} +Jul 27 02:01:04.660: INFO: Observed &Deployment event: MODIFIED +Jul 27 02:01:04.660: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Jul 27 02:01:04.660: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.} +Jul 27 02:01:04.660: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Jul 27 02:01:04.660: INFO: 
Observed &Deployment event: MODIFIED +Jul 27 02:01:04.660: INFO: Found deployment test-deployment-5ngkh in namespace deployment-6315 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Jul 27 02:01:04.660: INFO: Deployment test-deployment-5ngkh has a patched status +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jul 27 02:01:04.690: INFO: Deployment "test-deployment-5ngkh": +&Deployment{ObjectMeta:{test-deployment-5ngkh deployment-6315 7a7e87a9-fd85-406e-82d2-35f59ee1cda6 87105 1 2023-07-27 02:01:02 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-07-27 02:01:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2023-07-27 02:01:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2023-07-27 02:01:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004424268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-07-27 02:01:04 +0000 UTC,LastTransitionTime:2023-07-27 02:01:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.,LastUpdateTime:2023-07-27 02:01:04 +0000 UTC,LastTransitionTime:2023-07-27 02:01:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Jul 27 02:01:04.699: INFO: New ReplicaSet "test-deployment-5ngkh-54bc444df" of Deployment "test-deployment-5ngkh": +&ReplicaSet{ObjectMeta:{test-deployment-5ngkh-54bc444df deployment-6315 3709beeb-8f84-42ff-928f-d5d0f65bb54f 87097 1 2023-07-27 02:01:02 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-5ngkh 7a7e87a9-fd85-406e-82d2-35f59ee1cda6 0xc004424850 0xc004424851}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:01:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a7e87a9-fd85-406e-82d2-35f59ee1cda6\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:01:04 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 54bc444df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044249f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jul 27 02:01:04.708: INFO: Pod "test-deployment-5ngkh-54bc444df-dm4zj" is available: +&Pod{ObjectMeta:{test-deployment-5ngkh-54bc444df-dm4zj test-deployment-5ngkh-54bc444df- deployment-6315 51b5a27b-d005-43c0-b60f-0b3e7a2bf6c1 87096 0 2023-07-27 02:01:02 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[cni.projectcalico.org/containerID:915ea5954457d11597486bea23698b852de6624d1af925fae4425be7e7565e64 cni.projectcalico.org/podIP:172.17.225.55/32 cni.projectcalico.org/podIPs:172.17.225.55/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.55" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-deployment-5ngkh-54bc444df 3709beeb-8f84-42ff-928f-d5d0f65bb54f 0xc004424dd7 0xc004424dd8}] [] [{kube-controller-manager Update v1 2023-07-27 02:01:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3709beeb-8f84-42ff-928f-d5d0f65bb54f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:01:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:01:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:01:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-25rdr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25rdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c47,c19,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-4llv2,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Host
Alias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:01:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:01:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:01:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:01:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.55,StartTime:2023-07-27 02:01:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:01:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://830610524fed0ed8452bac4963ec63c087da81b01382d911ca7cda929e26fd2f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 -Jun 12 21:26:37.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Kubelet +Jul 27 02:01:04.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Kubelet +[DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Kubelet +[DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 -STEP: Destroying namespace "kubelet-test-1899" for this suite. 06/12/23 21:26:37.507 +STEP: Destroying namespace "deployment-6315" for this suite. 
07/27/23 02:01:04.721 ------------------------------ -• [4.374 seconds] -[sig-node] Kubelet -test/e2e/common/node/framework.go:23 - when scheduling a busybox command in a pod - test/e2e/common/node/kubelet.go:44 - should print the output to logs [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:52 +• [2.553 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Kubelet + [BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:26:33.147 - Jun 12 21:26:33.147: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubelet-test 06/12/23 21:26:33.155 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:26:33.246 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:26:33.259 - [BeforeEach] [sig-node] Kubelet + STEP: Creating a kubernetes client 07/27/23 02:01:02.206 + Jul 27 02:01:02.206: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename deployment 07/27/23 02:01:02.207 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:02.334 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:02.346 + [BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Kubelet - test/e2e/common/node/kubelet.go:41 - [It] should print the output to logs [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:52 - Jun 12 21:26:33.397: INFO: Waiting up to 5m0s for pod "busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a" in namespace "kubelet-test-1899" to be "running and ready" - Jun 12 21:26:33.448: INFO: Pod "busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a": Phase="Pending", Reason="", readiness=false. Elapsed: 51.076506ms - Jun 12 21:26:33.448: INFO: The phase of Pod busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:26:35.457: INFO: Pod "busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060569139s - Jun 12 21:26:35.458: INFO: The phase of Pod busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:26:37.458: INFO: Pod "busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.061357661s - Jun 12 21:26:37.458: INFO: The phase of Pod busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a is Running (Ready = true) - Jun 12 21:26:37.459: INFO: Pod "busybox-scheduling-739f8a8b-9aac-4440-9963-d54c7849f44a" satisfied condition "running and ready" - [AfterEach] [sig-node] Kubelet + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 + STEP: creating a Deployment 07/27/23 02:01:02.39 + Jul 27 02:01:02.390: INFO: Creating simple deployment test-deployment-5ngkh + Jul 27 02:01:02.473: INFO: deployment "test-deployment-5ngkh" doesn't have the required revision set + STEP: Getting /status 07/27/23 02:01:04.537 + Jul 27 02:01:04.577: INFO: Deployment test-deployment-5ngkh has Conditions: [{Available True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.}] + STEP: updating Deployment Status 07/27/23 02:01:04.577 + Jul 27 02:01:04.605: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 1, 4, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 1, 4, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 1, 4, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 1, 2, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-5ngkh-54bc444df\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the Deployment status to be updated 07/27/23 02:01:04.605 + Jul 27 02:01:04.625: INFO: Observed &Deployment event: ADDED + Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-5ngkh-54bc444df"} + Jul 27 02:01:04.625: INFO: Observed &Deployment event: MODIFIED + Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-5ngkh-54bc444df"} + Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Jul 27 02:01:04.625: INFO: Observed &Deployment event: MODIFIED + Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: 
map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-5ngkh-54bc444df" is progressing.} + Jul 27 02:01:04.625: INFO: Observed &Deployment event: MODIFIED + Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Jul 27 02:01:04.625: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.} + Jul 27 02:01:04.626: INFO: Observed &Deployment event: MODIFIED + Jul 27 02:01:04.626: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Jul 27 02:01:04.626: INFO: Observed Deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.} + Jul 27 02:01:04.626: INFO: Found Deployment test-deployment-5ngkh in namespace deployment-6315 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Jul 27 02:01:04.626: INFO: Deployment test-deployment-5ngkh has an updated status + STEP: patching the Statefulset Status 07/27/23 02:01:04.626 + Jul 27 02:01:04.626: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Jul 27 02:01:04.642: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Deployment status to be patched 07/27/23 02:01:04.642 + Jul 27 02:01:04.659: INFO: Observed &Deployment event: ADDED + Jul 27 02:01:04.659: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-5ngkh-54bc444df"} + Jul 27 02:01:04.659: INFO: Observed &Deployment event: MODIFIED + Jul 27 02:01:04.659: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: 
map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-5ngkh-54bc444df"} + Jul 27 02:01:04.659: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Jul 27 02:01:04.659: INFO: Observed &Deployment event: MODIFIED + Jul 27 02:01:04.659: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Jul 27 02:01:04.659: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:02 +0000 UTC 2023-07-27 02:01:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-5ngkh-54bc444df" is progressing.} + Jul 27 02:01:04.660: INFO: Observed &Deployment event: MODIFIED + Jul 27 02:01:04.660: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Jul 27 02:01:04.660: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.} + Jul 27 02:01:04.660: INFO: Observed &Deployment event: MODIFIED + Jul 27 02:01:04.660: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Jul 27 02:01:04.660: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-07-27 02:01:04 +0000 UTC 2023-07-27 02:01:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.} + Jul 27 02:01:04.660: INFO: Observed deployment test-deployment-5ngkh in namespace deployment-6315 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Jul 27 02:01:04.660: INFO: Observed &Deployment event: MODIFIED + Jul 27 02:01:04.660: INFO: Found deployment test-deployment-5ngkh in namespace deployment-6315 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } + Jul 27 02:01:04.660: INFO: Deployment test-deployment-5ngkh has a patched status + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jul 27 02:01:04.690: INFO: Deployment 
"test-deployment-5ngkh": + &Deployment{ObjectMeta:{test-deployment-5ngkh deployment-6315 7a7e87a9-fd85-406e-82d2-35f59ee1cda6 87105 1 2023-07-27 02:01:02 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-07-27 02:01:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2023-07-27 02:01:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2023-07-27 02:01:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004424268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-07-27 02:01:04 +0000 UTC,LastTransitionTime:2023-07-27 02:01:04 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-deployment-5ngkh-54bc444df" has successfully progressed.,LastUpdateTime:2023-07-27 02:01:04 +0000 UTC,LastTransitionTime:2023-07-27 02:01:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Jul 27 02:01:04.699: INFO: New ReplicaSet "test-deployment-5ngkh-54bc444df" of Deployment "test-deployment-5ngkh": + &ReplicaSet{ObjectMeta:{test-deployment-5ngkh-54bc444df deployment-6315 3709beeb-8f84-42ff-928f-d5d0f65bb54f 87097 1 2023-07-27 02:01:02 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-5ngkh 7a7e87a9-fd85-406e-82d2-35f59ee1cda6 0xc004424850 0xc004424851}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:01:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a7e87a9-fd85-406e-82d2-35f59ee1cda6\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:01:04 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 54bc444df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044249f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Jul 27 02:01:04.708: INFO: Pod "test-deployment-5ngkh-54bc444df-dm4zj" is available: + &Pod{ObjectMeta:{test-deployment-5ngkh-54bc444df-dm4zj test-deployment-5ngkh-54bc444df- deployment-6315 51b5a27b-d005-43c0-b60f-0b3e7a2bf6c1 87096 0 2023-07-27 02:01:02 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] 
map[cni.projectcalico.org/containerID:915ea5954457d11597486bea23698b852de6624d1af925fae4425be7e7565e64 cni.projectcalico.org/podIP:172.17.225.55/32 cni.projectcalico.org/podIPs:172.17.225.55/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.55" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-deployment-5ngkh-54bc444df 3709beeb-8f84-42ff-928f-d5d0f65bb54f 0xc004424dd7 0xc004424dd8}] [] [{kube-controller-manager Update v1 2023-07-27 02:01:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3709beeb-8f84-42ff-928f-d5d0f65bb54f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:01:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:01:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:01:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-25rdr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25rdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c47,c19,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-4llv2,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Host
Alias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:01:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:01:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:01:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:01:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.55,StartTime:2023-07-27 02:01:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:01:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://830610524fed0ed8452bac4963ec63c087da81b01382d911ca7cda929e26fd2f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 - Jun 12 21:26:37.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Kubelet + Jul 27 02:01:04.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Kubelet + [DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Kubelet + [DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 - STEP: Destroying namespace "kubelet-test-1899" for this suite. 06/12/23 21:26:37.507 + STEP: Destroying namespace "deployment-6315" for this suite. 
07/27/23 02:01:04.721 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSS ------------------------------ -[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] - should have a working scale subresource [Conformance] - test/e2e/apps/statefulset.go:848 -[BeforeEach] [sig-apps] StatefulSet +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 +[BeforeEach] [sig-node] Downward API set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:26:37.527 -Jun 12 21:26:37.527: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename statefulset 06/12/23 21:26:37.529 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:26:37.596 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:26:37.614 -[BeforeEach] [sig-apps] StatefulSet +STEP: Creating a kubernetes client 07/27/23 02:01:04.76 +Jul 27 02:01:04.760: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:01:04.761 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:04.802 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:04.814 +[BeforeEach] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 -[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 -STEP: Creating service test in namespace statefulset-7005 06/12/23 21:26:37.624 -[It] should have a working scale subresource [Conformance] - test/e2e/apps/statefulset.go:848 -STEP: Creating statefulset ss in namespace statefulset-7005 06/12/23 21:26:37.656 -Jun 12 21:26:37.701: INFO: Found 0 stateful pods, waiting for 1 -Jun 12 21:26:47.713: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true -STEP: getting scale subresource 06/12/23 21:26:47.734 -STEP: updating a scale subresource 06/12/23 21:26:47.745 -STEP: verifying the statefulset Spec.Replicas was modified 06/12/23 21:26:47.764 -STEP: Patch a scale subresource 06/12/23 21:26:47.775 -STEP: verifying the statefulset Spec.Replicas was modified 06/12/23 21:26:47.792 -[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 -Jun 12 21:26:47.804: INFO: Deleting all statefulset in ns statefulset-7005 -Jun 12 21:26:47.815: INFO: Scaling statefulset ss to 0 -Jun 12 21:26:57.900: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 21:26:57.913: INFO: Deleting statefulset ss -[AfterEach] [sig-apps] StatefulSet +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 +STEP: Creating a pod to test downward api env vars 07/27/23 02:01:04.823 +Jul 27 02:01:04.854: INFO: Waiting up to 5m0s for pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f" in namespace "downward-api-8452" to be "Succeeded or Failed" +Jul 27 02:01:04.880: INFO: Pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.885641ms +Jul 27 02:01:06.889: INFO: Pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035351697s +Jul 27 02:01:08.889: INFO: Pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035543045s +Jul 27 02:01:10.891: INFO: Pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037122883s +STEP: Saw pod success 07/27/23 02:01:10.891 +Jul 27 02:01:10.891: INFO: Pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f" satisfied condition "Succeeded or Failed" +Jul 27 02:01:10.899: INFO: Trying to get logs from node 10.245.128.19 pod downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f container dapi-container: +STEP: delete the pod 07/27/23 02:01:10.945 +Jul 27 02:01:10.969: INFO: Waiting for pod downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f to disappear +Jul 27 02:01:10.976: INFO: Pod downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f no longer exists +[AfterEach] [sig-node] Downward API test/e2e/framework/node/init/init.go:32 -Jun 12 21:26:57.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] StatefulSet +Jul 27 02:01:10.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-node] Downward API dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-node] Downward API tear down framework | framework.go:193 -STEP: Destroying namespace "statefulset-7005" for this suite. 06/12/23 21:26:57.983 +STEP: Destroying namespace "downward-api-8452" for this suite. 07/27/23 02:01:10.988 ------------------------------ -• [SLOW TEST] [20.499 seconds] -[sig-apps] StatefulSet -test/e2e/apps/framework.go:23 - Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:103 - should have a working scale subresource [Conformance] - test/e2e/apps/statefulset.go:848 +• [SLOW TEST] [6.252 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] StatefulSet + [BeforeEach] [sig-node] Downward API set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:26:37.527 - Jun 12 21:26:37.527: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename statefulset 06/12/23 21:26:37.529 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:26:37.596 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:26:37.614 - [BeforeEach] [sig-apps] StatefulSet + STEP: Creating a kubernetes client 07/27/23 02:01:04.76 + Jul 27 02:01:04.760: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:01:04.761 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:04.802 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:04.814 + [BeforeEach] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 - [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 - STEP: Creating service test in namespace statefulset-7005 06/12/23 
21:26:37.624 - [It] should have a working scale subresource [Conformance] - test/e2e/apps/statefulset.go:848 - STEP: Creating statefulset ss in namespace statefulset-7005 06/12/23 21:26:37.656 - Jun 12 21:26:37.701: INFO: Found 0 stateful pods, waiting for 1 - Jun 12 21:26:47.713: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true - STEP: getting scale subresource 06/12/23 21:26:47.734 - STEP: updating a scale subresource 06/12/23 21:26:47.745 - STEP: verifying the statefulset Spec.Replicas was modified 06/12/23 21:26:47.764 - STEP: Patch a scale subresource 06/12/23 21:26:47.775 - STEP: verifying the statefulset Spec.Replicas was modified 06/12/23 21:26:47.792 - [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 - Jun 12 21:26:47.804: INFO: Deleting all statefulset in ns statefulset-7005 - Jun 12 21:26:47.815: INFO: Scaling statefulset ss to 0 - Jun 12 21:26:57.900: INFO: Waiting for statefulset status.replicas updated to 0 - Jun 12 21:26:57.913: INFO: Deleting statefulset ss - [AfterEach] [sig-apps] StatefulSet + [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 + STEP: Creating a pod to test downward api env vars 07/27/23 02:01:04.823 + Jul 27 02:01:04.854: INFO: Waiting up to 5m0s for pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f" in namespace "downward-api-8452" to be "Succeeded or Failed" + Jul 27 02:01:04.880: INFO: Pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.885641ms + Jul 27 02:01:06.889: INFO: Pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035351697s + Jul 27 02:01:08.889: INFO: Pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035543045s + Jul 27 02:01:10.891: INFO: Pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037122883s + STEP: Saw pod success 07/27/23 02:01:10.891 + Jul 27 02:01:10.891: INFO: Pod "downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f" satisfied condition "Succeeded or Failed" + Jul 27 02:01:10.899: INFO: Trying to get logs from node 10.245.128.19 pod downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f container dapi-container: + STEP: delete the pod 07/27/23 02:01:10.945 + Jul 27 02:01:10.969: INFO: Waiting for pod downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f to disappear + Jul 27 02:01:10.976: INFO: Pod downward-api-cde6117b-a6d2-44d2-ac41-6b9bb1bb4b7f no longer exists + [AfterEach] [sig-node] Downward API test/e2e/framework/node/init/init.go:32 - Jun 12 21:26:57.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] StatefulSet + Jul 27 02:01:10.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-node] Downward API dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-node] Downward API tear down framework | framework.go:193 - STEP: Destroying namespace "statefulset-7005" for this suite. 06/12/23 21:26:57.983 + STEP: Destroying namespace "downward-api-8452" for this suite. 
07/27/23 02:01:10.988 << End Captured GinkgoWriter Output ------------------------------ -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] - works for CRD preserving unknown fields at the schema root [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:194 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] CSIInlineVolumes + should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 +[BeforeEach] [sig-storage] CSIInlineVolumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:26:58.029 -Jun 12 21:26:58.031: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 21:26:58.034 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:26:58.095 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:26:58.11 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:01:11.012 +Jul 27 02:01:11.012: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename csiinlinevolumes 07/27/23 02:01:11.013 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:11.053 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:11.062 +[BeforeEach] [sig-storage] CSIInlineVolumes test/e2e/framework/metrics/init/init.go:31 -[It] works for CRD preserving unknown fields at the schema root [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:194 -Jun 12 21:26:58.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 06/12/23 21:27:07.036 -Jun 12 21:27:07.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-3683 --namespace=crd-publish-openapi-3683 create -f -' -Jun 12 21:27:09.948: INFO: stderr: "" -Jun 12 21:27:09.948: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" -Jun 12 21:27:09.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-3683 --namespace=crd-publish-openapi-3683 delete e2e-test-crd-publish-openapi-2358-crds test-cr' -Jun 12 21:27:10.179: INFO: stderr: "" -Jun 12 21:27:10.180: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" -Jun 12 21:27:10.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-3683 --namespace=crd-publish-openapi-3683 apply -f -' -Jun 12 21:27:14.160: INFO: stderr: "" -Jun 12 21:27:14.160: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" -Jun 12 21:27:14.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-3683 --namespace=crd-publish-openapi-3683 delete e2e-test-crd-publish-openapi-2358-crds test-cr' -Jun 12 21:27:14.366: INFO: stderr: "" -Jun 12 21:27:14.366: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" -STEP: kubectl explain works 
to explain CR 06/12/23 21:27:14.366 -Jun 12 21:27:14.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-3683 explain e2e-test-crd-publish-openapi-2358-crds' -Jun 12 21:27:15.348: INFO: stderr: "" -Jun 12 21:27:15.348: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2358-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" -[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[It] should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 +STEP: creating 07/27/23 02:01:11.075 +W0727 02:01:11.105489 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-csi-inline-volumes" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-csi-inline-volumes" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-csi-inline-volumes" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-csi-inline-volumes" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +W0727 02:01:11.105508 20 warnings.go:70] pod-csi-inline-volumes uses an inline volume provided by CSIDriver e2e.example.com and namespace csiinlinevolumes-2273 has a pod security warn level that is lower than privileged +STEP: getting 07/27/23 02:01:11.131 +STEP: listing in namespace 07/27/23 02:01:11.14 +STEP: patching 07/27/23 02:01:11.148 +STEP: deleting 07/27/23 02:01:11.183 +[AfterEach] [sig-storage] CSIInlineVolumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:27:23.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +Jul 27 02:01:11.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes tear down framework | framework.go:193 -STEP: Destroying namespace "crd-publish-openapi-3683" for this suite. 06/12/23 21:27:24.103 +STEP: Destroying namespace "csiinlinevolumes-2273" for this suite. 
07/27/23 02:01:11.217 ------------------------------ -• [SLOW TEST] [26.148 seconds] -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - works for CRD preserving unknown fields at the schema root [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:194 +• [0.227 seconds] +[sig-storage] CSIInlineVolumes +test/e2e/storage/utils/framework.go:23 + should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [BeforeEach] [sig-storage] CSIInlineVolumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:26:58.029 - Jun 12 21:26:58.031: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 21:26:58.034 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:26:58.095 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:26:58.11 - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:01:11.012 + Jul 27 02:01:11.012: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename csiinlinevolumes 07/27/23 02:01:11.013 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:11.053 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:11.062 + [BeforeEach] [sig-storage] CSIInlineVolumes test/e2e/framework/metrics/init/init.go:31 - [It] works for CRD preserving unknown fields at the schema root [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:194 - Jun 12 21:26:58.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 06/12/23 21:27:07.036 - Jun 12 21:27:07.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-3683 --namespace=crd-publish-openapi-3683 create -f -' - Jun 12 21:27:09.948: INFO: stderr: "" - Jun 12 21:27:09.948: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" - Jun 12 21:27:09.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-3683 --namespace=crd-publish-openapi-3683 delete e2e-test-crd-publish-openapi-2358-crds test-cr' - Jun 12 21:27:10.179: INFO: stderr: "" - Jun 12 21:27:10.180: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" - Jun 12 21:27:10.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-3683 --namespace=crd-publish-openapi-3683 apply -f -' - Jun 12 21:27:14.160: INFO: stderr: "" - Jun 12 21:27:14.160: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" - Jun 12 21:27:14.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-3683 --namespace=crd-publish-openapi-3683 delete e2e-test-crd-publish-openapi-2358-crds test-cr' - Jun 12 21:27:14.366: INFO: stderr: "" - Jun 12 21:27:14.366: INFO: stdout: 
"e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" - STEP: kubectl explain works to explain CR 06/12/23 21:27:14.366 - Jun 12 21:27:14.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-3683 explain e2e-test-crd-publish-openapi-2358-crds' - Jun 12 21:27:15.348: INFO: stderr: "" - Jun 12 21:27:15.348: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2358-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" - [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [It] should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 + STEP: creating 07/27/23 02:01:11.075 + W0727 02:01:11.105489 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-csi-inline-volumes" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-csi-inline-volumes" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-csi-inline-volumes" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-csi-inline-volumes" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + W0727 02:01:11.105508 20 warnings.go:70] pod-csi-inline-volumes uses an inline volume provided by CSIDriver e2e.example.com and namespace csiinlinevolumes-2273 has a pod security warn level that is lower than privileged + STEP: getting 07/27/23 02:01:11.131 + STEP: listing in namespace 07/27/23 02:01:11.14 + STEP: patching 07/27/23 02:01:11.148 + STEP: deleting 07/27/23 02:01:11.183 + [AfterEach] [sig-storage] CSIInlineVolumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:27:23.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + Jul 27 02:01:11.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes tear down framework | framework.go:193 - STEP: Destroying namespace "crd-publish-openapi-3683" for this suite. 06/12/23 21:27:24.103 + STEP: Destroying namespace "csiinlinevolumes-2273" for this suite. 
07/27/23 02:01:11.217 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSS ------------------------------ -[sig-node] PreStop - should call prestop when killing a pod [Conformance] - test/e2e/node/pre_stop.go:168 -[BeforeEach] [sig-node] PreStop +[sig-network] Services + should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:27:24.187 -Jun 12 21:27:24.188: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename prestop 06/12/23 21:27:24.192 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:27:24.251 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:27:24.276 -[BeforeEach] [sig-node] PreStop +STEP: Creating a kubernetes client 07/27/23 02:01:11.24 +Jul 27 02:01:11.240: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 02:01:11.241 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:11.281 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:11.29 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] PreStop - test/e2e/node/pre_stop.go:159 -[It] should call prestop when killing a pod [Conformance] - test/e2e/node/pre_stop.go:168 -STEP: Creating server pod server in namespace prestop-4751 06/12/23 21:27:24.302 -STEP: Waiting for pods to come up. 06/12/23 21:27:24.36 -Jun 12 21:27:24.360: INFO: Waiting up to 5m0s for pod "server" in namespace "prestop-4751" to be "running" -Jun 12 21:27:24.369: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 9.160926ms -Jun 12 21:27:26.399: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039114587s -Jun 12 21:27:28.381: INFO: Pod "server": Phase="Running", Reason="", readiness=true. Elapsed: 4.020486029s -Jun 12 21:27:28.381: INFO: Pod "server" satisfied condition "running" -STEP: Creating tester pod tester in namespace prestop-4751 06/12/23 21:27:28.397 -Jun 12 21:27:28.413: INFO: Waiting up to 5m0s for pod "tester" in namespace "prestop-4751" to be "running" -Jun 12 21:27:28.458: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 44.957427ms -Jun 12 21:27:30.467: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053711492s -Jun 12 21:27:32.469: INFO: Pod "tester": Phase="Running", Reason="", readiness=true. Elapsed: 4.056259338s -Jun 12 21:27:32.470: INFO: Pod "tester" satisfied condition "running" -STEP: Deleting pre-stop pod 06/12/23 21:27:32.47 -Jun 12 21:27:37.527: INFO: Saw: { - "Hostname": "server", - "Sent": null, - "Received": { - "prestop": 1 - }, - "Errors": null, - "Log": [ - "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", - "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", - "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
- ], - "StillContactingPeers": true -} -STEP: Deleting the server pod 06/12/23 21:27:37.527 -[AfterEach] [sig-node] PreStop +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 +STEP: creating a collection of services 07/27/23 02:01:11.299 +Jul 27 02:01:11.299: INFO: Creating e2e-svc-a-mpw4j +Jul 27 02:01:11.360: INFO: Creating e2e-svc-b-tvsb8 +Jul 27 02:01:11.404: INFO: Creating e2e-svc-c-86lkq +STEP: deleting service collection 07/27/23 02:01:11.507 +Jul 27 02:01:11.628: INFO: Collection of services has been deleted +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:27:37.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] PreStop +Jul 27 02:01:11.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] PreStop +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] PreStop +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "prestop-4751" for this suite. 06/12/23 21:27:37.599 +STEP: Destroying namespace "services-9953" for this suite. 07/27/23 02:01:11.639 ------------------------------ -• [SLOW TEST] [13.446 seconds] -[sig-node] PreStop -test/e2e/node/framework.go:23 - should call prestop when killing a pod [Conformance] - test/e2e/node/pre_stop.go:168 +• [0.423 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] PreStop + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:27:24.187 - Jun 12 21:27:24.188: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename prestop 06/12/23 21:27:24.192 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:27:24.251 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:27:24.276 - [BeforeEach] [sig-node] PreStop + STEP: Creating a kubernetes client 07/27/23 02:01:11.24 + Jul 27 02:01:11.240: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 02:01:11.241 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:11.281 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:11.29 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] PreStop - test/e2e/node/pre_stop.go:159 - [It] should call prestop when killing a pod [Conformance] - test/e2e/node/pre_stop.go:168 - STEP: Creating server pod server in namespace prestop-4751 06/12/23 21:27:24.302 - STEP: Waiting for pods to come up. 06/12/23 21:27:24.36 - Jun 12 21:27:24.360: INFO: Waiting up to 5m0s for pod "server" in namespace "prestop-4751" to be "running" - Jun 12 21:27:24.369: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 9.160926ms - Jun 12 21:27:26.399: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.039114587s - Jun 12 21:27:28.381: INFO: Pod "server": Phase="Running", Reason="", readiness=true. Elapsed: 4.020486029s - Jun 12 21:27:28.381: INFO: Pod "server" satisfied condition "running" - STEP: Creating tester pod tester in namespace prestop-4751 06/12/23 21:27:28.397 - Jun 12 21:27:28.413: INFO: Waiting up to 5m0s for pod "tester" in namespace "prestop-4751" to be "running" - Jun 12 21:27:28.458: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 44.957427ms - Jun 12 21:27:30.467: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053711492s - Jun 12 21:27:32.469: INFO: Pod "tester": Phase="Running", Reason="", readiness=true. Elapsed: 4.056259338s - Jun 12 21:27:32.470: INFO: Pod "tester" satisfied condition "running" - STEP: Deleting pre-stop pod 06/12/23 21:27:32.47 - Jun 12 21:27:37.527: INFO: Saw: { - "Hostname": "server", - "Sent": null, - "Received": { - "prestop": 1 - }, - "Errors": null, - "Log": [ - "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", - "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", - "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." - ], - "StillContactingPeers": true - } - STEP: Deleting the server pod 06/12/23 21:27:37.527 - [AfterEach] [sig-node] PreStop + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 + STEP: creating a collection of services 07/27/23 02:01:11.299 + Jul 27 02:01:11.299: INFO: Creating e2e-svc-a-mpw4j + Jul 27 02:01:11.360: INFO: Creating e2e-svc-b-tvsb8 + Jul 27 02:01:11.404: INFO: Creating e2e-svc-c-86lkq + STEP: deleting service collection 07/27/23 02:01:11.507 + Jul 27 02:01:11.628: INFO: Collection of services has been deleted + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:27:37.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] PreStop + Jul 27 02:01:11.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] PreStop + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] PreStop + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "prestop-4751" for this suite. 06/12/23 21:27:37.599 + STEP: Destroying namespace "services-9953" for this suite. 
07/27/23 02:01:11.639 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-storage] Downward API volume - should provide container's cpu limit [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:193 -[BeforeEach] [sig-storage] Downward API volume +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 +[BeforeEach] [sig-node] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:27:37.66 -Jun 12 21:27:37.660: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 21:27:37.662 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:27:37.813 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:27:37.857 -[BeforeEach] [sig-storage] Downward API volume +STEP: Creating a kubernetes client 07/27/23 02:01:11.664 +Jul 27 02:01:11.664: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 02:01:11.665 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:11.722 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:11.731 +[BeforeEach] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 -[It] should provide container's cpu limit [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:193 -STEP: Creating a pod to test downward API volume plugin 06/12/23 21:27:37.87 -Jun 12 21:27:37.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f" in namespace "downward-api-9628" to be "Succeeded or Failed" -Jun 12 21:27:37.899: INFO: Pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064188ms -Jun 12 21:27:39.924: INFO: Pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032813864s -Jun 12 21:27:41.911: INFO: Pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020435886s -Jun 12 21:27:43.910: INFO: Pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019113462s -STEP: Saw pod success 06/12/23 21:27:43.91 -Jun 12 21:27:43.911: INFO: Pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f" satisfied condition "Succeeded or Failed" -Jun 12 21:27:43.920: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f container client-container: -STEP: delete the pod 06/12/23 21:27:43.955 -Jun 12 21:27:43.978: INFO: Waiting for pod downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f to disappear -Jun 12 21:27:43.987: INFO: Pod downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f no longer exists -[AfterEach] [sig-storage] Downward API volume +[It] should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 +STEP: Creating configMap configmap-2253/configmap-test-d2999f46-724c-40ee-bb3d-3cf8b18a0d74 07/27/23 02:01:11.741 +STEP: Creating a pod to test consume configMaps 07/27/23 02:01:11.767 +Jul 27 02:01:11.796: INFO: Waiting up to 5m0s for pod "pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3" in namespace "configmap-2253" to be "Succeeded or Failed" +Jul 27 02:01:11.804: INFO: Pod "pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.948555ms +Jul 27 02:01:13.813: INFO: Pod "pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016937889s +Jul 27 02:01:15.814: INFO: Pod "pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018268333s +STEP: Saw pod success 07/27/23 02:01:15.814 +Jul 27 02:01:15.814: INFO: Pod "pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3" satisfied condition "Succeeded or Failed" +Jul 27 02:01:15.822: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3 container env-test: +STEP: delete the pod 07/27/23 02:01:15.844 +Jul 27 02:01:15.867: INFO: Waiting for pod pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3 to disappear +Jul 27 02:01:15.876: INFO: Pod pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3 no longer exists +[AfterEach] [sig-node] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 21:27:43.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Downward API volume +Jul 27 02:01:15.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-node] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-node] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-9628" for this suite. 06/12/23 21:27:44.006 +STEP: Destroying namespace "configmap-2253" for this suite. 
07/27/23 02:01:15.888 ------------------------------ -• [SLOW TEST] [6.368 seconds] -[sig-storage] Downward API volume -test/e2e/common/storage/framework.go:23 - should provide container's cpu limit [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:193 +• [4.258 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Downward API volume + [BeforeEach] [sig-node] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:27:37.66 - Jun 12 21:27:37.660: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 21:27:37.662 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:27:37.813 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:27:37.857 - [BeforeEach] [sig-storage] Downward API volume + STEP: Creating a kubernetes client 07/27/23 02:01:11.664 + Jul 27 02:01:11.664: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 02:01:11.665 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:11.722 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:11.731 + [BeforeEach] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 - [It] should provide container's cpu limit [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:193 - STEP: Creating a pod to test downward API volume plugin 06/12/23 21:27:37.87 - Jun 12 21:27:37.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f" in namespace "downward-api-9628" to be "Succeeded or Failed" - Jun 12 21:27:37.899: INFO: Pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064188ms - Jun 12 21:27:39.924: INFO: Pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032813864s - Jun 12 21:27:41.911: INFO: Pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020435886s - Jun 12 21:27:43.910: INFO: Pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019113462s - STEP: Saw pod success 06/12/23 21:27:43.91 - Jun 12 21:27:43.911: INFO: Pod "downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f" satisfied condition "Succeeded or Failed" - Jun 12 21:27:43.920: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f container client-container: - STEP: delete the pod 06/12/23 21:27:43.955 - Jun 12 21:27:43.978: INFO: Waiting for pod downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f to disappear - Jun 12 21:27:43.987: INFO: Pod downwardapi-volume-182d6fdf-e7ab-4400-92d3-f856ff0b230f no longer exists - [AfterEach] [sig-storage] Downward API volume + [It] should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 + STEP: Creating configMap configmap-2253/configmap-test-d2999f46-724c-40ee-bb3d-3cf8b18a0d74 07/27/23 02:01:11.741 + STEP: Creating a pod to test consume configMaps 07/27/23 02:01:11.767 + Jul 27 02:01:11.796: INFO: Waiting up to 5m0s for pod "pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3" in namespace "configmap-2253" to be "Succeeded or Failed" + Jul 27 02:01:11.804: INFO: Pod "pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.948555ms + Jul 27 02:01:13.813: INFO: Pod "pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016937889s + Jul 27 02:01:15.814: INFO: Pod "pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018268333s + STEP: Saw pod success 07/27/23 02:01:15.814 + Jul 27 02:01:15.814: INFO: Pod "pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3" satisfied condition "Succeeded or Failed" + Jul 27 02:01:15.822: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3 container env-test: + STEP: delete the pod 07/27/23 02:01:15.844 + Jul 27 02:01:15.867: INFO: Waiting for pod pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3 to disappear + Jul 27 02:01:15.876: INFO: Pod pod-configmaps-b4b2da77-7383-4723-a572-a9b5dc216eb3 no longer exists + [AfterEach] [sig-node] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 21:27:43.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Downward API volume + Jul 27 02:01:15.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-node] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-node] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-9628" for this suite. 06/12/23 21:27:44.006 + STEP: Destroying namespace "configmap-2253" for this suite. 
07/27/23 02:01:15.888 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSS ------------------------------ -[sig-cli] Kubectl client Update Demo - should create and stop a replication controller [Conformance] - test/e2e/kubectl/kubectl.go:339 +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1509 [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:27:44.038 -Jun 12 21:27:44.038: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 21:27:44.04 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:27:44.096 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:27:44.11 +STEP: Creating a kubernetes client 07/27/23 02:01:15.922 +Jul 27 02:01:15.922: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:01:15.923 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:15.962 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:15.973 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 -[BeforeEach] Update Demo - test/e2e/kubectl/kubectl.go:326 -[It] should create and stop a replication controller [Conformance] - test/e2e/kubectl/kubectl.go:339 -STEP: creating a replication controller 06/12/23 21:27:44.125 -Jun 12 21:27:44.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 create -f -' -Jun 12 21:27:48.164: INFO: stderr: "" -Jun 12 21:27:48.164: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" -STEP: waiting for all containers in name=update-demo pods to come up. 06/12/23 21:27:48.164 -Jun 12 21:27:48.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' -Jun 12 21:27:48.360: INFO: stderr: "" -Jun 12 21:27:48.360: INFO: stdout: "update-demo-nautilus-fvc6z update-demo-nautilus-s4vrg " -Jun 12 21:27:48.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods update-demo-nautilus-fvc6z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:27:48.801: INFO: stderr: "" -Jun 12 21:27:48.801: INFO: stdout: "" -Jun 12 21:27:48.801: INFO: update-demo-nautilus-fvc6z is created but not running -Jun 12 21:27:53.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' -Jun 12 21:27:54.177: INFO: stderr: "" -Jun 12 21:27:54.177: INFO: stdout: "update-demo-nautilus-fvc6z update-demo-nautilus-s4vrg " -Jun 12 21:27:54.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods update-demo-nautilus-fvc6z -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:27:54.499: INFO: stderr: "" -Jun 12 21:27:54.499: INFO: stdout: "true" -Jun 12 21:27:54.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods update-demo-nautilus-fvc6z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' -Jun 12 21:27:54.713: INFO: stderr: "" -Jun 12 21:27:54.713: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" -Jun 12 21:27:54.713: INFO: validating pod update-demo-nautilus-fvc6z -Jun 12 21:27:54.729: INFO: got data: { - "image": "nautilus.jpg" -} - -Jun 12 21:27:54.730: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . -Jun 12 21:27:54.730: INFO: update-demo-nautilus-fvc6z is verified up and running -Jun 12 21:27:54.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods update-demo-nautilus-s4vrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' -Jun 12 21:27:54.923: INFO: stderr: "" -Jun 12 21:27:54.923: INFO: stdout: "true" -Jun 12 21:27:54.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods update-demo-nautilus-s4vrg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' -Jun 12 21:27:55.172: INFO: stderr: "" -Jun 12 21:27:55.172: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" -Jun 12 21:27:55.172: INFO: validating pod update-demo-nautilus-s4vrg -Jun 12 21:27:55.189: INFO: got data: { - "image": "nautilus.jpg" -} - -Jun 12 21:27:55.189: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . -Jun 12 21:27:55.189: INFO: update-demo-nautilus-s4vrg is verified up and running -STEP: using delete to clean up resources 06/12/23 21:27:55.189 -Jun 12 21:27:55.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 delete --grace-period=0 --force -f -' -Jun 12 21:27:55.393: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" -Jun 12 21:27:55.393: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" -Jun 12 21:27:55.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get rc,svc -l name=update-demo --no-headers' -Jun 12 21:27:55.603: INFO: stderr: "No resources found in kubectl-9436 namespace.\n" -Jun 12 21:27:55.603: INFO: stdout: "" -Jun 12 21:27:55.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' -Jun 12 21:27:55.833: INFO: stderr: "" -Jun 12 21:27:55.833: INFO: stdout: "" +[BeforeEach] Kubectl label + test/e2e/kubectl/kubectl.go:1494 +STEP: creating the pod 07/27/23 02:01:15.982 +Jul 27 02:01:15.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 create -f -' +Jul 27 02:01:16.477: INFO: stderr: "" +Jul 27 02:01:16.477: INFO: stdout: "pod/pause created\n" +Jul 27 02:01:16.477: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Jul 27 02:01:16.477: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3479" to be "running and ready" +Jul 27 02:01:16.491: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.614168ms +Jul 27 02:01:16.491: INFO: Error evaluating pod condition running and ready: want pod 'pause' on '10.245.128.19' to be 'Running' but was 'Pending' +Jul 27 02:01:18.501: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.024046539s +Jul 27 02:01:18.501: INFO: Pod "pause" satisfied condition "running and ready" +Jul 27 02:01:18.501: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1509 +STEP: adding the label testing-label with value testing-label-value to a pod 07/27/23 02:01:18.501 +Jul 27 02:01:18.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 label pods pause testing-label=testing-label-value' +Jul 27 02:01:18.630: INFO: stderr: "" +Jul 27 02:01:18.630: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value 07/27/23 02:01:18.63 +Jul 27 02:01:18.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 get pod pause -L testing-label' +Jul 27 02:01:18.727: INFO: stderr: "" +Jul 27 02:01:18.727: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" +STEP: removing the label testing-label of a pod 07/27/23 02:01:18.727 +Jul 27 02:01:18.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 label pods pause testing-label-' +Jul 27 02:01:18.868: INFO: stderr: "" +Jul 27 02:01:18.868: INFO: stdout: "pod/pause unlabeled\n" +STEP: verifying the pod doesn't have the label testing-label 07/27/23 02:01:18.868 +Jul 27 02:01:18.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 get pod pause -L testing-label' +Jul 27 02:01:18.968: INFO: stderr: "" +Jul 27 02:01:18.968: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" +[AfterEach] Kubectl label + test/e2e/kubectl/kubectl.go:1500 +STEP: using delete to clean up resources 07/27/23 02:01:18.968 +Jul 27 02:01:18.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 delete --grace-period=0 --force -f -' +Jul 27 02:01:19.122: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jul 27 02:01:19.122: INFO: stdout: "pod \"pause\" force deleted\n" +Jul 27 02:01:19.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 get rc,svc -l name=pause --no-headers' +Jul 27 02:01:19.225: INFO: stderr: "No resources found in kubectl-3479 namespace.\n" +Jul 27 02:01:19.225: INFO: stdout: "" +Jul 27 02:01:19.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jul 27 02:01:19.299: INFO: stderr: "" +Jul 27 02:01:19.299: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 21:27:55.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:01:19.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-9436" for this suite. 06/12/23 21:27:55.862 +STEP: Destroying namespace "kubectl-3479" for this suite. 
07/27/23 02:01:19.334 ------------------------------ -• [SLOW TEST] [11.848 seconds] +• [3.440 seconds] [sig-cli] Kubectl client test/e2e/kubectl/framework.go:23 - Update Demo - test/e2e/kubectl/kubectl.go:324 - should create and stop a replication controller [Conformance] - test/e2e/kubectl/kubectl.go:339 + Kubectl label + test/e2e/kubectl/kubectl.go:1492 + should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1509 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:27:44.038 - Jun 12 21:27:44.038: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 21:27:44.04 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:27:44.096 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:27:44.11 + STEP: Creating a kubernetes client 07/27/23 02:01:15.922 + Jul 27 02:01:15.922: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:01:15.923 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:15.962 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:15.973 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 - [BeforeEach] Update Demo - test/e2e/kubectl/kubectl.go:326 - [It] should create and stop a replication controller [Conformance] - test/e2e/kubectl/kubectl.go:339 - STEP: creating a replication controller 06/12/23 21:27:44.125 - Jun 12 21:27:44.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 create -f -' - Jun 12 21:27:48.164: INFO: stderr: "" - Jun 12 21:27:48.164: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" - STEP: waiting for all containers in name=update-demo pods to come up. 06/12/23 21:27:48.164 - Jun 12 21:27:48.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' - Jun 12 21:27:48.360: INFO: stderr: "" - Jun 12 21:27:48.360: INFO: stdout: "update-demo-nautilus-fvc6z update-demo-nautilus-s4vrg " - Jun 12 21:27:48.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods update-demo-nautilus-fvc6z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:27:48.801: INFO: stderr: "" - Jun 12 21:27:48.801: INFO: stdout: "" - Jun 12 21:27:48.801: INFO: update-demo-nautilus-fvc6z is created but not running - Jun 12 21:27:53.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' - Jun 12 21:27:54.177: INFO: stderr: "" - Jun 12 21:27:54.177: INFO: stdout: "update-demo-nautilus-fvc6z update-demo-nautilus-s4vrg " - Jun 12 21:27:54.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods update-demo-nautilus-fvc6z -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:27:54.499: INFO: stderr: "" - Jun 12 21:27:54.499: INFO: stdout: "true" - Jun 12 21:27:54.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods update-demo-nautilus-fvc6z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' - Jun 12 21:27:54.713: INFO: stderr: "" - Jun 12 21:27:54.713: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" - Jun 12 21:27:54.713: INFO: validating pod update-demo-nautilus-fvc6z - Jun 12 21:27:54.729: INFO: got data: { - "image": "nautilus.jpg" - } - - Jun 12 21:27:54.730: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . - Jun 12 21:27:54.730: INFO: update-demo-nautilus-fvc6z is verified up and running - Jun 12 21:27:54.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods update-demo-nautilus-s4vrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' - Jun 12 21:27:54.923: INFO: stderr: "" - Jun 12 21:27:54.923: INFO: stdout: "true" - Jun 12 21:27:54.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods update-demo-nautilus-s4vrg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' - Jun 12 21:27:55.172: INFO: stderr: "" - Jun 12 21:27:55.172: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" - Jun 12 21:27:55.172: INFO: validating pod update-demo-nautilus-s4vrg - Jun 12 21:27:55.189: INFO: got data: { - "image": "nautilus.jpg" - } - - Jun 12 21:27:55.189: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . - Jun 12 21:27:55.189: INFO: update-demo-nautilus-s4vrg is verified up and running - STEP: using delete to clean up resources 06/12/23 21:27:55.189 - Jun 12 21:27:55.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 delete --grace-period=0 --force -f -' - Jun 12 21:27:55.393: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" - Jun 12 21:27:55.393: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" - Jun 12 21:27:55.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get rc,svc -l name=update-demo --no-headers' - Jun 12 21:27:55.603: INFO: stderr: "No resources found in kubectl-9436 namespace.\n" - Jun 12 21:27:55.603: INFO: stdout: "" - Jun 12 21:27:55.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9436 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' - Jun 12 21:27:55.833: INFO: stderr: "" - Jun 12 21:27:55.833: INFO: stdout: "" + [BeforeEach] Kubectl label + test/e2e/kubectl/kubectl.go:1494 + STEP: creating the pod 07/27/23 02:01:15.982 + Jul 27 02:01:15.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 create -f -' + Jul 27 02:01:16.477: INFO: stderr: "" + Jul 27 02:01:16.477: INFO: stdout: "pod/pause created\n" + Jul 27 02:01:16.477: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] + Jul 27 02:01:16.477: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3479" to be "running and ready" + Jul 27 02:01:16.491: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.614168ms + Jul 27 02:01:16.491: INFO: Error evaluating pod condition running and ready: want pod 'pause' on '10.245.128.19' to be 'Running' but was 'Pending' + Jul 27 02:01:18.501: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.024046539s + Jul 27 02:01:18.501: INFO: Pod "pause" satisfied condition "running and ready" + Jul 27 02:01:18.501: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] + [It] should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1509 + STEP: adding the label testing-label with value testing-label-value to a pod 07/27/23 02:01:18.501 + Jul 27 02:01:18.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 label pods pause testing-label=testing-label-value' + Jul 27 02:01:18.630: INFO: stderr: "" + Jul 27 02:01:18.630: INFO: stdout: "pod/pause labeled\n" + STEP: verifying the pod has the label testing-label with the value testing-label-value 07/27/23 02:01:18.63 + Jul 27 02:01:18.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 get pod pause -L testing-label' + Jul 27 02:01:18.727: INFO: stderr: "" + Jul 27 02:01:18.727: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" + STEP: removing the label testing-label of a pod 07/27/23 02:01:18.727 + Jul 27 02:01:18.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 label pods pause testing-label-' + Jul 27 02:01:18.868: INFO: stderr: "" + Jul 27 02:01:18.868: INFO: stdout: "pod/pause unlabeled\n" + STEP: verifying the pod doesn't have the label testing-label 07/27/23 02:01:18.868 + Jul 27 02:01:18.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 get pod pause -L testing-label' + Jul 27 02:01:18.968: INFO: stderr: "" + Jul 27 02:01:18.968: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" + [AfterEach] Kubectl label + test/e2e/kubectl/kubectl.go:1500 + STEP: using delete to clean up resources 07/27/23 02:01:18.968 + Jul 27 02:01:18.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 delete --grace-period=0 --force -f -' + Jul 27 02:01:19.122: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jul 27 02:01:19.122: INFO: stdout: "pod \"pause\" force deleted\n" + Jul 27 02:01:19.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 get rc,svc -l name=pause --no-headers' + Jul 27 02:01:19.225: INFO: stderr: "No resources found in kubectl-3479 namespace.\n" + Jul 27 02:01:19.225: INFO: stdout: "" + Jul 27 02:01:19.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-3479 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' + Jul 27 02:01:19.299: INFO: stderr: "" + Jul 27 02:01:19.299: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 21:27:55.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:01:19.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-9436" for this suite. 06/12/23 21:27:55.862 + STEP: Destroying namespace "kubectl-3479" for this suite. 
07/27/23 02:01:19.334 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] Networking Granular Checks: Pods - should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:105 -[BeforeEach] [sig-network] Networking +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 +[BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:27:55.887 -Jun 12 21:27:55.888: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pod-network-test 06/12/23 21:27:55.89 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:27:56.023 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:27:56.048 -[BeforeEach] [sig-network] Networking +STEP: Creating a kubernetes client 07/27/23 02:01:19.363 +Jul 27 02:01:19.363: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 02:01:19.365 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:19.414 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:19.425 +[BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 -[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:105 -STEP: Performing setup for networking test in namespace pod-network-test-8543 06/12/23 21:27:56.074 -STEP: creating a selector 06/12/23 21:27:56.075 -STEP: Creating the service pods in kubernetes 06/12/23 21:27:56.076 -Jun 12 21:27:56.076: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable -Jun 12 21:27:56.160: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-8543" to be "running and ready" -Jun 12 21:27:56.173: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.43111ms -Jun 12 21:27:56.173: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:27:58.198: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03856325s -Jun 12 21:27:58.198: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:28:00.212: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.051992067s -Jun 12 21:28:00.212: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:28:02.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.024392299s -Jun 12 21:28:02.184: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:28:04.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.023986442s -Jun 12 21:28:04.184: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:28:06.183: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.023769515s -Jun 12 21:28:06.183: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:28:08.183: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.023428617s -Jun 12 21:28:08.183: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:28:10.183: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.023534722s -Jun 12 21:28:10.183: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:28:12.183: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.023433193s -Jun 12 21:28:12.183: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:28:14.182: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.022630089s -Jun 12 21:28:14.182: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:28:16.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.024326102s -Jun 12 21:28:16.184: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:28:18.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.024027402s -Jun 12 21:28:18.184: INFO: The phase of Pod netserver-0 is Running (Ready = true) -Jun 12 21:28:18.184: INFO: Pod "netserver-0" satisfied condition "running and ready" -Jun 12 21:28:18.192: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-8543" to be "running and ready" -Jun 12 21:28:18.200: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 7.671912ms -Jun 12 21:28:18.200: INFO: The phase of Pod netserver-1 is Running (Ready = true) -Jun 12 21:28:18.200: INFO: Pod "netserver-1" satisfied condition "running and ready" -Jun 12 21:28:18.209: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-8543" to be "running and ready" -Jun 12 21:28:18.217: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 7.739559ms -Jun 12 21:28:18.217: INFO: The phase of Pod netserver-2 is Running (Ready = true) -Jun 12 21:28:18.217: INFO: Pod "netserver-2" satisfied condition "running and ready" -STEP: Creating test pods 06/12/23 21:28:18.225 -Jun 12 21:28:18.253: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-8543" to be "running" -Jun 12 21:28:18.262: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.557929ms -Jun 12 21:28:20.272: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018639957s -Jun 12 21:28:22.273: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.019565551s -Jun 12 21:28:22.273: INFO: Pod "test-container-pod" satisfied condition "running" -Jun 12 21:28:22.282: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-8543" to be "running" -Jun 12 21:28:22.291: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 9.242733ms -Jun 12 21:28:22.292: INFO: Pod "host-test-container-pod" satisfied condition "running" -Jun 12 21:28:22.305: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 -Jun 12 21:28:22.305: INFO: Going to poll 172.30.161.83 on port 8083 at least 0 times, with a maximum of 39 tries before failing -Jun 12 21:28:22.313: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.30.161.83:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8543 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:28:22.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:28:22.315: INFO: ExecWithOptions: Clientset creation -Jun 12 21:28:22.315: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8543/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.30.161.83%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) -Jun 12 21:28:22.896: INFO: Found all 1 expected endpoints: [netserver-0] -Jun 12 21:28:22.897: INFO: Going to poll 172.30.185.126 on port 8083 at least 0 times, with a maximum of 39 tries before failing -Jun 12 21:28:22.909: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.30.185.126:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8543 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:28:22.909: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:28:22.911: INFO: ExecWithOptions: Clientset creation -Jun 12 21:28:22.911: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8543/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.30.185.126%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) -Jun 12 21:28:23.328: INFO: Found all 1 expected endpoints: [netserver-1] -Jun 12 21:28:23.328: INFO: Going to poll 172.30.224.18 on port 8083 at least 0 times, with a maximum of 39 tries before failing -Jun 12 21:28:23.337: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.30.224.18:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8543 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:28:23.337: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:28:23.338: INFO: ExecWithOptions: Clientset creation -Jun 12 21:28:23.339: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8543/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.30.224.18%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) -Jun 12 21:28:23.668: INFO: Found all 1 expected endpoints: [netserver-2] -[AfterEach] [sig-network] Networking +[It] should be consumable from pods in 
volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 +STEP: Creating secret with name secret-test-map-f8277969-646c-46d1-b8e0-177498ca7c85 07/27/23 02:01:19.438 +STEP: Creating a pod to test consume secrets 07/27/23 02:01:19.452 +Jul 27 02:01:19.480: INFO: Waiting up to 5m0s for pod "pod-secrets-364c2524-f169-4e79-98db-75e21af58e67" in namespace "secrets-4816" to be "Succeeded or Failed" +Jul 27 02:01:19.490: INFO: Pod "pod-secrets-364c2524-f169-4e79-98db-75e21af58e67": Phase="Pending", Reason="", readiness=false. Elapsed: 10.240457ms +Jul 27 02:01:21.500: INFO: Pod "pod-secrets-364c2524-f169-4e79-98db-75e21af58e67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019680474s +Jul 27 02:01:23.516: INFO: Pod "pod-secrets-364c2524-f169-4e79-98db-75e21af58e67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035979561s +STEP: Saw pod success 07/27/23 02:01:23.516 +Jul 27 02:01:23.516: INFO: Pod "pod-secrets-364c2524-f169-4e79-98db-75e21af58e67" satisfied condition "Succeeded or Failed" +Jul 27 02:01:23.526: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-364c2524-f169-4e79-98db-75e21af58e67 container secret-volume-test: +STEP: delete the pod 07/27/23 02:01:23.547 +Jul 27 02:01:23.567: INFO: Waiting for pod pod-secrets-364c2524-f169-4e79-98db-75e21af58e67 to disappear +Jul 27 02:01:23.575: INFO: Pod pod-secrets-364c2524-f169-4e79-98db-75e21af58e67 no longer exists +[AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 21:28:23.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Networking +Jul 27 02:01:23.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Networking +[DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Networking +[DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "pod-network-test-8543" for this suite. 06/12/23 21:28:23.69 +STEP: Destroying namespace "secrets-4816" for this suite. 
07/27/23 02:01:23.587 ------------------------------ -• [SLOW TEST] [27.829 seconds] -[sig-network] Networking -test/e2e/common/network/framework.go:23 - Granular Checks: Pods - test/e2e/common/network/networking.go:32 - should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:105 +• [4.250 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Networking + [BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:27:55.887 - Jun 12 21:27:55.888: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pod-network-test 06/12/23 21:27:55.89 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:27:56.023 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:27:56.048 - [BeforeEach] [sig-network] Networking + STEP: Creating a kubernetes client 07/27/23 02:01:19.363 + Jul 27 02:01:19.363: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 02:01:19.365 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:19.414 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:19.425 + [BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 - [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:105 - STEP: Performing setup for networking test in namespace pod-network-test-8543 06/12/23 21:27:56.074 - STEP: creating a selector 06/12/23 21:27:56.075 - STEP: Creating the service pods in kubernetes 06/12/23 21:27:56.076 - Jun 12 21:27:56.076: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable - Jun 12 21:27:56.160: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-8543" to be "running and ready" - Jun 12 21:27:56.173: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.43111ms - Jun 12 21:27:56.173: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:27:58.198: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03856325s - Jun 12 21:27:58.198: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:28:00.212: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.051992067s - Jun 12 21:28:00.212: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:28:02.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.024392299s - Jun 12 21:28:02.184: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:28:04.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.023986442s - Jun 12 21:28:04.184: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:28:06.183: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.023769515s - Jun 12 21:28:06.183: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:28:08.183: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.023428617s - Jun 12 21:28:08.183: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:28:10.183: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.023534722s - Jun 12 21:28:10.183: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:28:12.183: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.023433193s - Jun 12 21:28:12.183: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:28:14.182: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.022630089s - Jun 12 21:28:14.182: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:28:16.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.024326102s - Jun 12 21:28:16.184: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:28:18.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.024027402s - Jun 12 21:28:18.184: INFO: The phase of Pod netserver-0 is Running (Ready = true) - Jun 12 21:28:18.184: INFO: Pod "netserver-0" satisfied condition "running and ready" - Jun 12 21:28:18.192: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-8543" to be "running and ready" - Jun 12 21:28:18.200: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 7.671912ms - Jun 12 21:28:18.200: INFO: The phase of Pod netserver-1 is Running (Ready = true) - Jun 12 21:28:18.200: INFO: Pod "netserver-1" satisfied condition "running and ready" - Jun 12 21:28:18.209: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-8543" to be "running and ready" - Jun 12 21:28:18.217: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 7.739559ms - Jun 12 21:28:18.217: INFO: The phase of Pod netserver-2 is Running (Ready = true) - Jun 12 21:28:18.217: INFO: Pod "netserver-2" satisfied condition "running and ready" - STEP: Creating test pods 06/12/23 21:28:18.225 - Jun 12 21:28:18.253: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-8543" to be "running" - Jun 12 21:28:18.262: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.557929ms - Jun 12 21:28:20.272: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018639957s - Jun 12 21:28:22.273: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.019565551s - Jun 12 21:28:22.273: INFO: Pod "test-container-pod" satisfied condition "running" - Jun 12 21:28:22.282: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-8543" to be "running" - Jun 12 21:28:22.291: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 9.242733ms - Jun 12 21:28:22.292: INFO: Pod "host-test-container-pod" satisfied condition "running" - Jun 12 21:28:22.305: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 - Jun 12 21:28:22.305: INFO: Going to poll 172.30.161.83 on port 8083 at least 0 times, with a maximum of 39 tries before failing - Jun 12 21:28:22.313: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.30.161.83:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8543 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:28:22.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:28:22.315: INFO: ExecWithOptions: Clientset creation - Jun 12 21:28:22.315: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8543/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.30.161.83%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) - Jun 12 21:28:22.896: INFO: Found all 1 expected endpoints: [netserver-0] - Jun 12 21:28:22.897: INFO: Going to poll 172.30.185.126 on port 8083 at least 0 times, with a maximum of 39 tries before failing - Jun 12 21:28:22.909: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.30.185.126:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8543 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:28:22.909: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:28:22.911: INFO: ExecWithOptions: Clientset creation - Jun 12 21:28:22.911: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8543/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.30.185.126%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) - Jun 12 21:28:23.328: INFO: Found all 1 expected endpoints: [netserver-1] - Jun 12 21:28:23.328: INFO: Going to poll 172.30.224.18 on port 8083 at least 0 times, with a maximum of 39 tries before failing - Jun 12 21:28:23.337: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.30.224.18:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8543 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:28:23.337: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:28:23.338: INFO: ExecWithOptions: Clientset creation - Jun 12 21:28:23.339: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-8543/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.30.224.18%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) - Jun 12 21:28:23.668: INFO: Found all 1 expected endpoints: [netserver-2] - [AfterEach] [sig-network] Networking + [It] should be 
consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 + STEP: Creating secret with name secret-test-map-f8277969-646c-46d1-b8e0-177498ca7c85 07/27/23 02:01:19.438 + STEP: Creating a pod to test consume secrets 07/27/23 02:01:19.452 + Jul 27 02:01:19.480: INFO: Waiting up to 5m0s for pod "pod-secrets-364c2524-f169-4e79-98db-75e21af58e67" in namespace "secrets-4816" to be "Succeeded or Failed" + Jul 27 02:01:19.490: INFO: Pod "pod-secrets-364c2524-f169-4e79-98db-75e21af58e67": Phase="Pending", Reason="", readiness=false. Elapsed: 10.240457ms + Jul 27 02:01:21.500: INFO: Pod "pod-secrets-364c2524-f169-4e79-98db-75e21af58e67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019680474s + Jul 27 02:01:23.516: INFO: Pod "pod-secrets-364c2524-f169-4e79-98db-75e21af58e67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035979561s + STEP: Saw pod success 07/27/23 02:01:23.516 + Jul 27 02:01:23.516: INFO: Pod "pod-secrets-364c2524-f169-4e79-98db-75e21af58e67" satisfied condition "Succeeded or Failed" + Jul 27 02:01:23.526: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-364c2524-f169-4e79-98db-75e21af58e67 container secret-volume-test: + STEP: delete the pod 07/27/23 02:01:23.547 + Jul 27 02:01:23.567: INFO: Waiting for pod pod-secrets-364c2524-f169-4e79-98db-75e21af58e67 to disappear + Jul 27 02:01:23.575: INFO: Pod pod-secrets-364c2524-f169-4e79-98db-75e21af58e67 no longer exists + [AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 21:28:23.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Networking + Jul 27 02:01:23.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Networking + [DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Networking + [DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "pod-network-test-8543" for this suite. 06/12/23 21:28:23.69 + STEP: Destroying namespace "secrets-4816" for this suite. 
07/27/23 02:01:23.587 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Pods - should patch a pod status [Conformance] - test/e2e/common/node/pods.go:1083 -[BeforeEach] [sig-node] Pods +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1787 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:28:23.718 -Jun 12 21:28:23.719: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 21:28:23.721 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:28:23.806 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:28:23.82 -[BeforeEach] [sig-node] Pods +STEP: Creating a kubernetes client 07/27/23 02:01:23.614 +Jul 27 02:01:23.614: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:01:23.615 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:23.659 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:23.668 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 -[It] should patch a pod status [Conformance] - test/e2e/common/node/pods.go:1083 -STEP: Create a pod 06/12/23 21:28:23.834 -Jun 12 21:28:23.864: INFO: Waiting up to 5m0s for pod "pod-jmtx7" in namespace "pods-8190" to be "running" -Jun 12 21:28:23.902: INFO: Pod "pod-jmtx7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.277067ms -Jun 12 21:28:25.911: INFO: Pod "pod-jmtx7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047490068s -Jun 12 21:28:27.923: INFO: Pod "pod-jmtx7": Phase="Running", Reason="", readiness=true. Elapsed: 4.058850732s -Jun 12 21:28:27.923: INFO: Pod "pod-jmtx7" satisfied condition "running" -STEP: patching /status 06/12/23 21:28:27.923 -Jun 12 21:28:27.945: INFO: Status Message: "Patched by e2e test" and Reason: "E2E" -[AfterEach] [sig-node] Pods +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1787 +STEP: starting the proxy server 07/27/23 02:01:23.718 +Jul 27 02:01:23.718: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-6686 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output 07/27/23 02:01:23.765 +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 21:28:27.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Pods - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Pods - tear down framework | framework.go:193 -STEP: Destroying namespace "pods-8190" for this suite. 
06/12/23 21:28:27.965 ------------------------------- -• [4.335 seconds] -[sig-node] Pods -test/e2e/common/node/framework.go:23 - should patch a pod status [Conformance] - test/e2e/common/node/pods.go:1083 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Pods - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:28:23.718 - Jun 12 21:28:23.719: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 21:28:23.721 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:28:23.806 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:28:23.82 - [BeforeEach] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 - [It] should patch a pod status [Conformance] - test/e2e/common/node/pods.go:1083 - STEP: Create a pod 06/12/23 21:28:23.834 - Jun 12 21:28:23.864: INFO: Waiting up to 5m0s for pod "pod-jmtx7" in namespace "pods-8190" to be "running" - Jun 12 21:28:23.902: INFO: Pod "pod-jmtx7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.277067ms - Jun 12 21:28:25.911: INFO: Pod "pod-jmtx7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047490068s - Jun 12 21:28:27.923: INFO: Pod "pod-jmtx7": Phase="Running", Reason="", readiness=true. Elapsed: 4.058850732s - Jun 12 21:28:27.923: INFO: Pod "pod-jmtx7" satisfied condition "running" - STEP: patching /status 06/12/23 21:28:27.923 - Jun 12 21:28:27.945: INFO: Status Message: "Patched by e2e test" and Reason: "E2E" - [AfterEach] [sig-node] Pods - test/e2e/framework/node/init/init.go:32 - Jun 12 21:28:27.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Pods - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Pods - tear down framework | framework.go:193 - STEP: Destroying namespace "pods-8190" for this suite. 
06/12/23 21:28:27.965 - << End Captured GinkgoWriter Output ------------------------------- -[sig-cli] Kubectl client Kubectl replace - should update a single-container pod's image [Conformance] - test/e2e/kubectl/kubectl.go:1747 -[BeforeEach] [sig-cli] Kubectl client - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:28:28.054 -Jun 12 21:28:28.055: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 21:28:28.057 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:28:28.114 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:28:28.126 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[BeforeEach] Kubectl replace - test/e2e/kubectl/kubectl.go:1734 -[It] should update a single-container pod's image [Conformance] - test/e2e/kubectl/kubectl.go:1747 -STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 06/12/23 21:28:28.139 -Jun 12 21:28:28.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-883 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' -Jun 12 21:28:28.381: INFO: stderr: "" -Jun 12 21:28:28.382: INFO: stdout: "pod/e2e-test-httpd-pod created\n" -STEP: verifying the pod e2e-test-httpd-pod is running 06/12/23 21:28:28.382 -STEP: verifying the pod e2e-test-httpd-pod was created 06/12/23 21:28:33.438 -Jun 12 21:28:33.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-883 get pod e2e-test-httpd-pod -o json' -Jun 12 21:28:34.633: INFO: stderr: "" -Jun 12 21:28:34.633: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"875c14b199a7af097d9e0b2a005ec2f52a0215264e60b72d50cfa7c76aedefbf\",\n \"cni.projectcalico.org/podIP\": \"172.30.185.111/32\",\n \"cni.projectcalico.org/podIPs\": \"172.30.185.111/32\",\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"k8s-pod-network\\\",\\n \\\"ips\\\": [\\n \\\"172.30.185.111\\\"\\n ],\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"openshift.io/scc\": \"anyuid\"\n },\n \"creationTimestamp\": \"2023-06-12T21:28:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-883\",\n \"resourceVersion\": \"103388\",\n \"uid\": \"f2628593-e783-4e51-9096-5faaa5f20140\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"MKNOD\"\n ]\n }\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-n74xv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"imagePullSecrets\": [\n {\n \"name\": \"default-dockercfg-nmfp7\"\n }\n ],\n \"nodeName\": \"10.138.75.116\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n 
\"schedulerName\": \"default-scheduler\",\n \"securityContext\": {\n \"seLinuxOptions\": {\n \"level\": \"s0:c49,c9\"\n }\n },\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-n74xv\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"service-ca.crt\",\n \"path\": \"service-ca.crt\"\n }\n ],\n \"name\": \"openshift-service-ca.crt\"\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-06-12T21:28:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-06-12T21:28:32Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-06-12T21:28:32Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-06-12T21:28:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"cri-o://738101dd6317f9f81c3a1d3e3cb1fe60f82c5fcbc2c4289d5bccb36fcf9d0c0d\",\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imageID\": \"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-06-12T21:28:30Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.138.75.116\",\n \"phase\": \"Running\",\n \"podIP\": \"172.30.185.111\",\n \"podIPs\": [\n {\n \"ip\": \"172.30.185.111\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-06-12T21:28:28Z\"\n }\n}\n" -STEP: replace the image in the pod 06/12/23 21:28:34.633 -Jun 12 21:28:34.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-883 replace -f -' -Jun 12 21:28:36.373: INFO: stderr: "" -Jun 12 21:28:36.373: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" -STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/busybox:1.29-4 06/12/23 21:28:36.373 -[AfterEach] Kubectl replace - test/e2e/kubectl/kubectl.go:1738 -Jun 12 21:28:36.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-883 delete pods e2e-test-httpd-pod' -Jun 12 21:28:49.383: INFO: stderr: "" -Jun 12 21:28:49.383: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" -[AfterEach] [sig-cli] Kubectl client - test/e2e/framework/node/init/init.go:32 -Jun 12 21:28:49.383: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:01:23.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-883" for this suite. 06/12/23 21:28:49.4 +STEP: Destroying namespace "kubectl-6686" for this suite. 07/27/23 02:01:23.795 ------------------------------ -• [SLOW TEST] [21.369 seconds] +• [0.206 seconds] [sig-cli] Kubectl client test/e2e/kubectl/framework.go:23 - Kubectl replace - test/e2e/kubectl/kubectl.go:1731 - should update a single-container pod's image [Conformance] - test/e2e/kubectl/kubectl.go:1747 + Proxy server + test/e2e/kubectl/kubectl.go:1780 + should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1787 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:28:28.054 - Jun 12 21:28:28.055: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 21:28:28.057 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:28:28.114 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:28:28.126 + STEP: Creating a kubernetes client 07/27/23 02:01:23.614 + Jul 27 02:01:23.614: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:01:23.615 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:23.659 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:23.668 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 - [BeforeEach] Kubectl replace - test/e2e/kubectl/kubectl.go:1734 - [It] should update a single-container pod's image [Conformance] - test/e2e/kubectl/kubectl.go:1747 - STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 06/12/23 21:28:28.139 - Jun 12 21:28:28.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-883 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' - Jun 12 21:28:28.381: INFO: stderr: "" - Jun 12 21:28:28.382: INFO: stdout: "pod/e2e-test-httpd-pod created\n" - STEP: verifying the pod e2e-test-httpd-pod is running 06/12/23 21:28:28.382 - STEP: verifying the pod e2e-test-httpd-pod was created 06/12/23 21:28:33.438 - Jun 12 21:28:33.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-883 get pod e2e-test-httpd-pod -o json' - Jun 12 21:28:34.633: INFO: stderr: "" - Jun 12 21:28:34.633: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"875c14b199a7af097d9e0b2a005ec2f52a0215264e60b72d50cfa7c76aedefbf\",\n \"cni.projectcalico.org/podIP\": \"172.30.185.111/32\",\n \"cni.projectcalico.org/podIPs\": \"172.30.185.111/32\",\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"k8s-pod-network\\\",\\n \\\"ips\\\": [\\n \\\"172.30.185.111\\\"\\n ],\\n \\\"default\\\": 
true,\\n \\\"dns\\\": {}\\n}]\",\n \"openshift.io/scc\": \"anyuid\"\n },\n \"creationTimestamp\": \"2023-06-12T21:28:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-883\",\n \"resourceVersion\": \"103388\",\n \"uid\": \"f2628593-e783-4e51-9096-5faaa5f20140\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"MKNOD\"\n ]\n }\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-n74xv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"imagePullSecrets\": [\n {\n \"name\": \"default-dockercfg-nmfp7\"\n }\n ],\n \"nodeName\": \"10.138.75.116\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {\n \"seLinuxOptions\": {\n \"level\": \"s0:c49,c9\"\n }\n },\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-n74xv\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"service-ca.crt\",\n \"path\": \"service-ca.crt\"\n }\n ],\n \"name\": \"openshift-service-ca.crt\"\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-06-12T21:28:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-06-12T21:28:32Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-06-12T21:28:32Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-06-12T21:28:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"cri-o://738101dd6317f9f81c3a1d3e3cb1fe60f82c5fcbc2c4289d5bccb36fcf9d0c0d\",\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imageID\": \"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": 
\"2023-06-12T21:28:30Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.138.75.116\",\n \"phase\": \"Running\",\n \"podIP\": \"172.30.185.111\",\n \"podIPs\": [\n {\n \"ip\": \"172.30.185.111\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-06-12T21:28:28Z\"\n }\n}\n" - STEP: replace the image in the pod 06/12/23 21:28:34.633 - Jun 12 21:28:34.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-883 replace -f -' - Jun 12 21:28:36.373: INFO: stderr: "" - Jun 12 21:28:36.373: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" - STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/busybox:1.29-4 06/12/23 21:28:36.373 - [AfterEach] Kubectl replace - test/e2e/kubectl/kubectl.go:1738 - Jun 12 21:28:36.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-883 delete pods e2e-test-httpd-pod' - Jun 12 21:28:49.383: INFO: stderr: "" - Jun 12 21:28:49.383: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [It] should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1787 + STEP: starting the proxy server 07/27/23 02:01:23.718 + Jul 27 02:01:23.718: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-6686 proxy -p 0 --disable-filter' + STEP: curling proxy /api/ output 07/27/23 02:01:23.765 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 21:28:49.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:01:23.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-883" for this suite. 06/12/23 21:28:49.4 + STEP: Destroying namespace "kubectl-6686" for this suite. 
07/27/23 02:01:23.795 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] Services - should be able to change the type from NodePort to ExternalName [Conformance] - test/e2e/network/service.go:1557 -[BeforeEach] [sig-network] Services +[sig-apps] ControllerRevision [Serial] + should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 +[BeforeEach] [sig-apps] ControllerRevision [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:28:49.424 -Jun 12 21:28:49.425: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:28:49.426 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:28:49.483 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:28:49.499 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 02:01:23.822 +Jul 27 02:01:23.822: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename controllerrevisions 07/27/23 02:01:23.822 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:23.861 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:23.87 +[BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should be able to change the type from NodePort to ExternalName [Conformance] - test/e2e/network/service.go:1557 -STEP: creating a service nodeport-service with the type=NodePort in namespace services-2341 06/12/23 21:28:49.516 -STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 06/12/23 21:28:49.587 -STEP: creating service externalsvc in namespace services-2341 06/12/23 21:28:49.587 -STEP: creating replication controller externalsvc in namespace services-2341 06/12/23 21:28:49.63 -I0612 21:28:49.651370 23 runners.go:193] Created replication controller with name: externalsvc, namespace: services-2341, replica count: 2 -I0612 21:28:52.703366 23 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -STEP: changing the NodePort service to type=ExternalName 06/12/23 21:28:52.715 -Jun 12 21:28:52.774: INFO: Creating new exec pod -Jun 12 21:28:52.795: INFO: Waiting up to 5m0s for pod "execpodn284r" in namespace "services-2341" to be "running" -Jun 12 21:28:52.804: INFO: Pod "execpodn284r": Phase="Pending", Reason="", readiness=false. Elapsed: 9.319019ms -Jun 12 21:28:54.817: INFO: Pod "execpodn284r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021836488s -Jun 12 21:28:56.815: INFO: Pod "execpodn284r": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.019956513s -Jun 12 21:28:56.815: INFO: Pod "execpodn284r" satisfied condition "running" -Jun 12 21:28:56.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-2341 exec execpodn284r -- /bin/sh -x -c nslookup nodeport-service.services-2341.svc.cluster.local' -Jun 12 21:28:57.674: INFO: stderr: "+ nslookup nodeport-service.services-2341.svc.cluster.local\n" -Jun 12 21:28:57.674: INFO: stdout: "Server:\t\t172.21.0.10\nAddress:\t172.21.0.10#53\n\nnodeport-service.services-2341.svc.cluster.local\tcanonical name = externalsvc.services-2341.svc.cluster.local.\nName:\texternalsvc.services-2341.svc.cluster.local\nAddress: 172.21.178.120\n\n" -STEP: deleting ReplicationController externalsvc in namespace services-2341, will wait for the garbage collector to delete the pods 06/12/23 21:28:57.674 -Jun 12 21:28:57.764: INFO: Deleting ReplicationController externalsvc took: 26.692893ms -Jun 12 21:28:57.865: INFO: Terminating ReplicationController externalsvc pods took: 100.687522ms -Jun 12 21:29:01.927: INFO: Cleaning up the NodePort to ExternalName test service -[AfterEach] [sig-network] Services +[BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:93 +[It] should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 +STEP: Creating DaemonSet "e2e-5drv9-daemon-set" 07/27/23 02:01:23.967 +STEP: Check that daemon pods launch on every node of the cluster. 07/27/23 02:01:23.98 +Jul 27 02:01:24.010: INFO: Number of nodes with available pods controlled by daemonset e2e-5drv9-daemon-set: 0 +Jul 27 02:01:24.010: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 02:01:25.033: INFO: Number of nodes with available pods controlled by daemonset e2e-5drv9-daemon-set: 0 +Jul 27 02:01:25.033: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 02:01:26.032: INFO: Number of nodes with available pods controlled by daemonset e2e-5drv9-daemon-set: 2 +Jul 27 02:01:26.032: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 02:01:27.033: INFO: Number of nodes with available pods controlled by daemonset e2e-5drv9-daemon-set: 3 +Jul 27 02:01:27.033: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset e2e-5drv9-daemon-set +STEP: Confirm DaemonSet "e2e-5drv9-daemon-set" successfully created with "daemonset-name=e2e-5drv9-daemon-set" label 07/27/23 02:01:27.043 +STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-5drv9-daemon-set" 07/27/23 02:01:27.063 +Jul 27 02:01:27.078: INFO: Located ControllerRevision: "e2e-5drv9-daemon-set-5b966db8d9" +STEP: Patching ControllerRevision "e2e-5drv9-daemon-set-5b966db8d9" 07/27/23 02:01:27.087 +Jul 27 02:01:27.105: INFO: e2e-5drv9-daemon-set-5b966db8d9 has been patched +STEP: Create a new ControllerRevision 07/27/23 02:01:27.105 +Jul 27 02:01:27.121: INFO: Created ControllerRevision: e2e-5drv9-daemon-set-7c97fd7bf7 +STEP: Confirm that there are two ControllerRevisions 07/27/23 02:01:27.121 +Jul 27 02:01:27.121: INFO: Requesting list of ControllerRevisions to confirm quantity +Jul 27 02:01:27.131: INFO: Found 2 ControllerRevisions +STEP: Deleting ControllerRevision "e2e-5drv9-daemon-set-5b966db8d9" 07/27/23 02:01:27.131 +STEP: Confirm that there is only one ControllerRevision 07/27/23 02:01:27.148 +Jul 27 02:01:27.148: INFO: Requesting list of ControllerRevisions to confirm quantity +Jul 27 02:01:27.157: INFO: Found 1 ControllerRevisions +STEP: Updating 
ControllerRevision "e2e-5drv9-daemon-set-7c97fd7bf7" 07/27/23 02:01:27.167 +Jul 27 02:01:27.194: INFO: e2e-5drv9-daemon-set-7c97fd7bf7 has been updated +STEP: Generate another ControllerRevision by patching the Daemonset 07/27/23 02:01:27.194 +W0727 02:01:27.207885 20 warnings.go:70] unknown field "updateStrategy" +STEP: Confirm that there are two ControllerRevisions 07/27/23 02:01:27.207 +Jul 27 02:01:27.208: INFO: Requesting list of ControllerRevisions to confirm quantity +Jul 27 02:01:28.218: INFO: Requesting list of ControllerRevisions to confirm quantity +Jul 27 02:01:28.230: INFO: Found 2 ControllerRevisions +STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-5drv9-daemon-set-7c97fd7bf7=updated" 07/27/23 02:01:28.23 +STEP: Confirm that there is only one ControllerRevision 07/27/23 02:01:28.252 +Jul 27 02:01:28.253: INFO: Requesting list of ControllerRevisions to confirm quantity +Jul 27 02:01:28.263: INFO: Found 1 ControllerRevisions +Jul 27 02:01:28.274: INFO: ControllerRevision "e2e-5drv9-daemon-set-5fc594b696" has revision 3 +[AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:58 +STEP: Deleting DaemonSet "e2e-5drv9-daemon-set" 07/27/23 02:01:28.283 +STEP: deleting DaemonSet.extensions e2e-5drv9-daemon-set in namespace controllerrevisions-3024, will wait for the garbage collector to delete the pods 07/27/23 02:01:28.283 +Jul 27 02:01:28.358: INFO: Deleting DaemonSet.extensions e2e-5drv9-daemon-set took: 14.626713ms +Jul 27 02:01:28.459: INFO: Terminating DaemonSet.extensions e2e-5drv9-daemon-set pods took: 100.985556ms +Jul 27 02:01:30.168: INFO: Number of nodes with available pods controlled by daemonset e2e-5drv9-daemon-set: 0 +Jul 27 02:01:30.168: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-5drv9-daemon-set +Jul 27 02:01:30.176: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"87886"},"items":null} + +Jul 27 02:01:30.184: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"87886"},"items":null} + +[AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 21:29:01.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 02:01:30.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "services-2341" for this suite. 06/12/23 21:29:02.014 +STEP: Destroying namespace "controllerrevisions-3024" for this suite. 
07/27/23 02:01:30.233 ------------------------------ -• [SLOW TEST] [12.623 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should be able to change the type from NodePort to ExternalName [Conformance] - test/e2e/network/service.go:1557 +• [SLOW TEST] [6.443 seconds] +[sig-apps] ControllerRevision [Serial] +test/e2e/apps/framework.go:23 + should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-apps] ControllerRevision [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:28:49.424 - Jun 12 21:28:49.425: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:28:49.426 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:28:49.483 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:28:49.499 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 02:01:23.822 + Jul 27 02:01:23.822: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename controllerrevisions 07/27/23 02:01:23.822 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:23.861 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:23.87 + [BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should be able to change the type from NodePort to ExternalName [Conformance] - test/e2e/network/service.go:1557 - STEP: creating a service nodeport-service with the type=NodePort in namespace services-2341 06/12/23 21:28:49.516 - STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 06/12/23 21:28:49.587 - STEP: creating service externalsvc in namespace services-2341 06/12/23 21:28:49.587 - STEP: creating replication controller externalsvc in namespace services-2341 06/12/23 21:28:49.63 - I0612 21:28:49.651370 23 runners.go:193] Created replication controller with name: externalsvc, namespace: services-2341, replica count: 2 - I0612 21:28:52.703366 23 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - STEP: changing the NodePort service to type=ExternalName 06/12/23 21:28:52.715 - Jun 12 21:28:52.774: INFO: Creating new exec pod - Jun 12 21:28:52.795: INFO: Waiting up to 5m0s for pod "execpodn284r" in namespace "services-2341" to be "running" - Jun 12 21:28:52.804: INFO: Pod "execpodn284r": Phase="Pending", Reason="", readiness=false. Elapsed: 9.319019ms - Jun 12 21:28:54.817: INFO: Pod "execpodn284r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021836488s - Jun 12 21:28:56.815: INFO: Pod "execpodn284r": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.019956513s - Jun 12 21:28:56.815: INFO: Pod "execpodn284r" satisfied condition "running" - Jun 12 21:28:56.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-2341 exec execpodn284r -- /bin/sh -x -c nslookup nodeport-service.services-2341.svc.cluster.local' - Jun 12 21:28:57.674: INFO: stderr: "+ nslookup nodeport-service.services-2341.svc.cluster.local\n" - Jun 12 21:28:57.674: INFO: stdout: "Server:\t\t172.21.0.10\nAddress:\t172.21.0.10#53\n\nnodeport-service.services-2341.svc.cluster.local\tcanonical name = externalsvc.services-2341.svc.cluster.local.\nName:\texternalsvc.services-2341.svc.cluster.local\nAddress: 172.21.178.120\n\n" - STEP: deleting ReplicationController externalsvc in namespace services-2341, will wait for the garbage collector to delete the pods 06/12/23 21:28:57.674 - Jun 12 21:28:57.764: INFO: Deleting ReplicationController externalsvc took: 26.692893ms - Jun 12 21:28:57.865: INFO: Terminating ReplicationController externalsvc pods took: 100.687522ms - Jun 12 21:29:01.927: INFO: Cleaning up the NodePort to ExternalName test service - [AfterEach] [sig-network] Services + [BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:93 + [It] should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 + STEP: Creating DaemonSet "e2e-5drv9-daemon-set" 07/27/23 02:01:23.967 + STEP: Check that daemon pods launch on every node of the cluster. 07/27/23 02:01:23.98 + Jul 27 02:01:24.010: INFO: Number of nodes with available pods controlled by daemonset e2e-5drv9-daemon-set: 0 + Jul 27 02:01:24.010: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 02:01:25.033: INFO: Number of nodes with available pods controlled by daemonset e2e-5drv9-daemon-set: 0 + Jul 27 02:01:25.033: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 02:01:26.032: INFO: Number of nodes with available pods controlled by daemonset e2e-5drv9-daemon-set: 2 + Jul 27 02:01:26.032: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 02:01:27.033: INFO: Number of nodes with available pods controlled by daemonset e2e-5drv9-daemon-set: 3 + Jul 27 02:01:27.033: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset e2e-5drv9-daemon-set + STEP: Confirm DaemonSet "e2e-5drv9-daemon-set" successfully created with "daemonset-name=e2e-5drv9-daemon-set" label 07/27/23 02:01:27.043 + STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-5drv9-daemon-set" 07/27/23 02:01:27.063 + Jul 27 02:01:27.078: INFO: Located ControllerRevision: "e2e-5drv9-daemon-set-5b966db8d9" + STEP: Patching ControllerRevision "e2e-5drv9-daemon-set-5b966db8d9" 07/27/23 02:01:27.087 + Jul 27 02:01:27.105: INFO: e2e-5drv9-daemon-set-5b966db8d9 has been patched + STEP: Create a new ControllerRevision 07/27/23 02:01:27.105 + Jul 27 02:01:27.121: INFO: Created ControllerRevision: e2e-5drv9-daemon-set-7c97fd7bf7 + STEP: Confirm that there are two ControllerRevisions 07/27/23 02:01:27.121 + Jul 27 02:01:27.121: INFO: Requesting list of ControllerRevisions to confirm quantity + Jul 27 02:01:27.131: INFO: Found 2 ControllerRevisions + STEP: Deleting ControllerRevision "e2e-5drv9-daemon-set-5b966db8d9" 07/27/23 02:01:27.131 + STEP: Confirm that there is only one ControllerRevision 07/27/23 02:01:27.148 + Jul 27 02:01:27.148: INFO: Requesting list of ControllerRevisions to confirm quantity + Jul 27 02:01:27.157: INFO: Found 1 
ControllerRevisions + STEP: Updating ControllerRevision "e2e-5drv9-daemon-set-7c97fd7bf7" 07/27/23 02:01:27.167 + Jul 27 02:01:27.194: INFO: e2e-5drv9-daemon-set-7c97fd7bf7 has been updated + STEP: Generate another ControllerRevision by patching the Daemonset 07/27/23 02:01:27.194 + W0727 02:01:27.207885 20 warnings.go:70] unknown field "updateStrategy" + STEP: Confirm that there are two ControllerRevisions 07/27/23 02:01:27.207 + Jul 27 02:01:27.208: INFO: Requesting list of ControllerRevisions to confirm quantity + Jul 27 02:01:28.218: INFO: Requesting list of ControllerRevisions to confirm quantity + Jul 27 02:01:28.230: INFO: Found 2 ControllerRevisions + STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-5drv9-daemon-set-7c97fd7bf7=updated" 07/27/23 02:01:28.23 + STEP: Confirm that there is only one ControllerRevision 07/27/23 02:01:28.252 + Jul 27 02:01:28.253: INFO: Requesting list of ControllerRevisions to confirm quantity + Jul 27 02:01:28.263: INFO: Found 1 ControllerRevisions + Jul 27 02:01:28.274: INFO: ControllerRevision "e2e-5drv9-daemon-set-5fc594b696" has revision 3 + [AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:58 + STEP: Deleting DaemonSet "e2e-5drv9-daemon-set" 07/27/23 02:01:28.283 + STEP: deleting DaemonSet.extensions e2e-5drv9-daemon-set in namespace controllerrevisions-3024, will wait for the garbage collector to delete the pods 07/27/23 02:01:28.283 + Jul 27 02:01:28.358: INFO: Deleting DaemonSet.extensions e2e-5drv9-daemon-set took: 14.626713ms + Jul 27 02:01:28.459: INFO: Terminating DaemonSet.extensions e2e-5drv9-daemon-set pods took: 100.985556ms + Jul 27 02:01:30.168: INFO: Number of nodes with available pods controlled by daemonset e2e-5drv9-daemon-set: 0 + Jul 27 02:01:30.168: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-5drv9-daemon-set + Jul 27 02:01:30.176: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"87886"},"items":null} + + Jul 27 02:01:30.184: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"87886"},"items":null} + + [AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 21:29:01.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 02:01:30.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "services-2341" for this suite. 06/12/23 21:29:02.014 + STEP: Destroying namespace "controllerrevisions-3024" for this suite. 
07/27/23 02:01:30.233 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSS ------------------------------ -[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition - creating/deleting custom resource definition objects works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:58 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:29:02.051 -Jun 12 21:29:02.052: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename custom-resource-definition 06/12/23 21:29:02.057 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:29:02.124 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:29:02.157 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:01:30.265 +Jul 27 02:01:30.265: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:01:30.266 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:30.352 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:30.364 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[It] creating/deleting custom resource definition objects works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:58 -Jun 12 21:29:02.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 +STEP: Creating a pod to test emptydir 0644 on tmpfs 07/27/23 02:01:30.373 +W0727 02:01:30.404621 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:01:30.404: INFO: Waiting up to 5m0s for pod "pod-ae14e932-8d2d-4558-90b0-6d414284e353" in namespace "emptydir-9730" to be "Succeeded or Failed" +Jul 27 02:01:30.414: INFO: Pod "pod-ae14e932-8d2d-4558-90b0-6d414284e353": Phase="Pending", Reason="", readiness=false. Elapsed: 9.697657ms +Jul 27 02:01:32.424: INFO: Pod "pod-ae14e932-8d2d-4558-90b0-6d414284e353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019929169s +Jul 27 02:01:34.424: INFO: Pod "pod-ae14e932-8d2d-4558-90b0-6d414284e353": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019565335s +STEP: Saw pod success 07/27/23 02:01:34.424 +Jul 27 02:01:34.424: INFO: Pod "pod-ae14e932-8d2d-4558-90b0-6d414284e353" satisfied condition "Succeeded or Failed" +Jul 27 02:01:34.432: INFO: Trying to get logs from node 10.245.128.19 pod pod-ae14e932-8d2d-4558-90b0-6d414284e353 container test-container: +STEP: delete the pod 07/27/23 02:01:34.454 +Jul 27 02:01:34.480: INFO: Waiting for pod pod-ae14e932-8d2d-4558-90b0-6d414284e353 to disappear +Jul 27 02:01:34.487: INFO: Pod pod-ae14e932-8d2d-4558-90b0-6d414284e353 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:29:02.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +Jul 27 02:01:34.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "custom-resource-definition-1627" for this suite. 06/12/23 21:29:02.789 +STEP: Destroying namespace "emptydir-9730" for this suite. 07/27/23 02:01:34.503 ------------------------------ -• [0.763 seconds] -[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - Simple CustomResourceDefinition - test/e2e/apimachinery/custom_resource_definition.go:50 - creating/deleting custom resource definition objects works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:58 +• [4.265 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:29:02.051 - Jun 12 21:29:02.052: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename custom-resource-definition 06/12/23 21:29:02.057 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:29:02.124 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:29:02.157 - [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:01:30.265 + Jul 27 02:01:30.265: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:01:30.266 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:30.352 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:30.364 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [It] creating/deleting custom resource definition objects works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:58 - Jun 12 
21:29:02.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 + STEP: Creating a pod to test emptydir 0644 on tmpfs 07/27/23 02:01:30.373 + W0727 02:01:30.404621 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:01:30.404: INFO: Waiting up to 5m0s for pod "pod-ae14e932-8d2d-4558-90b0-6d414284e353" in namespace "emptydir-9730" to be "Succeeded or Failed" + Jul 27 02:01:30.414: INFO: Pod "pod-ae14e932-8d2d-4558-90b0-6d414284e353": Phase="Pending", Reason="", readiness=false. Elapsed: 9.697657ms + Jul 27 02:01:32.424: INFO: Pod "pod-ae14e932-8d2d-4558-90b0-6d414284e353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019929169s + Jul 27 02:01:34.424: INFO: Pod "pod-ae14e932-8d2d-4558-90b0-6d414284e353": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019565335s + STEP: Saw pod success 07/27/23 02:01:34.424 + Jul 27 02:01:34.424: INFO: Pod "pod-ae14e932-8d2d-4558-90b0-6d414284e353" satisfied condition "Succeeded or Failed" + Jul 27 02:01:34.432: INFO: Trying to get logs from node 10.245.128.19 pod pod-ae14e932-8d2d-4558-90b0-6d414284e353 container test-container: + STEP: delete the pod 07/27/23 02:01:34.454 + Jul 27 02:01:34.480: INFO: Waiting for pod pod-ae14e932-8d2d-4558-90b0-6d414284e353 to disappear + Jul 27 02:01:34.487: INFO: Pod pod-ae14e932-8d2d-4558-90b0-6d414284e353 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:29:02.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + Jul 27 02:01:34.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "custom-resource-definition-1627" for this suite. 06/12/23 21:29:02.789 + STEP: Destroying namespace "emptydir-9730" for this suite. 
07/27/23 02:01:34.503 << End Captured GinkgoWriter Output ------------------------------ -SSS +SSSSS ------------------------------ -[sig-storage] Projected configMap - optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:174 -[BeforeEach] [sig-storage] Projected configMap +[sig-node] Ephemeral Containers [NodeConformance] + will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:29:02.815 -Jun 12 21:29:02.815: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:29:02.816 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:29:02.871 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:29:02.883 -[BeforeEach] [sig-storage] Projected configMap +STEP: Creating a kubernetes client 07/27/23 02:01:34.53 +Jul 27 02:01:34.530: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename ephemeral-containers-test 07/27/23 02:01:34.531 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:34.572 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:34.582 +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] test/e2e/framework/metrics/init/init.go:31 -[It] optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:174 -Jun 12 21:29:02.912: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node -STEP: Creating configMap with name cm-test-opt-del-5bd0d476-2175-447c-9925-3d9c01caf44b 06/12/23 21:29:02.912 -STEP: Creating configMap with name cm-test-opt-upd-ea8b92ff-e565-4213-9f28-1681f26ad095 06/12/23 21:29:02.941 -STEP: Creating the pod 06/12/23 21:29:02.961 -Jun 12 21:29:02.990: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200" in namespace "projected-9965" to be "running and ready" -Jun 12 21:29:02.998: INFO: Pod "pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200": Phase="Pending", Reason="", readiness=false. Elapsed: 7.931877ms -Jun 12 21:29:02.998: INFO: The phase of Pod pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:29:05.035: INFO: Pod "pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044679354s -Jun 12 21:29:05.035: INFO: The phase of Pod pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:29:07.011: INFO: Pod "pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.020873723s -Jun 12 21:29:07.011: INFO: The phase of Pod pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200 is Running (Ready = true) -Jun 12 21:29:07.011: INFO: Pod "pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200" satisfied condition "running and ready" -STEP: Deleting configmap cm-test-opt-del-5bd0d476-2175-447c-9925-3d9c01caf44b 06/12/23 21:29:07.2 -STEP: Updating configmap cm-test-opt-upd-ea8b92ff-e565-4213-9f28-1681f26ad095 06/12/23 21:29:07.233 -STEP: Creating configMap with name cm-test-opt-create-55f73de4-0149-4fbd-91db-581d30cda125 06/12/23 21:29:07.268 -STEP: waiting to observe update in volume 06/12/23 21:29:07.283 -[AfterEach] [sig-storage] Projected configMap +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/common/node/ephemeral_containers.go:38 +[It] will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 +STEP: creating a target pod 07/27/23 02:01:34.591 +W0727 02:01:34.614841 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container-1" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container-1" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container-1" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container-1" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:01:34.615: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-2092" to be "running and ready" +Jul 27 02:01:34.627: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 12.690309ms +Jul 27 02:01:34.627: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:01:36.664: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.049886807s +Jul 27 02:01:36.664: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) +Jul 27 02:01:36.664: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" +STEP: adding an ephemeral container 07/27/23 02:01:36.723 +Jul 27 02:01:36.794: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-2092" to be "container debugger running" +Jul 27 02:01:36.850: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 56.424758ms +Jul 27 02:01:38.860: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.065820521s +Jul 27 02:01:40.860: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.065784572s +Jul 27 02:01:40.860: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "container debugger running" +STEP: checking pod container endpoints 07/27/23 02:01:40.86 +Jul 27 02:01:40.860: INFO: ExecWithOptions {Command:[/bin/echo marco] Namespace:ephemeral-containers-test-2092 PodName:ephemeral-containers-target-pod ContainerName:debugger Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:01:40.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:01:40.860: INFO: ExecWithOptions: Clientset creation +Jul 27 02:01:40.860: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/ephemeral-containers-test-2092/pods/ephemeral-containers-target-pod/exec?command=%2Fbin%2Fecho&command=marco&container=debugger&container=debugger&stderr=true&stdout=true) +Jul 27 02:01:40.981: INFO: Exec stderr: "" +[AfterEach] [sig-node] Ephemeral Containers [NodeConformance] test/e2e/framework/node/init/init.go:32 -Jun 12 21:30:32.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected configMap +Jul 27 02:01:41.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] tear down framework | framework.go:193 -STEP: Destroying namespace "projected-9965" for this suite. 06/12/23 21:30:32.88 +STEP: Destroying namespace "ephemeral-containers-test-2092" for this suite. 
07/27/23 02:01:41.012 ------------------------------ -• [SLOW TEST] [90.089 seconds] -[sig-storage] Projected configMap -test/e2e/common/storage/framework.go:23 - optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:174 +• [SLOW TEST] [6.504 seconds] +[sig-node] Ephemeral Containers [NodeConformance] +test/e2e/common/node/framework.go:23 + will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected configMap + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:29:02.815 - Jun 12 21:29:02.815: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:29:02.816 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:29:02.871 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:29:02.883 - [BeforeEach] [sig-storage] Projected configMap + STEP: Creating a kubernetes client 07/27/23 02:01:34.53 + Jul 27 02:01:34.530: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename ephemeral-containers-test 07/27/23 02:01:34.531 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:34.572 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:34.582 + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] test/e2e/framework/metrics/init/init.go:31 - [It] optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:174 - Jun 12 21:29:02.912: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node - STEP: Creating configMap with name cm-test-opt-del-5bd0d476-2175-447c-9925-3d9c01caf44b 06/12/23 21:29:02.912 - STEP: Creating configMap with name cm-test-opt-upd-ea8b92ff-e565-4213-9f28-1681f26ad095 06/12/23 21:29:02.941 - STEP: Creating the pod 06/12/23 21:29:02.961 - Jun 12 21:29:02.990: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200" in namespace "projected-9965" to be "running and ready" - Jun 12 21:29:02.998: INFO: Pod "pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200": Phase="Pending", Reason="", readiness=false. Elapsed: 7.931877ms - Jun 12 21:29:02.998: INFO: The phase of Pod pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:29:05.035: INFO: Pod "pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044679354s - Jun 12 21:29:05.035: INFO: The phase of Pod pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:29:07.011: INFO: Pod "pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.020873723s - Jun 12 21:29:07.011: INFO: The phase of Pod pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200 is Running (Ready = true) - Jun 12 21:29:07.011: INFO: Pod "pod-projected-configmaps-563adad6-cc3d-4d50-8627-511a1082c200" satisfied condition "running and ready" - STEP: Deleting configmap cm-test-opt-del-5bd0d476-2175-447c-9925-3d9c01caf44b 06/12/23 21:29:07.2 - STEP: Updating configmap cm-test-opt-upd-ea8b92ff-e565-4213-9f28-1681f26ad095 06/12/23 21:29:07.233 - STEP: Creating configMap with name cm-test-opt-create-55f73de4-0149-4fbd-91db-581d30cda125 06/12/23 21:29:07.268 - STEP: waiting to observe update in volume 06/12/23 21:29:07.283 - [AfterEach] [sig-storage] Projected configMap + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/common/node/ephemeral_containers.go:38 + [It] will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 + STEP: creating a target pod 07/27/23 02:01:34.591 + W0727 02:01:34.614841 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container-1" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container-1" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container-1" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container-1" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:01:34.615: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-2092" to be "running and ready" + Jul 27 02:01:34.627: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 12.690309ms + Jul 27 02:01:34.627: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:01:36.664: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.049886807s + Jul 27 02:01:36.664: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) + Jul 27 02:01:36.664: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" + STEP: adding an ephemeral container 07/27/23 02:01:36.723 + Jul 27 02:01:36.794: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-2092" to be "container debugger running" + Jul 27 02:01:36.850: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 56.424758ms + Jul 27 02:01:38.860: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.065820521s + Jul 27 02:01:40.860: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.065784572s + Jul 27 02:01:40.860: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "container debugger running" + STEP: checking pod container endpoints 07/27/23 02:01:40.86 + Jul 27 02:01:40.860: INFO: ExecWithOptions {Command:[/bin/echo marco] Namespace:ephemeral-containers-test-2092 PodName:ephemeral-containers-target-pod ContainerName:debugger Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:01:40.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:01:40.860: INFO: ExecWithOptions: Clientset creation + Jul 27 02:01:40.860: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/ephemeral-containers-test-2092/pods/ephemeral-containers-target-pod/exec?command=%2Fbin%2Fecho&command=marco&container=debugger&container=debugger&stderr=true&stdout=true) + Jul 27 02:01:40.981: INFO: Exec stderr: "" + [AfterEach] [sig-node] Ephemeral Containers [NodeConformance] test/e2e/framework/node/init/init.go:32 - Jun 12 21:30:32.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected configMap + Jul 27 02:01:41.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] tear down framework | framework.go:193 - STEP: Destroying namespace "projected-9965" for this suite. 06/12/23 21:30:32.88 + STEP: Destroying namespace "ephemeral-containers-test-2092" for this suite. 
07/27/23 02:01:41.012 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSS +SS ------------------------------ -[sig-apps] Job - should apply changes to a job status [Conformance] - test/e2e/apps/job.go:636 -[BeforeEach] [sig-apps] Job +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 +[BeforeEach] [sig-node] Container Lifecycle Hook set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:30:32.927 -Jun 12 21:30:32.928: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename job 06/12/23 21:30:32.938 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:32.998 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:33.017 -[BeforeEach] [sig-apps] Job +STEP: Creating a kubernetes client 07/27/23 02:01:41.035 +Jul 27 02:01:41.035: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-lifecycle-hook 07/27/23 02:01:41.036 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:41.087 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:41.116 +[BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:31 -[It] should apply changes to a job status [Conformance] - test/e2e/apps/job.go:636 -STEP: Creating a job 06/12/23 21:30:33.045 -STEP: Ensure pods equal to parallelism count is attached to the job 06/12/23 21:30:33.089 -STEP: patching /status 06/12/23 21:30:37.098 -STEP: updating /status 06/12/23 21:30:37.116 -STEP: get /status 06/12/23 21:30:37.143 -[AfterEach] [sig-apps] Job +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 07/27/23 02:01:41.137 +Jul 27 02:01:41.180: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-808" to be "running and ready" +Jul 27 02:01:41.202: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 22.639907ms +Jul 27 02:01:41.203: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:01:43.213: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.033290461s +Jul 27 02:01:43.213: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Jul 27 02:01:43.213: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 +STEP: create the pod with lifecycle hook 07/27/23 02:01:43.221 +Jul 27 02:01:43.237: INFO: Waiting up to 5m0s for pod "pod-with-poststart-http-hook" in namespace "container-lifecycle-hook-808" to be "running and ready" +Jul 27 02:01:43.252: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 15.563619ms +Jul 27 02:01:43.252: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:01:45.261: INFO: Pod "pod-with-poststart-http-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.024784409s +Jul 27 02:01:45.261: INFO: The phase of Pod pod-with-poststart-http-hook is Running (Ready = true) +Jul 27 02:01:45.261: INFO: Pod "pod-with-poststart-http-hook" satisfied condition "running and ready" +STEP: check poststart hook 07/27/23 02:01:45.27 +STEP: delete the pod with lifecycle hook 07/27/23 02:01:45.311 +Jul 27 02:01:45.327: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jul 27 02:01:45.344: INFO: Pod pod-with-poststart-http-hook still exists +Jul 27 02:01:47.347: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jul 27 02:01:47.357: INFO: Pod pod-with-poststart-http-hook still exists +Jul 27 02:01:49.348: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jul 27 02:01:49.358: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook test/e2e/framework/node/init/init.go:32 -Jun 12 21:30:37.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Job +Jul 27 02:01:49.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Job +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Job +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook tear down framework | framework.go:193 -STEP: Destroying namespace "job-9122" for this suite. 06/12/23 21:30:37.17 +STEP: Destroying namespace "container-lifecycle-hook-808" for this suite. 07/27/23 02:01:49.371 ------------------------------ -• [4.267 seconds] -[sig-apps] Job -test/e2e/apps/framework.go:23 - should apply changes to a job status [Conformance] - test/e2e/apps/job.go:636 +• [SLOW TEST] [8.364 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Job + [BeforeEach] [sig-node] Container Lifecycle Hook set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:30:32.927 - Jun 12 21:30:32.928: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename job 06/12/23 21:30:32.938 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:32.998 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:33.017 - [BeforeEach] [sig-apps] Job + STEP: Creating a kubernetes client 07/27/23 02:01:41.035 + Jul 27 02:01:41.035: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-lifecycle-hook 07/27/23 02:01:41.036 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:41.087 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:41.116 + [BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:31 - [It] should apply changes to a job status [Conformance] - test/e2e/apps/job.go:636 - STEP: Creating a job 06/12/23 21:30:33.045 - STEP: Ensure pods equal to parallelism count is attached to the job 06/12/23 21:30:33.089 - STEP: patching /status 06/12/23 21:30:37.098 - STEP: updating /status 06/12/23 21:30:37.116 - 
STEP: get /status 06/12/23 21:30:37.143 - [AfterEach] [sig-apps] Job + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 07/27/23 02:01:41.137 + Jul 27 02:01:41.180: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-808" to be "running and ready" + Jul 27 02:01:41.202: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 22.639907ms + Jul 27 02:01:41.203: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:01:43.213: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.033290461s + Jul 27 02:01:43.213: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Jul 27 02:01:43.213: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 + STEP: create the pod with lifecycle hook 07/27/23 02:01:43.221 + Jul 27 02:01:43.237: INFO: Waiting up to 5m0s for pod "pod-with-poststart-http-hook" in namespace "container-lifecycle-hook-808" to be "running and ready" + Jul 27 02:01:43.252: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 15.563619ms + Jul 27 02:01:43.252: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:01:45.261: INFO: Pod "pod-with-poststart-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.024784409s + Jul 27 02:01:45.261: INFO: The phase of Pod pod-with-poststart-http-hook is Running (Ready = true) + Jul 27 02:01:45.261: INFO: Pod "pod-with-poststart-http-hook" satisfied condition "running and ready" + STEP: check poststart hook 07/27/23 02:01:45.27 + STEP: delete the pod with lifecycle hook 07/27/23 02:01:45.311 + Jul 27 02:01:45.327: INFO: Waiting for pod pod-with-poststart-http-hook to disappear + Jul 27 02:01:45.344: INFO: Pod pod-with-poststart-http-hook still exists + Jul 27 02:01:47.347: INFO: Waiting for pod pod-with-poststart-http-hook to disappear + Jul 27 02:01:47.357: INFO: Pod pod-with-poststart-http-hook still exists + Jul 27 02:01:49.348: INFO: Waiting for pod pod-with-poststart-http-hook to disappear + Jul 27 02:01:49.358: INFO: Pod pod-with-poststart-http-hook no longer exists + [AfterEach] [sig-node] Container Lifecycle Hook test/e2e/framework/node/init/init.go:32 - Jun 12 21:30:37.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Job + Jul 27 02:01:49.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Job + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Job + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook tear down framework | framework.go:193 - STEP: Destroying namespace "job-9122" for this suite. 06/12/23 21:30:37.17 + STEP: Destroying namespace "container-lifecycle-hook-808" for this suite. 
07/27/23 02:01:49.371 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Variable Expansion - should allow composing env vars into new env vars [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:44 -[BeforeEach] [sig-node] Variable Expansion +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 +[BeforeEach] [sig-network] HostPort set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:30:37.195 -Jun 12 21:30:37.195: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename var-expansion 06/12/23 21:30:37.197 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:37.253 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:37.282 -[BeforeEach] [sig-node] Variable Expansion +STEP: Creating a kubernetes client 07/27/23 02:01:49.399 +Jul 27 02:01:49.400: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename hostport 07/27/23 02:01:49.4 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:49.448 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:49.461 +[BeforeEach] [sig-network] HostPort test/e2e/framework/metrics/init/init.go:31 -[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:44 -STEP: Creating a pod to test env composition 06/12/23 21:30:37.296 -Jun 12 21:30:37.323: INFO: Waiting up to 5m0s for pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98" in namespace "var-expansion-1546" to be "Succeeded or Failed" -Jun 12 21:30:37.335: INFO: Pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98": Phase="Pending", Reason="", readiness=false. Elapsed: 12.089885ms -Jun 12 21:30:39.346: INFO: Pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022940801s -Jun 12 21:30:41.377: INFO: Pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053636878s -Jun 12 21:30:43.351: INFO: Pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028174423s -STEP: Saw pod success 06/12/23 21:30:43.351 -Jun 12 21:30:43.352: INFO: Pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98" satisfied condition "Succeeded or Failed" -Jun 12 21:30:43.361: INFO: Trying to get logs from node 10.138.75.112 pod var-expansion-df910aef-adb3-4379-bca7-86db497a1b98 container dapi-container: -STEP: delete the pod 06/12/23 21:30:43.433 -Jun 12 21:30:43.453: INFO: Waiting for pod var-expansion-df910aef-adb3-4379-bca7-86db497a1b98 to disappear -Jun 12 21:30:43.463: INFO: Pod var-expansion-df910aef-adb3-4379-bca7-86db497a1b98 no longer exists -[AfterEach] [sig-node] Variable Expansion +[BeforeEach] [sig-network] HostPort + test/e2e/network/hostport.go:49 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 07/27/23 02:01:49.482 +Jul 27 02:01:49.514: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-6960" to be "running and ready" +Jul 27 02:01:49.526: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.263686ms +Jul 27 02:01:49.526: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:01:51.534: INFO: Pod "pod1": Phase="Running", Reason="", readiness=false. Elapsed: 2.020846782s +Jul 27 02:01:51.534: INFO: The phase of Pod pod1 is Running (Ready = false) +Jul 27 02:01:53.537: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 4.023306804s +Jul 27 02:01:53.537: INFO: The phase of Pod pod1 is Running (Ready = true) +Jul 27 02:01:53.537: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.245.128.17 on the node which pod1 resides and expect scheduled 07/27/23 02:01:53.537 +Jul 27 02:01:53.559: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-6960" to be "running and ready" +Jul 27 02:01:53.567: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228273ms +Jul 27 02:01:53.567: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:01:55.577: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 2.017498379s +Jul 27 02:01:55.577: INFO: The phase of Pod pod2 is Running (Ready = false) +Jul 27 02:01:57.576: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 4.016698578s +Jul 27 02:01:57.576: INFO: The phase of Pod pod2 is Running (Ready = true) +Jul 27 02:01:57.576: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.245.128.17 but use UDP protocol on the node which pod2 resides 07/27/23 02:01:57.576 +Jul 27 02:01:57.593: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-6960" to be "running and ready" +Jul 27 02:01:57.603: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.98252ms +Jul 27 02:01:57.603: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:01:59.611: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.018705669s +Jul 27 02:01:59.611: INFO: The phase of Pod pod3 is Running (Ready = true) +Jul 27 02:01:59.611: INFO: Pod "pod3" satisfied condition "running and ready" +Jul 27 02:01:59.628: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-6960" to be "running and ready" +Jul 27 02:01:59.636: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134078ms +Jul 27 02:01:59.636: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:02:01.672: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. Elapsed: 2.044009624s +Jul 27 02:02:01.672: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) +Jul 27 02:02:01.672: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 07/27/23 02:02:01.685 +Jul 27 02:02:01.685: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.245.128.17 http://127.0.0.1:54323/hostname] Namespace:hostport-6960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:02:01.685: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:02:01.685: INFO: ExecWithOptions: Clientset creation +Jul 27 02:02:01.685: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-6960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+10.245.128.17+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.245.128.17, port: 54323 07/27/23 02:02:01.895 +Jul 27 02:02:01.895: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.245.128.17:54323/hostname] Namespace:hostport-6960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:02:01.895: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:02:01.896: INFO: ExecWithOptions: Clientset creation +Jul 27 02:02:01.896: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-6960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F10.245.128.17%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.245.128.17, port: 54323 UDP 07/27/23 02:02:02.067 +Jul 27 02:02:02.068: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostname | nc -u -w 5 10.245.128.17 54323] Namespace:hostport-6960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:02:02.068: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:02:02.068: INFO: ExecWithOptions: Clientset creation +Jul 27 02:02:02.068: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-6960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostname+%7C+nc+-u+-w+5+10.245.128.17+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +[AfterEach] [sig-network] HostPort test/e2e/framework/node/init/init.go:32 -Jun 12 21:30:43.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready -[DeferCleanup (Each)] [sig-node] Variable Expansion +Jul 27 02:02:07.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] HostPort test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-network] HostPort dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-network] HostPort tear down framework | framework.go:193 -STEP: Destroying namespace "var-expansion-1546" for this suite. 06/12/23 21:30:43.522 +STEP: Destroying namespace "hostport-6960" for this suite. 07/27/23 02:02:07.305 ------------------------------ -• [SLOW TEST] [6.349 seconds] -[sig-node] Variable Expansion -test/e2e/common/node/framework.go:23 - should allow composing env vars into new env vars [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:44 +• [SLOW TEST] [17.928 seconds] +[sig-network] HostPort +test/e2e/network/common/framework.go:23 + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Variable Expansion + [BeforeEach] [sig-network] HostPort set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:30:37.195 - Jun 12 21:30:37.195: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename var-expansion 06/12/23 21:30:37.197 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:37.253 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:37.282 - [BeforeEach] [sig-node] Variable Expansion + STEP: Creating a kubernetes client 07/27/23 02:01:49.399 + Jul 27 02:01:49.400: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename hostport 07/27/23 02:01:49.4 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:01:49.448 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:01:49.461 + [BeforeEach] [sig-network] HostPort test/e2e/framework/metrics/init/init.go:31 - [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:44 - STEP: Creating a pod to test env composition 06/12/23 21:30:37.296 - Jun 12 21:30:37.323: INFO: Waiting up to 5m0s for pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98" in namespace "var-expansion-1546" to be "Succeeded or Failed" - Jun 12 21:30:37.335: INFO: Pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98": Phase="Pending", Reason="", readiness=false. Elapsed: 12.089885ms - Jun 12 21:30:39.346: INFO: Pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022940801s - Jun 12 21:30:41.377: INFO: Pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053636878s - Jun 12 21:30:43.351: INFO: Pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028174423s - STEP: Saw pod success 06/12/23 21:30:43.351 - Jun 12 21:30:43.352: INFO: Pod "var-expansion-df910aef-adb3-4379-bca7-86db497a1b98" satisfied condition "Succeeded or Failed" - Jun 12 21:30:43.361: INFO: Trying to get logs from node 10.138.75.112 pod var-expansion-df910aef-adb3-4379-bca7-86db497a1b98 container dapi-container: - STEP: delete the pod 06/12/23 21:30:43.433 - Jun 12 21:30:43.453: INFO: Waiting for pod var-expansion-df910aef-adb3-4379-bca7-86db497a1b98 to disappear - Jun 12 21:30:43.463: INFO: Pod var-expansion-df910aef-adb3-4379-bca7-86db497a1b98 no longer exists - [AfterEach] [sig-node] Variable Expansion + [BeforeEach] [sig-network] HostPort + test/e2e/network/hostport.go:49 + [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 + STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 07/27/23 02:01:49.482 + Jul 27 02:01:49.514: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-6960" to be "running and ready" + Jul 27 02:01:49.526: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.263686ms + Jul 27 02:01:49.526: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:01:51.534: INFO: Pod "pod1": Phase="Running", Reason="", readiness=false. Elapsed: 2.020846782s + Jul 27 02:01:51.534: INFO: The phase of Pod pod1 is Running (Ready = false) + Jul 27 02:01:53.537: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 4.023306804s + Jul 27 02:01:53.537: INFO: The phase of Pod pod1 is Running (Ready = true) + Jul 27 02:01:53.537: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.245.128.17 on the node which pod1 resides and expect scheduled 07/27/23 02:01:53.537 + Jul 27 02:01:53.559: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-6960" to be "running and ready" + Jul 27 02:01:53.567: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228273ms + Jul 27 02:01:53.567: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:01:55.577: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 2.017498379s + Jul 27 02:01:55.577: INFO: The phase of Pod pod2 is Running (Ready = false) + Jul 27 02:01:57.576: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 4.016698578s + Jul 27 02:01:57.576: INFO: The phase of Pod pod2 is Running (Ready = true) + Jul 27 02:01:57.576: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.245.128.17 but use UDP protocol on the node which pod2 resides 07/27/23 02:01:57.576 + Jul 27 02:01:57.593: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-6960" to be "running and ready" + Jul 27 02:01:57.603: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.98252ms + Jul 27 02:01:57.603: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:01:59.611: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.018705669s + Jul 27 02:01:59.611: INFO: The phase of Pod pod3 is Running (Ready = true) + Jul 27 02:01:59.611: INFO: Pod "pod3" satisfied condition "running and ready" + Jul 27 02:01:59.628: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-6960" to be "running and ready" + Jul 27 02:01:59.636: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134078ms + Jul 27 02:01:59.636: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:02:01.672: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. Elapsed: 2.044009624s + Jul 27 02:02:01.672: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) + Jul 27 02:02:01.672: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" + STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 07/27/23 02:02:01.685 + Jul 27 02:02:01.685: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.245.128.17 http://127.0.0.1:54323/hostname] Namespace:hostport-6960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:02:01.685: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:02:01.685: INFO: ExecWithOptions: Clientset creation + Jul 27 02:02:01.685: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-6960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+10.245.128.17+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.245.128.17, port: 54323 07/27/23 02:02:01.895 + Jul 27 02:02:01.895: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.245.128.17:54323/hostname] Namespace:hostport-6960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:02:01.895: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:02:01.896: INFO: ExecWithOptions: Clientset creation + Jul 27 02:02:01.896: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-6960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F10.245.128.17%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.245.128.17, port: 54323 UDP 07/27/23 02:02:02.067 + Jul 27 02:02:02.068: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostname | nc -u -w 5 10.245.128.17 54323] Namespace:hostport-6960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:02:02.068: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:02:02.068: INFO: ExecWithOptions: Clientset creation + Jul 27 02:02:02.068: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-6960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostname+%7C+nc+-u+-w+5+10.245.128.17+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + [AfterEach] [sig-network] HostPort test/e2e/framework/node/init/init.go:32 - Jun 12 21:30:43.464: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Variable Expansion + Jul 27 02:02:07.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] HostPort test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-network] HostPort dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-network] HostPort tear down framework | framework.go:193 - STEP: Destroying namespace "var-expansion-1546" for this suite. 06/12/23 21:30:43.522 + STEP: Destroying namespace "hostport-6960" for this suite. 07/27/23 02:02:07.305 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-node] Kubelet when scheduling an agnhost Pod with hostAliases - should write entries to /etc/hosts [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:148 -[BeforeEach] [sig-node] Kubelet +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 +[BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:30:43.555 -Jun 12 21:30:43.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubelet-test 06/12/23 21:30:43.559 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:43.658 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:43.703 -[BeforeEach] [sig-node] Kubelet +STEP: Creating a kubernetes client 07/27/23 02:02:07.328 +Jul 27 02:02:07.328: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 02:02:07.329 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:07.374 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:07.384 +[BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Kubelet - test/e2e/common/node/kubelet.go:41 -[It] should write entries to /etc/hosts [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:148 -STEP: Waiting for pod completion 06/12/23 21:30:43.784 -Jun 12 21:30:43.784: INFO: Waiting up to 3m0s for pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8" in namespace "kubelet-test-2650" to be "completed" -Jun 12 21:30:43.811: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.233188ms -Jun 12 21:30:45.820: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035983543s -Jun 12 21:30:47.851: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066311308s -Jun 12 21:30:49.821: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036272223s -Jun 12 21:30:51.821: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.036189274s -Jun 12 21:30:51.821: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8" satisfied condition "completed" -[AfterEach] [sig-node] Kubelet +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 +STEP: Creating secret with name secret-test-b5a0aa28-e3ee-44f1-a895-ff8cb0c9a090 07/27/23 02:02:07.451 +STEP: Creating a pod to test consume secrets 07/27/23 02:02:07.465 +Jul 27 02:02:07.492: INFO: Waiting up to 5m0s for pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3" in namespace "secrets-8818" to be "Succeeded or Failed" +Jul 27 02:02:07.506: INFO: Pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.761236ms +Jul 27 02:02:09.516: INFO: Pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024164872s +Jul 27 02:02:11.516: INFO: Pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024678142s +Jul 27 02:02:13.516: INFO: Pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024176696s +STEP: Saw pod success 07/27/23 02:02:13.516 +Jul 27 02:02:13.516: INFO: Pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3" satisfied condition "Succeeded or Failed" +Jul 27 02:02:13.524: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3 container secret-volume-test: +STEP: delete the pod 07/27/23 02:02:13.543 +Jul 27 02:02:13.566: INFO: Waiting for pod pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3 to disappear +Jul 27 02:02:13.573: INFO: Pod pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3 no longer exists +[AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 21:30:51.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Kubelet +Jul 27 02:02:13.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Kubelet +[DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Kubelet +[DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "kubelet-test-2650" for this suite. 06/12/23 21:30:51.857 +STEP: Destroying namespace "secrets-8818" for this suite. 07/27/23 02:02:13.588 +STEP: Destroying namespace "secret-namespace-6556" for this suite. 
07/27/23 02:02:13.612 ------------------------------ -• [SLOW TEST] [8.326 seconds] -[sig-node] Kubelet -test/e2e/common/node/framework.go:23 - when scheduling an agnhost Pod with hostAliases - test/e2e/common/node/kubelet.go:140 - should write entries to /etc/hosts [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:148 +• [SLOW TEST] [6.316 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Kubelet + [BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:30:43.555 - Jun 12 21:30:43.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubelet-test 06/12/23 21:30:43.559 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:43.658 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:43.703 - [BeforeEach] [sig-node] Kubelet + STEP: Creating a kubernetes client 07/27/23 02:02:07.328 + Jul 27 02:02:07.328: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 02:02:07.329 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:07.374 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:07.384 + [BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Kubelet - test/e2e/common/node/kubelet.go:41 - [It] should write entries to /etc/hosts [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:148 - STEP: Waiting for pod completion 06/12/23 21:30:43.784 - Jun 12 21:30:43.784: INFO: Waiting up to 3m0s for pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8" in namespace "kubelet-test-2650" to be "completed" - Jun 12 21:30:43.811: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.233188ms - Jun 12 21:30:45.820: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035983543s - Jun 12 21:30:47.851: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066311308s - Jun 12 21:30:49.821: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036272223s - Jun 12 21:30:51.821: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.036189274s - Jun 12 21:30:51.821: INFO: Pod "agnhost-host-aliases74de687b-5c09-4c55-a2b3-72f9cec959d8" satisfied condition "completed" - [AfterEach] [sig-node] Kubelet + [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 + STEP: Creating secret with name secret-test-b5a0aa28-e3ee-44f1-a895-ff8cb0c9a090 07/27/23 02:02:07.451 + STEP: Creating a pod to test consume secrets 07/27/23 02:02:07.465 + Jul 27 02:02:07.492: INFO: Waiting up to 5m0s for pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3" in namespace "secrets-8818" to be "Succeeded or Failed" + Jul 27 02:02:07.506: INFO: Pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.761236ms + Jul 27 02:02:09.516: INFO: Pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024164872s + Jul 27 02:02:11.516: INFO: Pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024678142s + Jul 27 02:02:13.516: INFO: Pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024176696s + STEP: Saw pod success 07/27/23 02:02:13.516 + Jul 27 02:02:13.516: INFO: Pod "pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3" satisfied condition "Succeeded or Failed" + Jul 27 02:02:13.524: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3 container secret-volume-test: + STEP: delete the pod 07/27/23 02:02:13.543 + Jul 27 02:02:13.566: INFO: Waiting for pod pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3 to disappear + Jul 27 02:02:13.573: INFO: Pod pod-secrets-4263883e-c1e9-4f52-a91a-06af74e12bd3 no longer exists + [AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 21:30:51.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Kubelet + Jul 27 02:02:13.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Kubelet + [DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Kubelet + [DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "kubelet-test-2650" for this suite. 06/12/23 21:30:51.857 + STEP: Destroying namespace "secrets-8818" for this suite. 07/27/23 02:02:13.588 + STEP: Destroying namespace "secret-namespace-6556" for this suite. 
07/27/23 02:02:13.612 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSS ------------------------------ -[sig-api-machinery] Namespaces [Serial] - should ensure that all services are removed when a namespace is deleted [Conformance] - test/e2e/apimachinery/namespace.go:251 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:30:51.881 -Jun 12 21:30:51.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename namespaces 06/12/23 21:30:51.883 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:51.935 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:51.948 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +STEP: Creating a kubernetes client 07/27/23 02:02:13.645 +Jul 27 02:02:13.645: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:02:13.646 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:13.687 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:13.696 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[It] should ensure that all services are removed when a namespace is deleted [Conformance] - test/e2e/apimachinery/namespace.go:251 -STEP: Creating a test namespace 06/12/23 21:30:51.964 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:52.044 -STEP: Creating a service in the namespace 06/12/23 21:30:52.058 -STEP: Deleting the namespace 06/12/23 21:30:52.096 -STEP: Waiting for the namespace to be removed. 
06/12/23 21:30:52.129 -STEP: Recreating the namespace 06/12/23 21:30:59.142 -STEP: Verifying there is no service in the namespace 06/12/23 21:30:59.204 -[AfterEach] [sig-api-machinery] Namespaces [Serial] +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 +STEP: validating api versions 07/27/23 02:02:13.709 +Jul 27 02:02:13.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4903 api-versions' +Jul 27 02:02:13.917: INFO: stderr: "" +Jul 27 02:02:13.917: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napiserver.openshift.io/v1\napps.openshift.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nauthorization.openshift.io/v1\nautoscaling/v1\nautoscaling/v2\nbatch/v1\nbuild.openshift.io/v1\ncertificates.k8s.io/v1\ncloudcredential.openshift.io/v1\nconfig.openshift.io/v1\nconsole.openshift.io/v1\nconsole.openshift.io/v1alpha1\ncontrolplane.operator.openshift.io/v1alpha1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta2\nflowcontrol.apiserver.k8s.io/v1beta3\nhelm.openshift.io/v1beta1\nibm.com/v1alpha1\nimage.openshift.io/v1\nimageregistry.operator.openshift.io/v1\ningress.operator.openshift.io/v1\nk8s.cni.cncf.io/v1\nmachineconfiguration.openshift.io/v1\nmetrics.k8s.io/v1beta1\nmigration.k8s.io/v1alpha1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nmonitoring.coreos.com/v1beta1\nnetwork.operator.openshift.io/v1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\noauth.openshift.io/v1\noperator.openshift.io/v1\noperator.openshift.io/v1alpha1\noperator.tigera.io/v1\noperators.coreos.com/v1\noperators.coreos.com/v1alpha1\noperators.coreos.com/v1alpha2\noperators.coreos.com/v2\npackages.operators.coreos.com/v1\nperformance.openshift.io/v1\nperformance.openshift.io/v1alpha1\nperformance.openshift.io/v2\npolicy/v1\nproject.openshift.io/v1\nquota.openshift.io/v1\nrbac.authorization.k8s.io/v1\nroute.openshift.io/v1\nsamples.operator.openshift.io/v1\nscheduling.k8s.io/v1\nsecurity.internal.openshift.io/v1\nsecurity.openshift.io/v1\nsnapshot.storage.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntemplate.openshift.io/v1\ntuned.openshift.io/v1\nuser.openshift.io/v1\nv1\nwhereabouts.cni.cncf.io/v1alpha1\n" +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 21:30:59.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +Jul 27 02:02:13.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "namespaces-3188" for this suite. 06/12/23 21:30:59.259 -STEP: Destroying namespace "nsdeletetest-2068" for this suite. 06/12/23 21:30:59.28 -Jun 12 21:30:59.293: INFO: Namespace nsdeletetest-2068 was already deleted -STEP: Destroying namespace "nsdeletetest-6199" for this suite. 06/12/23 21:30:59.293 +STEP: Destroying namespace "kubectl-4903" for this suite. 
07/27/23 02:02:13.935 ------------------------------ -• [SLOW TEST] [7.436 seconds] -[sig-api-machinery] Namespaces [Serial] -test/e2e/apimachinery/framework.go:23 - should ensure that all services are removed when a namespace is deleted [Conformance] - test/e2e/apimachinery/namespace.go:251 +• [0.315 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl api-versions + test/e2e/kubectl/kubectl.go:818 + should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:30:51.881 - Jun 12 21:30:51.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename namespaces 06/12/23 21:30:51.883 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:51.935 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:51.948 - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + STEP: Creating a kubernetes client 07/27/23 02:02:13.645 + Jul 27 02:02:13.645: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:02:13.646 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:13.687 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:13.696 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [It] should ensure that all services are removed when a namespace is deleted [Conformance] - test/e2e/apimachinery/namespace.go:251 - STEP: Creating a test namespace 06/12/23 21:30:51.964 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:52.044 - STEP: Creating a service in the namespace 06/12/23 21:30:52.058 - STEP: Deleting the namespace 06/12/23 21:30:52.096 - STEP: Waiting for the namespace to be removed. 
06/12/23 21:30:52.129 - STEP: Recreating the namespace 06/12/23 21:30:59.142 - STEP: Verifying there is no service in the namespace 06/12/23 21:30:59.204 - [AfterEach] [sig-api-machinery] Namespaces [Serial] + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 + STEP: validating api versions 07/27/23 02:02:13.709 + Jul 27 02:02:13.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4903 api-versions' + Jul 27 02:02:13.917: INFO: stderr: "" + Jul 27 02:02:13.917: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napiserver.openshift.io/v1\napps.openshift.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nauthorization.openshift.io/v1\nautoscaling/v1\nautoscaling/v2\nbatch/v1\nbuild.openshift.io/v1\ncertificates.k8s.io/v1\ncloudcredential.openshift.io/v1\nconfig.openshift.io/v1\nconsole.openshift.io/v1\nconsole.openshift.io/v1alpha1\ncontrolplane.operator.openshift.io/v1alpha1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta2\nflowcontrol.apiserver.k8s.io/v1beta3\nhelm.openshift.io/v1beta1\nibm.com/v1alpha1\nimage.openshift.io/v1\nimageregistry.operator.openshift.io/v1\ningress.operator.openshift.io/v1\nk8s.cni.cncf.io/v1\nmachineconfiguration.openshift.io/v1\nmetrics.k8s.io/v1beta1\nmigration.k8s.io/v1alpha1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nmonitoring.coreos.com/v1beta1\nnetwork.operator.openshift.io/v1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\noauth.openshift.io/v1\noperator.openshift.io/v1\noperator.openshift.io/v1alpha1\noperator.tigera.io/v1\noperators.coreos.com/v1\noperators.coreos.com/v1alpha1\noperators.coreos.com/v1alpha2\noperators.coreos.com/v2\npackages.operators.coreos.com/v1\nperformance.openshift.io/v1\nperformance.openshift.io/v1alpha1\nperformance.openshift.io/v2\npolicy/v1\nproject.openshift.io/v1\nquota.openshift.io/v1\nrbac.authorization.k8s.io/v1\nroute.openshift.io/v1\nsamples.operator.openshift.io/v1\nscheduling.k8s.io/v1\nsecurity.internal.openshift.io/v1\nsecurity.openshift.io/v1\nsnapshot.storage.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntemplate.openshift.io/v1\ntuned.openshift.io/v1\nuser.openshift.io/v1\nv1\nwhereabouts.cni.cncf.io/v1alpha1\n" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 21:30:59.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + Jul 27 02:02:13.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "namespaces-3188" for this suite. 06/12/23 21:30:59.259 - STEP: Destroying namespace "nsdeletetest-2068" for this suite. 06/12/23 21:30:59.28 - Jun 12 21:30:59.293: INFO: Namespace nsdeletetest-2068 was already deleted - STEP: Destroying namespace "nsdeletetest-6199" for this suite. 06/12/23 21:30:59.293 + STEP: Destroying namespace "kubectl-4903" for this suite. 
07/27/23 02:02:13.935 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSS ------------------------------- -[sig-node] Kubelet when scheduling a busybox command that always fails in a pod - should be possible to delete [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:135 -[BeforeEach] [sig-node] Kubelet +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:977 +[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:30:59.323 -Jun 12 21:30:59.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubelet-test 06/12/23 21:30:59.325 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:59.406 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:59.417 -[BeforeEach] [sig-node] Kubelet +STEP: Creating a kubernetes client 07/27/23 02:02:13.96 +Jul 27 02:02:13.960: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename statefulset 07/27/23 02:02:13.961 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:14.02 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:14.029 +[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Kubelet - test/e2e/common/node/kubelet.go:41 -[BeforeEach] when scheduling a busybox command that always fails in a pod - test/e2e/common/node/kubelet.go:85 -[It] should be possible to delete [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:135 -[AfterEach] [sig-node] Kubelet +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-7078 07/27/23 02:02:14.04 +[It] should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:977 +STEP: Creating statefulset ss in namespace statefulset-7078 07/27/23 02:02:14.078 +Jul 27 02:02:14.104: INFO: Found 0 stateful pods, waiting for 1 +Jul 27 02:02:24.116: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label 07/27/23 02:02:24.137 +STEP: Getting /status 07/27/23 02:02:24.157 +Jul 27 02:02:24.170: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status 07/27/23 02:02:24.17 +Jul 27 02:02:24.207: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated 07/27/23 02:02:24.207 +Jul 27 02:02:24.212: INFO: Observed &StatefulSet event: ADDED +Jul 27 02:02:24.212: INFO: Found Statefulset ss in namespace statefulset-7078 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Jul 27 02:02:24.212: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status 07/27/23 02:02:24.212 +Jul 27 02:02:24.213: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Jul 27 
02:02:24.235: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched 07/27/23 02:02:24.236 +Jul 27 02:02:24.241: INFO: Observed &StatefulSet event: ADDED +Jul 27 02:02:24.241: INFO: Observed Statefulset ss in namespace statefulset-7078 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Jul 27 02:02:24.241: INFO: Observed &StatefulSet event: MODIFIED +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jul 27 02:02:24.241: INFO: Deleting all statefulset in ns statefulset-7078 +Jul 27 02:02:24.254: INFO: Scaling statefulset ss to 0 +Jul 27 02:02:34.330: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 02:02:34.356: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 -Jun 12 21:30:59.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Kubelet +Jul 27 02:02:34.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Kubelet +[DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Kubelet +[DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 -STEP: Destroying namespace "kubelet-test-8731" for this suite. 06/12/23 21:30:59.521 +STEP: Destroying namespace "statefulset-7078" for this suite. 07/27/23 02:02:34.416 ------------------------------ -• [0.229 seconds] -[sig-node] Kubelet -test/e2e/common/node/framework.go:23 - when scheduling a busybox command that always fails in a pod - test/e2e/common/node/kubelet.go:82 - should be possible to delete [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:135 +• [SLOW TEST] [20.478 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:977 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Kubelet + [BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:30:59.323 - Jun 12 21:30:59.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubelet-test 06/12/23 21:30:59.325 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:59.406 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:59.417 - [BeforeEach] [sig-node] Kubelet + STEP: Creating a kubernetes client 07/27/23 02:02:13.96 + Jul 27 02:02:13.960: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename statefulset 07/27/23 02:02:13.961 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:14.02 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:14.029 + [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Kubelet - test/e2e/common/node/kubelet.go:41 - [BeforeEach] when scheduling a busybox command 
that always fails in a pod - test/e2e/common/node/kubelet.go:85 - [It] should be possible to delete [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:135 - [AfterEach] [sig-node] Kubelet + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-7078 07/27/23 02:02:14.04 + [It] should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:977 + STEP: Creating statefulset ss in namespace statefulset-7078 07/27/23 02:02:14.078 + Jul 27 02:02:14.104: INFO: Found 0 stateful pods, waiting for 1 + Jul 27 02:02:24.116: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Patch Statefulset to include a label 07/27/23 02:02:24.137 + STEP: Getting /status 07/27/23 02:02:24.157 + Jul 27 02:02:24.170: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) + STEP: updating the StatefulSet Status 07/27/23 02:02:24.17 + Jul 27 02:02:24.207: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the statefulset status to be updated 07/27/23 02:02:24.207 + Jul 27 02:02:24.212: INFO: Observed &StatefulSet event: ADDED + Jul 27 02:02:24.212: INFO: Found Statefulset ss in namespace statefulset-7078 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Jul 27 02:02:24.212: INFO: Statefulset ss has an updated status + STEP: patching the Statefulset Status 07/27/23 02:02:24.212 + Jul 27 02:02:24.213: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Jul 27 02:02:24.235: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Statefulset status to be patched 07/27/23 02:02:24.236 + Jul 27 02:02:24.241: INFO: Observed &StatefulSet event: ADDED + Jul 27 02:02:24.241: INFO: Observed Statefulset ss in namespace statefulset-7078 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Jul 27 02:02:24.241: INFO: Observed &StatefulSet event: MODIFIED + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jul 27 02:02:24.241: INFO: Deleting all statefulset in ns statefulset-7078 + Jul 27 02:02:24.254: INFO: Scaling statefulset ss to 0 + Jul 27 02:02:34.330: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 02:02:34.356: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 - Jun 12 21:30:59.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Kubelet + Jul 27 02:02:34.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Kubelet + [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Kubelet + [DeferCleanup (Each)] [sig-apps] StatefulSet tear down 
framework | framework.go:193 - STEP: Destroying namespace "kubelet-test-8731" for this suite. 06/12/23 21:30:59.521 + STEP: Destroying namespace "statefulset-7078" for this suite. 07/27/23 02:02:34.416 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:02:34.441 +Jul 27 02:02:34.441: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 02:02:34.442 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:34.489 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:34.499 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 +STEP: creating service in namespace services-6384 07/27/23 02:02:34.514 +STEP: creating service affinity-nodeport-transition in namespace services-6384 07/27/23 02:02:34.514 +STEP: creating replication controller affinity-nodeport-transition in namespace services-6384 07/27/23 02:02:34.585 +I0727 02:02:34.607520 20 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-6384, replica count: 3 +I0727 02:02:37.668865 20 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jul 27 02:02:37.733: INFO: Creating new exec pod +Jul 27 02:02:37.773: INFO: Waiting up to 5m0s for pod "execpod-affinityj6lxd" in namespace "services-6384" to be "running" +Jul 27 02:02:37.784: INFO: Pod "execpod-affinityj6lxd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.58911ms +Jul 27 02:02:39.794: INFO: Pod "execpod-affinityj6lxd": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.02093463s +Jul 27 02:02:39.794: INFO: Pod "execpod-affinityj6lxd" satisfied condition "running" +Jul 27 02:02:40.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport-transition 80' +Jul 27 02:02:41.102: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Jul 27 02:02:41.103: INFO: stdout: "" +Jul 27 02:02:41.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c nc -v -z -w 2 172.21.78.20 80' +Jul 27 02:02:41.298: INFO: stderr: "+ nc -v -z -w 2 172.21.78.20 80\nConnection to 172.21.78.20 80 port [tcp/http] succeeded!\n" +Jul 27 02:02:41.298: INFO: stdout: "" +Jul 27 02:02:41.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c nc -v -z -w 2 10.245.128.18 30713' +Jul 27 02:02:41.596: INFO: stderr: "+ nc -v -z -w 2 10.245.128.18 30713\nConnection to 10.245.128.18 30713 port [tcp/*] succeeded!\n" +Jul 27 02:02:41.596: INFO: stdout: "" +Jul 27 02:02:41.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c nc -v -z -w 2 10.245.128.17 30713' +Jul 27 02:02:41.930: INFO: stderr: "+ nc -v -z -w 2 10.245.128.17 30713\nConnection to 10.245.128.17 30713 port [tcp/*] succeeded!\n" +Jul 27 02:02:41.930: INFO: stdout: "" +Jul 27 02:02:42.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.245.128.17:30713/ ; done' +Jul 27 02:02:42.477: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n" +Jul 27 02:02:42.477: INFO: stdout: 
"\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-t2nfk\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-t2nfk\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-t2nfk\naffinity-nodeport-transition-t2nfk\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-t2nfk\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-t2nfk" +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 +Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk +Jul 27 02:02:42.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.245.128.17:30713/ ; done' +Jul 27 02:02:42.831: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n" +Jul 27 02:02:42.832: INFO: stdout: 
"\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg" +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg +Jul 27 02:02:42.832: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6384, will wait for the garbage collector to delete the pods 07/27/23 02:02:42.859 +Jul 27 02:02:42.951: INFO: Deleting ReplicationController affinity-nodeport-transition took: 24.50975ms +Jul 27 02:02:43.051: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.103587ms +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jul 27 02:02:45.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-6384" for this suite. 
07/27/23 02:02:45.934 +------------------------------ +• [SLOW TEST] [11.517 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:02:34.441 + Jul 27 02:02:34.441: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 02:02:34.442 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:34.489 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:34.499 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 + STEP: creating service in namespace services-6384 07/27/23 02:02:34.514 + STEP: creating service affinity-nodeport-transition in namespace services-6384 07/27/23 02:02:34.514 + STEP: creating replication controller affinity-nodeport-transition in namespace services-6384 07/27/23 02:02:34.585 + I0727 02:02:34.607520 20 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-6384, replica count: 3 + I0727 02:02:37.668865 20 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jul 27 02:02:37.733: INFO: Creating new exec pod + Jul 27 02:02:37.773: INFO: Waiting up to 5m0s for pod "execpod-affinityj6lxd" in namespace "services-6384" to be "running" + Jul 27 02:02:37.784: INFO: Pod "execpod-affinityj6lxd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.58911ms + Jul 27 02:02:39.794: INFO: Pod "execpod-affinityj6lxd": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.02093463s + Jul 27 02:02:39.794: INFO: Pod "execpod-affinityj6lxd" satisfied condition "running" + Jul 27 02:02:40.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport-transition 80' + Jul 27 02:02:41.102: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" + Jul 27 02:02:41.103: INFO: stdout: "" + Jul 27 02:02:41.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c nc -v -z -w 2 172.21.78.20 80' + Jul 27 02:02:41.298: INFO: stderr: "+ nc -v -z -w 2 172.21.78.20 80\nConnection to 172.21.78.20 80 port [tcp/http] succeeded!\n" + Jul 27 02:02:41.298: INFO: stdout: "" + Jul 27 02:02:41.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c nc -v -z -w 2 10.245.128.18 30713' + Jul 27 02:02:41.596: INFO: stderr: "+ nc -v -z -w 2 10.245.128.18 30713\nConnection to 10.245.128.18 30713 port [tcp/*] succeeded!\n" + Jul 27 02:02:41.596: INFO: stdout: "" + Jul 27 02:02:41.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c nc -v -z -w 2 10.245.128.17 30713' + Jul 27 02:02:41.930: INFO: stderr: "+ nc -v -z -w 2 10.245.128.17 30713\nConnection to 10.245.128.17 30713 port [tcp/*] succeeded!\n" + Jul 27 02:02:41.930: INFO: stdout: "" + Jul 27 02:02:42.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.245.128.17:30713/ ; done' + Jul 27 02:02:42.477: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n" + Jul 27 02:02:42.477: INFO: stdout: 
"\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-t2nfk\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-t2nfk\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-t2nfk\naffinity-nodeport-transition-t2nfk\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-t2nfk\naffinity-nodeport-transition-87bz4\naffinity-nodeport-transition-t2nfk" + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-87bz4 + Jul 27 02:02:42.477: INFO: Received response from host: affinity-nodeport-transition-t2nfk + Jul 27 02:02:42.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-6384 exec execpod-affinityj6lxd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.245.128.17:30713/ ; done' + Jul 27 02:02:42.831: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.245.128.17:30713/\n" + Jul 27 02:02:42.832: INFO: stdout: 
"\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg\naffinity-nodeport-transition-ftmqg" + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Received response from host: affinity-nodeport-transition-ftmqg + Jul 27 02:02:42.832: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6384, will wait for the garbage collector to delete the pods 07/27/23 02:02:42.859 + Jul 27 02:02:42.951: INFO: Deleting ReplicationController affinity-nodeport-transition took: 24.50975ms + Jul 27 02:02:43.051: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.103587ms + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jul 27 02:02:45.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-6384" for this suite. 
07/27/23 02:02:45.934 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI - should provide container's memory request [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:235 + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:30:59.559 -Jun 12 21:30:59.559: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:30:59.561 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:59.621 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:59.663 +STEP: Creating a kubernetes client 07/27/23 02:02:45.959 +Jul 27 02:02:45.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:02:45.96 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:46.002 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:46.012 [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:44 -[It] should provide container's memory request [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:235 -STEP: Creating a pod to test downward API volume plugin 06/12/23 21:30:59.676 -Jun 12 21:30:59.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977" in namespace "projected-9310" to be "Succeeded or Failed" -Jun 12 21:30:59.712: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977": Phase="Pending", Reason="", readiness=false. Elapsed: 11.824794ms -Jun 12 21:31:01.722: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022266374s -Jun 12 21:31:03.722: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977": Phase="Running", Reason="", readiness=false. Elapsed: 4.021803554s -Jun 12 21:31:05.774: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977": Phase="Running", Reason="", readiness=false. Elapsed: 6.073500082s -Jun 12 21:31:07.753: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.052926896s -STEP: Saw pod success 06/12/23 21:31:07.753 -Jun 12 21:31:07.754: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977" satisfied condition "Succeeded or Failed" -Jun 12 21:31:07.762: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977 container client-container: -STEP: delete the pod 06/12/23 21:31:07.779 -Jun 12 21:31:07.806: INFO: Waiting for pod downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977 to disappear -Jun 12 21:31:07.814: INFO: Pod downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977 no longer exists +[It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 +STEP: Creating the pod 07/27/23 02:02:46.025 +Jul 27 02:02:46.055: INFO: Waiting up to 5m0s for pod "labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3" in namespace "projected-229" to be "running and ready" +Jul 27 02:02:46.064: INFO: Pod "labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.573164ms +Jul 27 02:02:46.064: INFO: The phase of Pod labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:02:48.075: INFO: Pod "labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3": Phase="Running", Reason="", readiness=true. Elapsed: 2.020026522s +Jul 27 02:02:48.075: INFO: The phase of Pod labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3 is Running (Ready = true) +Jul 27 02:02:48.075: INFO: Pod "labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3" satisfied condition "running and ready" +Jul 27 02:02:48.639: INFO: Successfully updated pod "labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3" [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 21:31:07.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:02:50.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "projected-9310" for this suite. 06/12/23 21:31:07.841 +STEP: Destroying namespace "projected-229" for this suite. 
07/27/23 02:02:50.689 ------------------------------ -• [SLOW TEST] [8.304 seconds] +• [4.753 seconds] [sig-storage] Projected downwardAPI test/e2e/common/storage/framework.go:23 - should provide container's memory request [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:235 + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:30:59.559 - Jun 12 21:30:59.559: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:30:59.561 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:30:59.621 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:30:59.663 + STEP: Creating a kubernetes client 07/27/23 02:02:45.959 + Jul 27 02:02:45.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:02:45.96 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:46.002 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:46.012 [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:44 - [It] should provide container's memory request [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:235 - STEP: Creating a pod to test downward API volume plugin 06/12/23 21:30:59.676 - Jun 12 21:30:59.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977" in namespace "projected-9310" to be "Succeeded or Failed" - Jun 12 21:30:59.712: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977": Phase="Pending", Reason="", readiness=false. Elapsed: 11.824794ms - Jun 12 21:31:01.722: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022266374s - Jun 12 21:31:03.722: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977": Phase="Running", Reason="", readiness=false. Elapsed: 4.021803554s - Jun 12 21:31:05.774: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977": Phase="Running", Reason="", readiness=false. Elapsed: 6.073500082s - Jun 12 21:31:07.753: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.052926896s - STEP: Saw pod success 06/12/23 21:31:07.753 - Jun 12 21:31:07.754: INFO: Pod "downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977" satisfied condition "Succeeded or Failed" - Jun 12 21:31:07.762: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977 container client-container: - STEP: delete the pod 06/12/23 21:31:07.779 - Jun 12 21:31:07.806: INFO: Waiting for pod downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977 to disappear - Jun 12 21:31:07.814: INFO: Pod downwardapi-volume-8ba1c898-d29b-4b42-9951-4ce6877a7977 no longer exists + [It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 + STEP: Creating the pod 07/27/23 02:02:46.025 + Jul 27 02:02:46.055: INFO: Waiting up to 5m0s for pod "labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3" in namespace "projected-229" to be "running and ready" + Jul 27 02:02:46.064: INFO: Pod "labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.573164ms + Jul 27 02:02:46.064: INFO: The phase of Pod labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:02:48.075: INFO: Pod "labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3": Phase="Running", Reason="", readiness=true. Elapsed: 2.020026522s + Jul 27 02:02:48.075: INFO: The phase of Pod labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3 is Running (Ready = true) + Jul 27 02:02:48.075: INFO: Pod "labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3" satisfied condition "running and ready" + Jul 27 02:02:48.639: INFO: Successfully updated pod "labelsupdate11a7631b-8dda-4c20-bfc9-64f4394e0ef3" [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 21:31:07.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:02:50.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "projected-9310" for this suite. 06/12/23 21:31:07.841 + STEP: Destroying namespace "projected-229" for this suite. 
07/27/23 02:02:50.689 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:02:50.713 +Jul 27 02:02:50.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename statefulset 07/27/23 02:02:50.714 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:50.768 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:50.777 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-6621 07/27/23 02:02:50.787 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 +STEP: Creating a new StatefulSet 07/27/23 02:02:50.824 +Jul 27 02:02:50.899: INFO: Found 0 stateful pods, waiting for 3 +Jul 27 02:03:00.910: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 02:03:00.910: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 02:03:00.910: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 07/27/23 02:03:00.941 +Jul 27 02:03:00.984: INFO: Updating stateful set ss2 +STEP: Creating a new revision 07/27/23 02:03:00.984 +STEP: Not applying an update when the partition is greater than the number of replicas 07/27/23 02:03:11.032 +STEP: Performing a canary update 07/27/23 02:03:11.033 +Jul 27 02:03:11.073: INFO: Updating stateful set ss2 +Jul 27 02:03:11.099: INFO: Waiting for Pod statefulset-6621/ss2-2 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 +STEP: Restoring Pods to the correct revision when they are deleted 07/27/23 02:03:21.122 +Jul 27 02:03:21.184: INFO: Found 2 stateful pods, waiting for 3 +Jul 27 02:03:31.195: INFO: Found 2 stateful pods, waiting for 3 +Jul 27 02:03:41.201: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 02:03:41.201: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 02:03:41.201: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update 07/27/23 02:03:41.229 +Jul 27 02:03:41.361: INFO: Updating stateful set ss2 +Jul 27 02:03:41.417: INFO: Waiting for Pod statefulset-6621/ss2-1 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 +Jul 27 02:03:51.504: INFO: Updating stateful set ss2 +Jul 27 02:03:51.525: INFO: Waiting for StatefulSet statefulset-6621/ss2 to complete update +Jul 27 02:03:51.525: INFO: Waiting for Pod statefulset-6621/ss2-0 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 +Jul 27 02:04:01.554: INFO: Waiting for StatefulSet 
statefulset-6621/ss2 to complete update +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jul 27 02:04:11.565: INFO: Deleting all statefulset in ns statefulset-6621 +Jul 27 02:04:11.579: INFO: Scaling statefulset ss2 to 0 +Jul 27 02:04:21.664: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 02:04:21.708: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Jul 27 02:04:21.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-6621" for this suite. 07/27/23 02:04:21.793 +------------------------------ +• [SLOW TEST] [91.102 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:02:50.713 + Jul 27 02:02:50.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename statefulset 07/27/23 02:02:50.714 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:02:50.768 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:02:50.777 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-6621 07/27/23 02:02:50.787 + [It] should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 + STEP: Creating a new StatefulSet 07/27/23 02:02:50.824 + Jul 27 02:02:50.899: INFO: Found 0 stateful pods, waiting for 3 + Jul 27 02:03:00.910: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 02:03:00.910: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 02:03:00.910: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 07/27/23 02:03:00.941 + Jul 27 02:03:00.984: INFO: Updating stateful set ss2 + STEP: Creating a new revision 07/27/23 02:03:00.984 + STEP: Not applying an update when the partition is greater than the number of replicas 07/27/23 02:03:11.032 + STEP: Performing a canary update 07/27/23 02:03:11.033 + Jul 27 02:03:11.073: INFO: Updating stateful set ss2 + Jul 27 02:03:11.099: INFO: Waiting for Pod statefulset-6621/ss2-2 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 + STEP: Restoring Pods to the correct revision when they are deleted 07/27/23 02:03:21.122 + Jul 27 02:03:21.184: INFO: Found 2 stateful pods, waiting for 3 + Jul 27 02:03:31.195: INFO: Found 2 
stateful pods, waiting for 3 + Jul 27 02:03:41.201: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 02:03:41.201: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 02:03:41.201: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Performing a phased rolling update 07/27/23 02:03:41.229 + Jul 27 02:03:41.361: INFO: Updating stateful set ss2 + Jul 27 02:03:41.417: INFO: Waiting for Pod statefulset-6621/ss2-1 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 + Jul 27 02:03:51.504: INFO: Updating stateful set ss2 + Jul 27 02:03:51.525: INFO: Waiting for StatefulSet statefulset-6621/ss2 to complete update + Jul 27 02:03:51.525: INFO: Waiting for Pod statefulset-6621/ss2-0 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 + Jul 27 02:04:01.554: INFO: Waiting for StatefulSet statefulset-6621/ss2 to complete update + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jul 27 02:04:11.565: INFO: Deleting all statefulset in ns statefulset-6621 + Jul 27 02:04:11.579: INFO: Scaling statefulset ss2 to 0 + Jul 27 02:04:21.664: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 02:04:21.708: INFO: Deleting statefulset ss2 + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Jul 27 02:04:21.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-6621" for this suite. 
07/27/23 02:04:21.793 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/common/storage/projected_secret.go:88 [BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:31:07.867 -Jun 12 21:31:07.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:31:07.87 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:31:07.931 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:31:07.943 +STEP: Creating a kubernetes client 07/27/23 02:04:21.816 +Jul 27 02:04:21.816: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:04:21.817 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:04:21.858 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:04:21.868 [BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/common/storage/projected_secret.go:88 -STEP: Creating projection with secret that has name projected-secret-test-map-487ef33a-3a51-4d54-9151-d3f24572ae2f 06/12/23 21:31:07.954 -STEP: Creating a pod to test consume secrets 06/12/23 21:31:07.969 -Jun 12 21:31:07.991: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95" in namespace "projected-8401" to be "Succeeded or Failed" -Jun 12 21:31:07.999: INFO: Pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205928ms -Jun 12 21:31:10.008: INFO: Pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017624887s -Jun 12 21:31:12.008: INFO: Pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017695151s -Jun 12 21:31:14.008: INFO: Pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.017430981s -STEP: Saw pod success 06/12/23 21:31:14.009 -Jun 12 21:31:14.009: INFO: Pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95" satisfied condition "Succeeded or Failed" -Jun 12 21:31:14.022: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95 container projected-secret-volume-test: -STEP: delete the pod 06/12/23 21:31:14.041 -Jun 12 21:31:14.064: INFO: Waiting for pod pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95 to disappear -Jun 12 21:31:14.071: INFO: Pod pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95 no longer exists +STEP: Creating projection with secret that has name projected-secret-test-map-5c1f330a-d5f6-4233-aa8d-793d7c6634a2 07/27/23 02:04:21.908 +STEP: Creating a pod to test consume secrets 07/27/23 02:04:21.946 +Jul 27 02:04:21.983: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b" in namespace "projected-4766" to be "Succeeded or Failed" +Jul 27 02:04:22.006: INFO: Pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.697496ms +Jul 27 02:04:24.026: INFO: Pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042608601s +Jul 27 02:04:26.020: INFO: Pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037016365s +Jul 27 02:04:28.020: INFO: Pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037107797s +STEP: Saw pod success 07/27/23 02:04:28.02 +Jul 27 02:04:28.020: INFO: Pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b" satisfied condition "Succeeded or Failed" +Jul 27 02:04:28.032: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b container projected-secret-volume-test: +STEP: delete the pod 07/27/23 02:04:28.091 +Jul 27 02:04:28.117: INFO: Waiting for pod pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b to disappear +Jul 27 02:04:28.125: INFO: Pod pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b no longer exists [AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 -Jun 12 21:31:14.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:04:28.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 -STEP: Destroying namespace "projected-8401" for this suite. 06/12/23 21:31:14.086 +STEP: Destroying namespace "projected-4766" for this suite. 
07/27/23 02:04:28.139 ------------------------------ -• [SLOW TEST] [6.243 seconds] +• [SLOW TEST] [6.347 seconds] [sig-storage] Projected secret test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] @@ -18929,2223 +15833,2805 @@ test/e2e/common/storage/framework.go:23 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:31:07.867 - Jun 12 21:31:07.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:31:07.87 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:31:07.931 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:31:07.943 + STEP: Creating a kubernetes client 07/27/23 02:04:21.816 + Jul 27 02:04:21.816: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:04:21.817 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:04:21.858 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:04:21.868 [BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/common/storage/projected_secret.go:88 - STEP: Creating projection with secret that has name projected-secret-test-map-487ef33a-3a51-4d54-9151-d3f24572ae2f 06/12/23 21:31:07.954 - STEP: Creating a pod to test consume secrets 06/12/23 21:31:07.969 - Jun 12 21:31:07.991: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95" in namespace "projected-8401" to be "Succeeded or Failed" - Jun 12 21:31:07.999: INFO: Pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205928ms - Jun 12 21:31:10.008: INFO: Pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017624887s - Jun 12 21:31:12.008: INFO: Pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017695151s - Jun 12 21:31:14.008: INFO: Pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.017430981s - STEP: Saw pod success 06/12/23 21:31:14.009 - Jun 12 21:31:14.009: INFO: Pod "pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95" satisfied condition "Succeeded or Failed" - Jun 12 21:31:14.022: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95 container projected-secret-volume-test: - STEP: delete the pod 06/12/23 21:31:14.041 - Jun 12 21:31:14.064: INFO: Waiting for pod pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95 to disappear - Jun 12 21:31:14.071: INFO: Pod pod-projected-secrets-d29d8f59-2d41-4609-9961-a988f2e35c95 no longer exists + STEP: Creating projection with secret that has name projected-secret-test-map-5c1f330a-d5f6-4233-aa8d-793d7c6634a2 07/27/23 02:04:21.908 + STEP: Creating a pod to test consume secrets 07/27/23 02:04:21.946 + Jul 27 02:04:21.983: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b" in namespace "projected-4766" to be "Succeeded or Failed" + Jul 27 02:04:22.006: INFO: Pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.697496ms + Jul 27 02:04:24.026: INFO: Pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042608601s + Jul 27 02:04:26.020: INFO: Pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037016365s + Jul 27 02:04:28.020: INFO: Pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037107797s + STEP: Saw pod success 07/27/23 02:04:28.02 + Jul 27 02:04:28.020: INFO: Pod "pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b" satisfied condition "Succeeded or Failed" + Jul 27 02:04:28.032: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b container projected-secret-volume-test: + STEP: delete the pod 07/27/23 02:04:28.091 + Jul 27 02:04:28.117: INFO: Waiting for pod pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b to disappear + Jul 27 02:04:28.125: INFO: Pod pod-projected-secrets-333659e8-390e-4916-a845-bcf88ab0a56b no longer exists [AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 - Jun 12 21:31:14.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:04:28.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 - STEP: Destroying namespace "projected-8401" for this suite. 06/12/23 21:31:14.086 + STEP: Destroying namespace "projected-4766" for this suite. 
07/27/23 02:04:28.139 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSSS ------------------------------ -[sig-scheduling] SchedulerPreemption [Serial] - validates lower priority pod preemption by critical pod [Conformance] - test/e2e/scheduling/preemption.go:224 -[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:31:14.111 -Jun 12 21:31:14.111: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sched-preemption 06/12/23 21:31:14.112 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:31:14.185 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:31:14.223 -[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] +STEP: Creating a kubernetes client 07/27/23 02:04:28.164 +Jul 27 02:04:28.164: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sched-pred 07/27/23 02:04:28.165 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:04:28.226 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:04:28.241 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] - test/e2e/scheduling/preemption.go:97 -Jun 12 21:31:14.327: INFO: Waiting up to 1m0s for all nodes to be ready -Jun 12 21:32:14.550: INFO: Waiting for terminating namespaces to be deleted... -[It] validates lower priority pod preemption by critical pod [Conformance] - test/e2e/scheduling/preemption.go:224 -STEP: Create pods that use 4/5 of node resources. 06/12/23 21:32:14.576 -Jun 12 21:32:14.626: INFO: Created pod: pod0-0-sched-preemption-low-priority -Jun 12 21:32:14.639: INFO: Created pod: pod0-1-sched-preemption-medium-priority -Jun 12 21:32:14.672: INFO: Created pod: pod1-0-sched-preemption-medium-priority -Jun 12 21:32:14.689: INFO: Created pod: pod1-1-sched-preemption-medium-priority -Jun 12 21:32:14.738: INFO: Created pod: pod2-0-sched-preemption-medium-priority -Jun 12 21:32:14.758: INFO: Created pod: pod2-1-sched-preemption-medium-priority -STEP: Wait for pods to be scheduled. 06/12/23 21:32:14.758 -Jun 12 21:32:14.759: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-611" to be "running" -Jun 12 21:32:14.773: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 14.126627ms -Jun 12 21:32:16.792: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032710563s -Jun 12 21:32:18.784: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.024891094s -Jun 12 21:32:18.784: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" -Jun 12 21:32:18.784: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-611" to be "running" -Jun 12 21:32:18.792: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.141815ms -Jun 12 21:32:18.793: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" -Jun 12 21:32:18.793: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-611" to be "running" -Jun 12 21:32:18.801: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.229854ms -Jun 12 21:32:18.801: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" -Jun 12 21:32:18.801: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-611" to be "running" -Jun 12 21:32:18.810: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 7.869427ms -Jun 12 21:32:18.810: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" -Jun 12 21:32:18.810: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-611" to be "running" -Jun 12 21:32:18.818: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.070129ms -Jun 12 21:32:18.818: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" -Jun 12 21:32:18.818: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-611" to be "running" -Jun 12 21:32:18.826: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 7.77958ms -Jun 12 21:32:18.827: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" -STEP: Run a critical pod that use same resources as that of a lower priority pod 06/12/23 21:32:18.827 -Jun 12 21:32:18.853: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" -Jun 12 21:32:18.861: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078411ms -Jun 12 21:32:20.871: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017862147s -Jun 12 21:32:22.871: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018508331s -Jun 12 21:32:24.870: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017451746s -Jun 12 21:32:26.872: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 8.018993632s -Jun 12 21:32:26.872: INFO: Pod "critical-pod" satisfied condition "running" -[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Jul 27 02:04:28.277: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jul 27 02:04:28.306: INFO: Waiting for terminating namespaces to be deleted... 
+Jul 27 02:04:28.357: INFO: +Logging pods the apiserver thinks is on node 10.245.128.17 before test +Jul 27 02:04:28.411: INFO: calico-node-6gb7d from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container calico-node ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: ibm-keepalived-watcher-krnnt from kube-system started at 2023-07-26 23:12:13 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: ibm-master-proxy-static-10.245.128.17 from kube-system started at 2023-07-26 23:12:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container pause ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: ibm-vpc-block-csi-controller-0 from kube-system started at 2023-07-26 23:25:41 +0000 UTC (7 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container csi-attacher ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container csi-provisioner ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container csi-resizer ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container csi-snapshotter ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container iks-vpc-block-driver ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: ibm-vpc-block-csi-node-pb2sj from kube-system started at 2023-07-26 23:12:13 +0000 UTC (4 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: vpn-7d8b749c64-87d9s from kube-system started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container vpn ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: tuned-wnh5v from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container tuned ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: csi-snapshot-controller-5b77984679-frszr from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container snapshot-controller ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: csi-snapshot-webhook-78b8c8d77c-2pk6s from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container webhook ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: console-7fd48bd95f-wksvb from openshift-console started at 2023-07-26 23:27:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container console ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: downloads-6874b45df6-w7xkq from openshift-console started at 2023-07-26 23:22:05 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container download-server ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: 
dns-default-5mw2g from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container dns ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: node-resolver-2kt92 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: image-registry-69fbbd6d88-6xgnp from openshift-image-registry started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container registry ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: node-ca-pmxp9 from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container node-ca ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: ingress-canary-wh5qj from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 02:04:28.411: INFO: router-default-865b575f54-qjwfv from openshift-ingress started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.411: INFO: Container router ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: openshift-kube-proxy-r7t77 from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: migrator-77d7ddf546-9g7xm from openshift-kube-storage-version-migrator started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container migrator ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: certified-operators-qlqcc from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: community-operators-dtgmg from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: redhat-marketplace-vnvdb from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: redhat-operators-9qw52 from openshift-marketplace started at 2023-07-27 01:30:34 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-07-26 23:27:44 +0000 UTC (6 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container alertmanager ready: true, restart count 1 +Jul 27 02:04:28.412: INFO: Container alertmanager-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container prom-label-proxy 
ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: kube-state-metrics-575bd9d6b6-2wk6g from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container kube-state-metrics ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: node-exporter-2tscc from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: openshift-state-metrics-99754b784-vdbrs from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container openshift-state-metrics ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: prometheus-adapter-657855c676-qlc95 from openshift-monitoring started at 2023-07-26 23:26:23 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container prometheus-adapter ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-07-26 23:27:58 +0000 UTC (6 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container prometheus ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container prometheus-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container thanos-sidecar ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: prometheus-operator-765bbdfd45-twq98 from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container prometheus-operator ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-hct4l from openshift-monitoring started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: telemeter-client-c964ff8c9-xszvz from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container reload ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container telemeter-client ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: thanos-querier-7f9c896d7f-xqld6 from openshift-monitoring started at 2023-07-26 23:26:32 +0000 UTC (6 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 +Jul 27 
02:04:28.412: INFO: Container oauth-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container thanos-query ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: multus-5x56j from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: multus-additional-cni-plugins-p7gf5 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: multus-admission-controller-8ccd764f4-j68g7 from openshift-multus started at 2023-07-26 23:25:38 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container multus-admission-controller ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: network-metrics-daemon-djvdx from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: network-check-target-2j7hq from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: collect-profiles-28173690-9m5v7 from openshift-operator-lifecycle-manager started at 2023-07-27 01:30:00 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 02:04:28.412: INFO: collect-profiles-28173720-hn9xm from openshift-operator-lifecycle-manager started at 2023-07-27 02:00:00 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 02:04:28.412: INFO: packageserver-b9964c68-p2fd4 from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container packageserver ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: service-ca-665db46585-9cprv from openshift-service-ca started at 2023-07-26 23:21:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container service-ca-controller ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: sonobuoy-e2e-job-17fd703895604ed7 from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container e2e ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: Container systemd-logs ready: true, restart count 0 +Jul 27 02:04:28.412: INFO: tigera-operator-5b48cf996b-5zb5v from tigera-operator started at 2023-07-26 23:12:21 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.412: INFO: Container tigera-operator ready: true, restart count 6 +Jul 27 
02:04:28.412: INFO: +Logging pods the apiserver thinks is on node 10.245.128.18 before test +Jul 27 02:04:28.474: INFO: calico-kube-controllers-5575667dcd-ps6n9 from calico-system started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.474: INFO: Container calico-kube-controllers ready: true, restart count 0 +Jul 27 02:04:28.474: INFO: calico-node-2vsm9 from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.474: INFO: Container calico-node ready: true, restart count 0 +Jul 27 02:04:28.474: INFO: calico-typha-5549cc5cdc-nsmq8 from calico-system started at 2023-07-26 23:19:56 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.474: INFO: Container calico-typha ready: true, restart count 0 +Jul 27 02:04:28.474: INFO: managed-storage-validation-webhooks-6dfcff48fb-4xxsq from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.474: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 +Jul 27 02:04:28.474: INFO: managed-storage-validation-webhooks-6dfcff48fb-k6pcc from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.474: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 +Jul 27 02:04:28.474: INFO: managed-storage-validation-webhooks-6dfcff48fb-swht2 from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.474: INFO: Container managed-storage-validation-webhooks ready: true, restart count 1 +Jul 27 02:04:28.474: INFO: ibm-keepalived-watcher-wjqkn from kube-system started at 2023-07-26 23:12:23 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.474: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 02:04:28.474: INFO: ibm-master-proxy-static-10.245.128.18 from kube-system started at 2023-07-26 23:12:20 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.474: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 02:04:28.474: INFO: Container pause ready: true, restart count 0 +Jul 27 02:04:28.474: INFO: ibm-storage-metrics-agent-9fd89b544-292dm from kube-system started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container ibm-storage-metrics-agent ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: ibm-vpc-block-csi-node-lp4cr from kube-system started at 2023-07-26 23:12:23 +0000 UTC (4 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: cluster-node-tuning-operator-5b85c5d47b-9cbp5 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: tuned-zxrv4 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container tuned ready: true, 
restart count 0 +Jul 27 02:04:28.475: INFO: cluster-samples-operator-588cc6f8cc-fh5hj from openshift-cluster-samples-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container cluster-samples-operator ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: cluster-storage-operator-586d5b4d95-tq97j from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container cluster-storage-operator ready: true, restart count 1 +Jul 27 02:04:28.475: INFO: csi-snapshot-controller-5b77984679-wxrv8 from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container snapshot-controller ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: csi-snapshot-controller-operator-7c998b6874-9flch from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: csi-snapshot-webhook-78b8c8d77c-jqbww from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container webhook ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: console-operator-8486d48d6-4xzr7 from openshift-console-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container console-operator ready: true, restart count 1 +Jul 27 02:04:28.475: INFO: Container conversion-webhook-server ready: true, restart count 2 +Jul 27 02:04:28.475: INFO: console-7fd48bd95f-pzr2s from openshift-console started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container console ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: downloads-6874b45df6-nm9q6 from openshift-console started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container download-server ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: dns-operator-7c549b76fd-t56tt from openshift-dns-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container dns-operator ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: dns-default-r982z from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container dns ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: node-resolver-txjwq from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: cluster-image-registry-operator-96d4d84cf-65k8l from openshift-image-registry started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container cluster-image-registry-operator ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: node-ca-ntzct from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 
02:04:28.475: INFO: Container node-ca ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: ingress-canary-jphk8 from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: ingress-operator-64bc7f7964-9sbtr from openshift-ingress-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container ingress-operator ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: router-default-865b575f54-b946s from openshift-ingress started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container router ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: insights-operator-5db47f7654-r8xdq from openshift-insights started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container insights-operator ready: true, restart count 1 +Jul 27 02:04:28.475: INFO: openshift-kube-proxy-6hxmn from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: kube-storage-version-migrator-operator-f4b8bf677-c24bz from openshift-kube-storage-version-migrator-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 +Jul 27 02:04:28.475: INFO: marketplace-operator-5ddbd9fdbc-lrhrq from openshift-marketplace started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container marketplace-operator ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-07-27 01:50:10 +0000 UTC (6 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container alertmanager ready: true, restart count 1 +Jul 27 02:04:28.475: INFO: Container alertmanager-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: cluster-monitoring-operator-7448698f65-65wn9 from openshift-monitoring started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container cluster-monitoring-operator ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: node-exporter-d46sh from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: prometheus-adapter-657855c676-hwbr7 from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container prometheus-adapter ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: prometheus-k8s-0 from 
openshift-monitoring started at 2023-07-27 01:50:11 +0000 UTC (6 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container prometheus ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container prometheus-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container thanos-sidecar ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-jvbxn from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: thanos-querier-7f9c896d7f-fk8mk from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (6 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container oauth-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container thanos-query ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: multus-additional-cni-plugins-njhzm from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: multus-admission-controller-8ccd764f4-7kmkg from openshift-multus started at 2023-07-26 23:25:53 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container multus-admission-controller ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: multus-zhftn from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: network-metrics-daemon-cglg2 from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: network-check-source-6777f6456-pt5nn from openshift-network-diagnostics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container check-endpoints ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: network-check-target-85dgs from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: network-operator-6dddb4f685-gc764 from openshift-network-operator started at 2023-07-26 23:17:11 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container network-operator ready: true, restart count 1 +Jul 27 02:04:28.475: INFO: catalog-operator-69ccd5899d-lrpkv from 
openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container catalog-operator ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: collect-profiles-28173705-ctzz8 from openshift-operator-lifecycle-manager started at 2023-07-27 01:45:00 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 02:04:28.475: INFO: olm-operator-8448b5677d-bf2sl from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container olm-operator ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: package-server-manager-579d664b8c-klrwt from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container package-server-manager ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: packageserver-b9964c68-6gdlp from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container packageserver ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: metrics-6ff747d58d-llt7w from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container metrics ready: true, restart count 2 +Jul 27 02:04:28.475: INFO: push-gateway-6448c6788-hrxtl from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container push-gateway ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: service-ca-operator-5db987957b-pftl9 from openshift-service-ca-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container service-ca-operator ready: true, restart count 1 +Jul 27 02:04:28.475: INFO: sonobuoy from sonobuoy started at 2023-07-27 01:26:57 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-7p2cx from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.475: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: Container systemd-logs ready: true, restart count 0 +Jul 27 02:04:28.475: INFO: +Logging pods the apiserver thinks is on node 10.245.128.19 before test +Jul 27 02:04:28.520: INFO: calico-node-tnbmn from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container calico-node ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: calico-typha-5549cc5cdc-25l9k from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container calico-typha ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: ibm-keepalived-watcher-228gb from kube-system started at 2023-07-26 23:12:15 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: ibm-master-proxy-static-10.245.128.19 from kube-system started at 2023-07-26 23:12:13 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: 
Container pause ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: ibm-vpc-block-csi-node-m8dqf from kube-system started at 2023-07-26 23:12:15 +0000 UTC (4 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: tuned-8xqng from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container tuned ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: dns-default-9k25b from openshift-dns started at 2023-07-27 01:50:33 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container dns ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: node-resolver-s2q44 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: node-ca-kz4vp from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container node-ca ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: ingress-canary-nf2dw from openshift-ingress-canary started at 2023-07-27 01:50:33 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: openshift-kube-proxy-4qg5c from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: node-exporter-vz8m9 from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: multus-287s2 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: multus-additional-cni-plugins-xns7c from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: network-metrics-daemon-xpw2q from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: network-check-target-hf22d from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-p74pn from sonobuoy started at 
2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:04:28.520: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:04:28.520: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 +STEP: verifying the node has the label node 10.245.128.17 07/27/23 02:04:28.629 +STEP: verifying the node has the label node 10.245.128.18 07/27/23 02:04:28.666 +STEP: verifying the node has the label node 10.245.128.19 07/27/23 02:04:28.74 +Jul 27 02:04:28.849: INFO: Pod calico-kube-controllers-5575667dcd-ps6n9 requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod calico-node-2vsm9 requesting resource cpu=250m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod calico-node-6gb7d requesting resource cpu=250m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod calico-node-tnbmn requesting resource cpu=250m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod calico-typha-5549cc5cdc-25l9k requesting resource cpu=250m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod calico-typha-5549cc5cdc-nsmq8 requesting resource cpu=250m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod managed-storage-validation-webhooks-6dfcff48fb-4xxsq requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod managed-storage-validation-webhooks-6dfcff48fb-k6pcc requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod managed-storage-validation-webhooks-6dfcff48fb-swht2 requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod ibm-keepalived-watcher-228gb requesting resource cpu=5m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod ibm-keepalived-watcher-krnnt requesting resource cpu=5m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod ibm-keepalived-watcher-wjqkn requesting resource cpu=5m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod ibm-master-proxy-static-10.245.128.17 requesting resource cpu=26m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod ibm-master-proxy-static-10.245.128.18 requesting resource cpu=26m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod ibm-master-proxy-static-10.245.128.19 requesting resource cpu=26m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod ibm-storage-metrics-agent-9fd89b544-292dm requesting resource cpu=60m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod ibm-vpc-block-csi-controller-0 requesting resource cpu=165m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod ibm-vpc-block-csi-node-lp4cr requesting resource cpu=55m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod ibm-vpc-block-csi-node-m8dqf requesting resource cpu=55m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod ibm-vpc-block-csi-node-pb2sj requesting resource cpu=55m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod vpn-7d8b749c64-87d9s requesting resource cpu=5m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod cluster-node-tuning-operator-5b85c5d47b-9cbp5 requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod tuned-8xqng requesting resource cpu=10m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod tuned-wnh5v requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod tuned-zxrv4 requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod cluster-samples-operator-588cc6f8cc-fh5hj requesting resource cpu=20m on Node 10.245.128.18 +Jul 27 
02:04:28.849: INFO: Pod cluster-storage-operator-586d5b4d95-tq97j requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod csi-snapshot-controller-5b77984679-frszr requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod csi-snapshot-controller-5b77984679-wxrv8 requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod csi-snapshot-controller-operator-7c998b6874-9flch requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod csi-snapshot-webhook-78b8c8d77c-2pk6s requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod csi-snapshot-webhook-78b8c8d77c-jqbww requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod console-operator-8486d48d6-4xzr7 requesting resource cpu=20m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod console-7fd48bd95f-pzr2s requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod console-7fd48bd95f-wksvb requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod downloads-6874b45df6-nm9q6 requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod downloads-6874b45df6-w7xkq requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod dns-operator-7c549b76fd-t56tt requesting resource cpu=20m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod dns-default-5mw2g requesting resource cpu=60m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod dns-default-9k25b requesting resource cpu=60m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod dns-default-r982z requesting resource cpu=60m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod node-resolver-2kt92 requesting resource cpu=5m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod node-resolver-s2q44 requesting resource cpu=5m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod node-resolver-txjwq requesting resource cpu=5m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod cluster-image-registry-operator-96d4d84cf-65k8l requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod image-registry-69fbbd6d88-6xgnp requesting resource cpu=100m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod node-ca-kz4vp requesting resource cpu=10m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod node-ca-ntzct requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod node-ca-pmxp9 requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod ingress-canary-jphk8 requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod ingress-canary-nf2dw requesting resource cpu=10m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod ingress-canary-wh5qj requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod ingress-operator-64bc7f7964-9sbtr requesting resource cpu=20m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod router-default-865b575f54-b946s requesting resource cpu=100m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod router-default-865b575f54-qjwfv requesting resource cpu=100m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod insights-operator-5db47f7654-r8xdq requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod openshift-kube-proxy-4qg5c requesting resource cpu=110m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod openshift-kube-proxy-6hxmn requesting resource cpu=110m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod 
openshift-kube-proxy-r7t77 requesting resource cpu=110m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod kube-storage-version-migrator-operator-f4b8bf677-c24bz requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod migrator-77d7ddf546-9g7xm requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod certified-operators-qlqcc requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod community-operators-dtgmg requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod marketplace-operator-5ddbd9fdbc-lrhrq requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod redhat-marketplace-vnvdb requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod redhat-operators-9qw52 requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod alertmanager-main-0 requesting resource cpu=9m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod alertmanager-main-1 requesting resource cpu=9m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod cluster-monitoring-operator-7448698f65-65wn9 requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod kube-state-metrics-575bd9d6b6-2wk6g requesting resource cpu=4m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod node-exporter-2tscc requesting resource cpu=9m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod node-exporter-d46sh requesting resource cpu=9m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod node-exporter-vz8m9 requesting resource cpu=9m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod openshift-state-metrics-99754b784-vdbrs requesting resource cpu=3m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod prometheus-adapter-657855c676-hwbr7 requesting resource cpu=1m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod prometheus-adapter-657855c676-qlc95 requesting resource cpu=1m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod prometheus-k8s-0 requesting resource cpu=75m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod prometheus-k8s-1 requesting resource cpu=75m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod prometheus-operator-765bbdfd45-twq98 requesting resource cpu=6m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod prometheus-operator-admission-webhook-84c7bbc8cc-hct4l requesting resource cpu=5m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod prometheus-operator-admission-webhook-84c7bbc8cc-jvbxn requesting resource cpu=5m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod telemeter-client-c964ff8c9-xszvz requesting resource cpu=3m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod thanos-querier-7f9c896d7f-fk8mk requesting resource cpu=15m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod thanos-querier-7f9c896d7f-xqld6 requesting resource cpu=15m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod multus-287s2 requesting resource cpu=10m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod multus-5x56j requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod multus-additional-cni-plugins-njhzm requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod multus-additional-cni-plugins-p7gf5 requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod multus-additional-cni-plugins-xns7c requesting resource cpu=10m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod multus-admission-controller-8ccd764f4-7kmkg requesting resource cpu=20m 
on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod multus-admission-controller-8ccd764f4-j68g7 requesting resource cpu=20m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod multus-zhftn requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod network-metrics-daemon-cglg2 requesting resource cpu=20m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod network-metrics-daemon-djvdx requesting resource cpu=20m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod network-metrics-daemon-xpw2q requesting resource cpu=20m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod network-check-source-6777f6456-pt5nn requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod network-check-target-2j7hq requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod network-check-target-85dgs requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod network-check-target-hf22d requesting resource cpu=10m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod network-operator-6dddb4f685-gc764 requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod catalog-operator-69ccd5899d-lrpkv requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod olm-operator-8448b5677d-bf2sl requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod package-server-manager-579d664b8c-klrwt requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod packageserver-b9964c68-6gdlp requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod packageserver-b9964c68-p2fd4 requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod metrics-6ff747d58d-llt7w requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod push-gateway-6448c6788-hrxtl requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod service-ca-operator-5db987957b-pftl9 requesting resource cpu=10m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod service-ca-665db46585-9cprv requesting resource cpu=10m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod sonobuoy requesting resource cpu=0m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod sonobuoy-e2e-job-17fd703895604ed7 requesting resource cpu=0m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-7p2cx requesting resource cpu=0m on Node 10.245.128.18 +Jul 27 02:04:28.849: INFO: Pod sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-p74pn requesting resource cpu=0m on Node 10.245.128.19 +Jul 27 02:04:28.849: INFO: Pod sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d requesting resource cpu=0m on Node 10.245.128.17 +Jul 27 02:04:28.849: INFO: Pod tigera-operator-5b48cf996b-5zb5v requesting resource cpu=100m on Node 10.245.128.17 +STEP: Starting Pods to consume most of the cluster CPU. 07/27/23 02:04:28.849 +Jul 27 02:04:28.849: INFO: Creating a pod which consumes cpu=1812m on Node 10.245.128.17 +Jul 27 02:04:28.876: INFO: Creating a pod which consumes cpu=1711m on Node 10.245.128.18 +Jul 27 02:04:28.895: INFO: Creating a pod which consumes cpu=2142m on Node 10.245.128.19 +Jul 27 02:04:28.946: INFO: Waiting up to 5m0s for pod "filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7" in namespace "sched-pred-8221" to be "running" +Jul 27 02:04:28.962: INFO: Pod "filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.733952ms +Jul 27 02:04:30.973: INFO: Pod "filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7": Phase="Running", Reason="", readiness=true. Elapsed: 2.026553517s +Jul 27 02:04:30.973: INFO: Pod "filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7" satisfied condition "running" +Jul 27 02:04:30.973: INFO: Waiting up to 5m0s for pod "filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb" in namespace "sched-pred-8221" to be "running" +Jul 27 02:04:30.982: INFO: Pod "filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb": Phase="Running", Reason="", readiness=true. Elapsed: 9.402236ms +Jul 27 02:04:30.982: INFO: Pod "filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb" satisfied condition "running" +Jul 27 02:04:30.982: INFO: Waiting up to 5m0s for pod "filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac" in namespace "sched-pred-8221" to be "running" +Jul 27 02:04:30.994: INFO: Pod "filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 11.315576ms +Jul 27 02:04:33.008: INFO: Pod "filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac": Phase="Running", Reason="", readiness=true. Elapsed: 2.025470172s +Jul 27 02:04:33.008: INFO: Pod "filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac" satisfied condition "running" +STEP: Creating another pod that requires unavailable amount of CPU. 07/27/23 02:04:33.008 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7.177597304f184acc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8221/filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7 to 10.245.128.17] 07/27/23 02:04:33.019 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7.177597307cc416d4], Reason = [AddedInterface], Message = [Add eth0 [172.17.218.57/32] from k8s-pod-network] 07/27/23 02:04:33.019 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7.17759730890821e5], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 07/27/23 02:04:33.019 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7.1775973098953e82], Reason = [Created], Message = [Created container filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7] 07/27/23 02:04:33.019 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7.177597309b50f2c5], Reason = [Started], Message = [Started container filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7] 07/27/23 02:04:33.019 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac.17759730536af2c9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8221/filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac to 10.245.128.19] 07/27/23 02:04:33.019 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac.1775973080146f17], Reason = [AddedInterface], Message = [Add eth0 [172.17.225.39/32] from k8s-pod-network] 07/27/23 02:04:33.019 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac.1775973089fa558e], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 07/27/23 02:04:33.019 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac.17759730949eac4a], Reason = [Created], Message = [Created container filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac] 07/27/23 
02:04:33.019 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac.1775973095f23eff], Reason = [Started], Message = [Started container filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac] 07/27/23 02:04:33.019 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb.17759730520a038e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8221/filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb to 10.245.128.18] 07/27/23 02:04:33.02 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb.1775973086dc1039], Reason = [AddedInterface], Message = [Add eth0 [172.17.230.184/32] from k8s-pod-network] 07/27/23 02:04:33.02 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb.1775973097185dd0], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 07/27/23 02:04:33.02 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb.17759730a691fc98], Reason = [Created], Message = [Created container filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb] 07/27/23 02:04:33.02 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb.17759730a8ee691b], Reason = [Started], Message = [Started container filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb] 07/27/23 02:04:33.02 +STEP: Considering event: +Type = [Warning], Name = [additional-pod.1775973146c3099d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient cpu. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..] 07/27/23 02:04:33.115 +STEP: removing the label node off the node 10.245.128.17 07/27/23 02:04:34.055 +STEP: verifying the node doesn't have the label node 07/27/23 02:04:34.109 +STEP: removing the label node off the node 10.245.128.18 07/27/23 02:04:34.122 +STEP: verifying the node doesn't have the label node 07/27/23 02:04:34.16 +STEP: removing the label node off the node 10.245.128.19 07/27/23 02:04:34.174 +STEP: verifying the node doesn't have the label node 07/27/23 02:04:34.209 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 21:32:26.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] - test/e2e/scheduling/preemption.go:84 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] +Jul 27 02:04:34.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "sched-preemption-611" for this suite. 06/12/23 21:32:27.122 +STEP: Destroying namespace "sched-pred-8221" for this suite. 
07/27/23 02:04:34.241 ------------------------------ -• [SLOW TEST] [73.046 seconds] -[sig-scheduling] SchedulerPreemption [Serial] +• [SLOW TEST] [6.103 seconds] +[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 - validates lower priority pod preemption by critical pod [Conformance] - test/e2e/scheduling/preemption.go:224 + validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:31:14.111 - Jun 12 21:31:14.111: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sched-preemption 06/12/23 21:31:14.112 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:31:14.185 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:31:14.223 - [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + STEP: Creating a kubernetes client 07/27/23 02:04:28.164 + Jul 27 02:04:28.164: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sched-pred 07/27/23 02:04:28.165 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:04:28.226 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:04:28.241 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] - test/e2e/scheduling/preemption.go:97 - Jun 12 21:31:14.327: INFO: Waiting up to 1m0s for all nodes to be ready - Jun 12 21:32:14.550: INFO: Waiting for terminating namespaces to be deleted... - [It] validates lower priority pod preemption by critical pod [Conformance] - test/e2e/scheduling/preemption.go:224 - STEP: Create pods that use 4/5 of node resources. 06/12/23 21:32:14.576 - Jun 12 21:32:14.626: INFO: Created pod: pod0-0-sched-preemption-low-priority - Jun 12 21:32:14.639: INFO: Created pod: pod0-1-sched-preemption-medium-priority - Jun 12 21:32:14.672: INFO: Created pod: pod1-0-sched-preemption-medium-priority - Jun 12 21:32:14.689: INFO: Created pod: pod1-1-sched-preemption-medium-priority - Jun 12 21:32:14.738: INFO: Created pod: pod2-0-sched-preemption-medium-priority - Jun 12 21:32:14.758: INFO: Created pod: pod2-1-sched-preemption-medium-priority - STEP: Wait for pods to be scheduled. 06/12/23 21:32:14.758 - Jun 12 21:32:14.759: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-611" to be "running" - Jun 12 21:32:14.773: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 14.126627ms - Jun 12 21:32:16.792: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032710563s - Jun 12 21:32:18.784: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.024891094s - Jun 12 21:32:18.784: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" - Jun 12 21:32:18.784: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-611" to be "running" - Jun 12 21:32:18.792: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.141815ms - Jun 12 21:32:18.793: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" - Jun 12 21:32:18.793: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-611" to be "running" - Jun 12 21:32:18.801: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.229854ms - Jun 12 21:32:18.801: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" - Jun 12 21:32:18.801: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-611" to be "running" - Jun 12 21:32:18.810: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 7.869427ms - Jun 12 21:32:18.810: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" - Jun 12 21:32:18.810: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-611" to be "running" - Jun 12 21:32:18.818: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.070129ms - Jun 12 21:32:18.818: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" - Jun 12 21:32:18.818: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-611" to be "running" - Jun 12 21:32:18.826: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 7.77958ms - Jun 12 21:32:18.827: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" - STEP: Run a critical pod that use same resources as that of a lower priority pod 06/12/23 21:32:18.827 - Jun 12 21:32:18.853: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" - Jun 12 21:32:18.861: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078411ms - Jun 12 21:32:20.871: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017862147s - Jun 12 21:32:22.871: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018508331s - Jun 12 21:32:24.870: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017451746s - Jun 12 21:32:26.872: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 8.018993632s - Jun 12 21:32:26.872: INFO: Pod "critical-pod" satisfied condition "running" - [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Jul 27 02:04:28.277: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Jul 27 02:04:28.306: INFO: Waiting for terminating namespaces to be deleted... 
+ Jul 27 02:04:28.357: INFO: + Logging pods the apiserver thinks is on node 10.245.128.17 before test + Jul 27 02:04:28.411: INFO: calico-node-6gb7d from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container calico-node ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: ibm-keepalived-watcher-krnnt from kube-system started at 2023-07-26 23:12:13 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container keepalived-watcher ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: ibm-master-proxy-static-10.245.128.17 from kube-system started at 2023-07-26 23:12:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container pause ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: ibm-vpc-block-csi-controller-0 from kube-system started at 2023-07-26 23:25:41 +0000 UTC (7 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container csi-attacher ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container csi-provisioner ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container csi-resizer ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container csi-snapshotter ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container iks-vpc-block-driver ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: ibm-vpc-block-csi-node-pb2sj from kube-system started at 2023-07-26 23:12:13 +0000 UTC (4 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: vpn-7d8b749c64-87d9s from kube-system started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container vpn ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: tuned-wnh5v from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container tuned ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: csi-snapshot-controller-5b77984679-frszr from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container snapshot-controller ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: csi-snapshot-webhook-78b8c8d77c-2pk6s from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container webhook ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: console-7fd48bd95f-wksvb from openshift-console started at 2023-07-26 23:27:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container console ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: downloads-6874b45df6-w7xkq from openshift-console started at 2023-07-26 23:22:05 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container download-server ready: true, restart 
count 0 + Jul 27 02:04:28.411: INFO: dns-default-5mw2g from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container dns ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: node-resolver-2kt92 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: image-registry-69fbbd6d88-6xgnp from openshift-image-registry started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container registry ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: node-ca-pmxp9 from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container node-ca ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: ingress-canary-wh5qj from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 02:04:28.411: INFO: router-default-865b575f54-qjwfv from openshift-ingress started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.411: INFO: Container router ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: openshift-kube-proxy-r7t77 from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: migrator-77d7ddf546-9g7xm from openshift-kube-storage-version-migrator started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container migrator ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: certified-operators-qlqcc from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: community-operators-dtgmg from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: redhat-marketplace-vnvdb from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: redhat-operators-9qw52 from openshift-marketplace started at 2023-07-27 01:30:34 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-07-26 23:27:44 +0000 UTC (6 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container alertmanager ready: true, restart count 1 + Jul 27 02:04:28.412: INFO: Container alertmanager-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-metric ready: true, 
restart count 0 + Jul 27 02:04:28.412: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: kube-state-metrics-575bd9d6b6-2wk6g from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container kube-state-metrics ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: node-exporter-2tscc from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: openshift-state-metrics-99754b784-vdbrs from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container openshift-state-metrics ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: prometheus-adapter-657855c676-qlc95 from openshift-monitoring started at 2023-07-26 23:26:23 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container prometheus-adapter ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-07-26 23:27:58 +0000 UTC (6 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container prometheus ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container prometheus-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container thanos-sidecar ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: prometheus-operator-765bbdfd45-twq98 from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container prometheus-operator ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-hct4l from openshift-monitoring started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: telemeter-client-c964ff8c9-xszvz from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container reload ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container telemeter-client ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: thanos-querier-7f9c896d7f-xqld6 from openshift-monitoring started at 2023-07-26 23:26:32 +0000 UTC (6 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 + 
Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container oauth-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container thanos-query ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: multus-5x56j from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: multus-additional-cni-plugins-p7gf5 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: multus-admission-controller-8ccd764f4-j68g7 from openshift-multus started at 2023-07-26 23:25:38 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container multus-admission-controller ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: network-metrics-daemon-djvdx from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: network-check-target-2j7hq from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: collect-profiles-28173690-9m5v7 from openshift-operator-lifecycle-manager started at 2023-07-27 01:30:00 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 02:04:28.412: INFO: collect-profiles-28173720-hn9xm from openshift-operator-lifecycle-manager started at 2023-07-27 02:00:00 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 02:04:28.412: INFO: packageserver-b9964c68-p2fd4 from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container packageserver ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: service-ca-665db46585-9cprv from openshift-service-ca started at 2023-07-26 23:21:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container service-ca-controller ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: sonobuoy-e2e-job-17fd703895604ed7 from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container e2e ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.412: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: Container systemd-logs ready: true, restart count 0 + Jul 27 02:04:28.412: INFO: tigera-operator-5b48cf996b-5zb5v from tigera-operator started at 2023-07-26 23:12:21 +0000 UTC (1 
container statuses recorded) + Jul 27 02:04:28.412: INFO: Container tigera-operator ready: true, restart count 6 + Jul 27 02:04:28.412: INFO: + Logging pods the apiserver thinks is on node 10.245.128.18 before test + Jul 27 02:04:28.474: INFO: calico-kube-controllers-5575667dcd-ps6n9 from calico-system started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.474: INFO: Container calico-kube-controllers ready: true, restart count 0 + Jul 27 02:04:28.474: INFO: calico-node-2vsm9 from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.474: INFO: Container calico-node ready: true, restart count 0 + Jul 27 02:04:28.474: INFO: calico-typha-5549cc5cdc-nsmq8 from calico-system started at 2023-07-26 23:19:56 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.474: INFO: Container calico-typha ready: true, restart count 0 + Jul 27 02:04:28.474: INFO: managed-storage-validation-webhooks-6dfcff48fb-4xxsq from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.474: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 + Jul 27 02:04:28.474: INFO: managed-storage-validation-webhooks-6dfcff48fb-k6pcc from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.474: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 + Jul 27 02:04:28.474: INFO: managed-storage-validation-webhooks-6dfcff48fb-swht2 from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.474: INFO: Container managed-storage-validation-webhooks ready: true, restart count 1 + Jul 27 02:04:28.474: INFO: ibm-keepalived-watcher-wjqkn from kube-system started at 2023-07-26 23:12:23 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.474: INFO: Container keepalived-watcher ready: true, restart count 0 + Jul 27 02:04:28.474: INFO: ibm-master-proxy-static-10.245.128.18 from kube-system started at 2023-07-26 23:12:20 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.474: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 02:04:28.474: INFO: Container pause ready: true, restart count 0 + Jul 27 02:04:28.474: INFO: ibm-storage-metrics-agent-9fd89b544-292dm from kube-system started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container ibm-storage-metrics-agent ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: ibm-vpc-block-csi-node-lp4cr from kube-system started at 2023-07-26 23:12:23 +0000 UTC (4 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: cluster-node-tuning-operator-5b85c5d47b-9cbp5 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: tuned-zxrv4 from 
openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container tuned ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: cluster-samples-operator-588cc6f8cc-fh5hj from openshift-cluster-samples-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container cluster-samples-operator ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: cluster-storage-operator-586d5b4d95-tq97j from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container cluster-storage-operator ready: true, restart count 1 + Jul 27 02:04:28.475: INFO: csi-snapshot-controller-5b77984679-wxrv8 from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container snapshot-controller ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: csi-snapshot-controller-operator-7c998b6874-9flch from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: csi-snapshot-webhook-78b8c8d77c-jqbww from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container webhook ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: console-operator-8486d48d6-4xzr7 from openshift-console-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container console-operator ready: true, restart count 1 + Jul 27 02:04:28.475: INFO: Container conversion-webhook-server ready: true, restart count 2 + Jul 27 02:04:28.475: INFO: console-7fd48bd95f-pzr2s from openshift-console started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container console ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: downloads-6874b45df6-nm9q6 from openshift-console started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container download-server ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: dns-operator-7c549b76fd-t56tt from openshift-dns-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container dns-operator ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: dns-default-r982z from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container dns ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: node-resolver-txjwq from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: cluster-image-registry-operator-96d4d84cf-65k8l from openshift-image-registry started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container 
cluster-image-registry-operator ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: node-ca-ntzct from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container node-ca ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: ingress-canary-jphk8 from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: ingress-operator-64bc7f7964-9sbtr from openshift-ingress-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container ingress-operator ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: router-default-865b575f54-b946s from openshift-ingress started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container router ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: insights-operator-5db47f7654-r8xdq from openshift-insights started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container insights-operator ready: true, restart count 1 + Jul 27 02:04:28.475: INFO: openshift-kube-proxy-6hxmn from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: kube-storage-version-migrator-operator-f4b8bf677-c24bz from openshift-kube-storage-version-migrator-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 + Jul 27 02:04:28.475: INFO: marketplace-operator-5ddbd9fdbc-lrhrq from openshift-marketplace started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container marketplace-operator ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-07-27 01:50:10 +0000 UTC (6 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container alertmanager ready: true, restart count 1 + Jul 27 02:04:28.475: INFO: Container alertmanager-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: cluster-monitoring-operator-7448698f65-65wn9 from openshift-monitoring started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container cluster-monitoring-operator ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: node-exporter-d46sh from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: prometheus-adapter-657855c676-hwbr7 from 
openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container prometheus-adapter ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: prometheus-k8s-0 from openshift-monitoring started at 2023-07-27 01:50:11 +0000 UTC (6 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container prometheus ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container prometheus-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container thanos-sidecar ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-jvbxn from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: thanos-querier-7f9c896d7f-fk8mk from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (6 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container oauth-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container thanos-query ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: multus-additional-cni-plugins-njhzm from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: multus-admission-controller-8ccd764f4-7kmkg from openshift-multus started at 2023-07-26 23:25:53 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container multus-admission-controller ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: multus-zhftn from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: network-metrics-daemon-cglg2 from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: network-check-source-6777f6456-pt5nn from openshift-network-diagnostics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container check-endpoints ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: network-check-target-85dgs from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: network-operator-6dddb4f685-gc764 from 
openshift-network-operator started at 2023-07-26 23:17:11 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container network-operator ready: true, restart count 1 + Jul 27 02:04:28.475: INFO: catalog-operator-69ccd5899d-lrpkv from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container catalog-operator ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: collect-profiles-28173705-ctzz8 from openshift-operator-lifecycle-manager started at 2023-07-27 01:45:00 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 02:04:28.475: INFO: olm-operator-8448b5677d-bf2sl from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container olm-operator ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: package-server-manager-579d664b8c-klrwt from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container package-server-manager ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: packageserver-b9964c68-6gdlp from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container packageserver ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: metrics-6ff747d58d-llt7w from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container metrics ready: true, restart count 2 + Jul 27 02:04:28.475: INFO: push-gateway-6448c6788-hrxtl from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container push-gateway ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: service-ca-operator-5db987957b-pftl9 from openshift-service-ca-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container service-ca-operator ready: true, restart count 1 + Jul 27 02:04:28.475: INFO: sonobuoy from sonobuoy started at 2023-07-27 01:26:57 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container kube-sonobuoy ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-7p2cx from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.475: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: Container systemd-logs ready: true, restart count 0 + Jul 27 02:04:28.475: INFO: + Logging pods the apiserver thinks is on node 10.245.128.19 before test + Jul 27 02:04:28.520: INFO: calico-node-tnbmn from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container calico-node ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: calico-typha-5549cc5cdc-25l9k from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container calico-typha ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: ibm-keepalived-watcher-228gb from kube-system started at 2023-07-26 23:12:15 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container keepalived-watcher ready: true, restart 
count 0 + Jul 27 02:04:28.520: INFO: ibm-master-proxy-static-10.245.128.19 from kube-system started at 2023-07-26 23:12:13 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: Container pause ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: ibm-vpc-block-csi-node-m8dqf from kube-system started at 2023-07-26 23:12:15 +0000 UTC (4 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: tuned-8xqng from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container tuned ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: dns-default-9k25b from openshift-dns started at 2023-07-27 01:50:33 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container dns ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: node-resolver-s2q44 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: node-ca-kz4vp from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container node-ca ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: ingress-canary-nf2dw from openshift-ingress-canary started at 2023-07-27 01:50:33 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: openshift-kube-proxy-4qg5c from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: node-exporter-vz8m9 from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: multus-287s2 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: multus-additional-cni-plugins-xns7c from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: network-metrics-daemon-xpw2q from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: network-check-target-hf22d 
from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-p74pn from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:04:28.520: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:04:28.520: INFO: Container systemd-logs ready: true, restart count 0 + [It] validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 + STEP: verifying the node has the label node 10.245.128.17 07/27/23 02:04:28.629 + STEP: verifying the node has the label node 10.245.128.18 07/27/23 02:04:28.666 + STEP: verifying the node has the label node 10.245.128.19 07/27/23 02:04:28.74 + Jul 27 02:04:28.849: INFO: Pod calico-kube-controllers-5575667dcd-ps6n9 requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod calico-node-2vsm9 requesting resource cpu=250m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod calico-node-6gb7d requesting resource cpu=250m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod calico-node-tnbmn requesting resource cpu=250m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod calico-typha-5549cc5cdc-25l9k requesting resource cpu=250m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod calico-typha-5549cc5cdc-nsmq8 requesting resource cpu=250m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod managed-storage-validation-webhooks-6dfcff48fb-4xxsq requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod managed-storage-validation-webhooks-6dfcff48fb-k6pcc requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod managed-storage-validation-webhooks-6dfcff48fb-swht2 requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod ibm-keepalived-watcher-228gb requesting resource cpu=5m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod ibm-keepalived-watcher-krnnt requesting resource cpu=5m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod ibm-keepalived-watcher-wjqkn requesting resource cpu=5m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod ibm-master-proxy-static-10.245.128.17 requesting resource cpu=26m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod ibm-master-proxy-static-10.245.128.18 requesting resource cpu=26m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod ibm-master-proxy-static-10.245.128.19 requesting resource cpu=26m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod ibm-storage-metrics-agent-9fd89b544-292dm requesting resource cpu=60m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod ibm-vpc-block-csi-controller-0 requesting resource cpu=165m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod ibm-vpc-block-csi-node-lp4cr requesting resource cpu=55m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod ibm-vpc-block-csi-node-m8dqf requesting resource cpu=55m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod ibm-vpc-block-csi-node-pb2sj requesting resource cpu=55m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod vpn-7d8b749c64-87d9s requesting resource cpu=5m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod cluster-node-tuning-operator-5b85c5d47b-9cbp5 requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod tuned-8xqng requesting resource cpu=10m on 
Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod tuned-wnh5v requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod tuned-zxrv4 requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod cluster-samples-operator-588cc6f8cc-fh5hj requesting resource cpu=20m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod cluster-storage-operator-586d5b4d95-tq97j requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod csi-snapshot-controller-5b77984679-frszr requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod csi-snapshot-controller-5b77984679-wxrv8 requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod csi-snapshot-controller-operator-7c998b6874-9flch requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod csi-snapshot-webhook-78b8c8d77c-2pk6s requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod csi-snapshot-webhook-78b8c8d77c-jqbww requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod console-operator-8486d48d6-4xzr7 requesting resource cpu=20m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod console-7fd48bd95f-pzr2s requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod console-7fd48bd95f-wksvb requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod downloads-6874b45df6-nm9q6 requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod downloads-6874b45df6-w7xkq requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod dns-operator-7c549b76fd-t56tt requesting resource cpu=20m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod dns-default-5mw2g requesting resource cpu=60m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod dns-default-9k25b requesting resource cpu=60m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod dns-default-r982z requesting resource cpu=60m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod node-resolver-2kt92 requesting resource cpu=5m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod node-resolver-s2q44 requesting resource cpu=5m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod node-resolver-txjwq requesting resource cpu=5m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod cluster-image-registry-operator-96d4d84cf-65k8l requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod image-registry-69fbbd6d88-6xgnp requesting resource cpu=100m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod node-ca-kz4vp requesting resource cpu=10m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod node-ca-ntzct requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod node-ca-pmxp9 requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod ingress-canary-jphk8 requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod ingress-canary-nf2dw requesting resource cpu=10m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod ingress-canary-wh5qj requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod ingress-operator-64bc7f7964-9sbtr requesting resource cpu=20m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod router-default-865b575f54-b946s requesting resource cpu=100m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod router-default-865b575f54-qjwfv requesting resource cpu=100m on Node 10.245.128.17 
+ Jul 27 02:04:28.849: INFO: Pod insights-operator-5db47f7654-r8xdq requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod openshift-kube-proxy-4qg5c requesting resource cpu=110m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod openshift-kube-proxy-6hxmn requesting resource cpu=110m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod openshift-kube-proxy-r7t77 requesting resource cpu=110m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod kube-storage-version-migrator-operator-f4b8bf677-c24bz requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod migrator-77d7ddf546-9g7xm requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod certified-operators-qlqcc requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod community-operators-dtgmg requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod marketplace-operator-5ddbd9fdbc-lrhrq requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod redhat-marketplace-vnvdb requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod redhat-operators-9qw52 requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod alertmanager-main-0 requesting resource cpu=9m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod alertmanager-main-1 requesting resource cpu=9m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod cluster-monitoring-operator-7448698f65-65wn9 requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod kube-state-metrics-575bd9d6b6-2wk6g requesting resource cpu=4m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod node-exporter-2tscc requesting resource cpu=9m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod node-exporter-d46sh requesting resource cpu=9m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod node-exporter-vz8m9 requesting resource cpu=9m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod openshift-state-metrics-99754b784-vdbrs requesting resource cpu=3m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod prometheus-adapter-657855c676-hwbr7 requesting resource cpu=1m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod prometheus-adapter-657855c676-qlc95 requesting resource cpu=1m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod prometheus-k8s-0 requesting resource cpu=75m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod prometheus-k8s-1 requesting resource cpu=75m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod prometheus-operator-765bbdfd45-twq98 requesting resource cpu=6m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod prometheus-operator-admission-webhook-84c7bbc8cc-hct4l requesting resource cpu=5m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod prometheus-operator-admission-webhook-84c7bbc8cc-jvbxn requesting resource cpu=5m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod telemeter-client-c964ff8c9-xszvz requesting resource cpu=3m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod thanos-querier-7f9c896d7f-fk8mk requesting resource cpu=15m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod thanos-querier-7f9c896d7f-xqld6 requesting resource cpu=15m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod multus-287s2 requesting resource cpu=10m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod multus-5x56j requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod 
multus-additional-cni-plugins-njhzm requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod multus-additional-cni-plugins-p7gf5 requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod multus-additional-cni-plugins-xns7c requesting resource cpu=10m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod multus-admission-controller-8ccd764f4-7kmkg requesting resource cpu=20m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod multus-admission-controller-8ccd764f4-j68g7 requesting resource cpu=20m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod multus-zhftn requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod network-metrics-daemon-cglg2 requesting resource cpu=20m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod network-metrics-daemon-djvdx requesting resource cpu=20m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod network-metrics-daemon-xpw2q requesting resource cpu=20m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod network-check-source-6777f6456-pt5nn requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod network-check-target-2j7hq requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod network-check-target-85dgs requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod network-check-target-hf22d requesting resource cpu=10m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod network-operator-6dddb4f685-gc764 requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod catalog-operator-69ccd5899d-lrpkv requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod olm-operator-8448b5677d-bf2sl requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod package-server-manager-579d664b8c-klrwt requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod packageserver-b9964c68-6gdlp requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod packageserver-b9964c68-p2fd4 requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod metrics-6ff747d58d-llt7w requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod push-gateway-6448c6788-hrxtl requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod service-ca-operator-5db987957b-pftl9 requesting resource cpu=10m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod service-ca-665db46585-9cprv requesting resource cpu=10m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod sonobuoy requesting resource cpu=0m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod sonobuoy-e2e-job-17fd703895604ed7 requesting resource cpu=0m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-7p2cx requesting resource cpu=0m on Node 10.245.128.18 + Jul 27 02:04:28.849: INFO: Pod sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-p74pn requesting resource cpu=0m on Node 10.245.128.19 + Jul 27 02:04:28.849: INFO: Pod sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d requesting resource cpu=0m on Node 10.245.128.17 + Jul 27 02:04:28.849: INFO: Pod tigera-operator-5b48cf996b-5zb5v requesting resource cpu=100m on Node 10.245.128.17 + STEP: Starting Pods to consume most of the cluster CPU. 
07/27/23 02:04:28.849 + Jul 27 02:04:28.849: INFO: Creating a pod which consumes cpu=1812m on Node 10.245.128.17 + Jul 27 02:04:28.876: INFO: Creating a pod which consumes cpu=1711m on Node 10.245.128.18 + Jul 27 02:04:28.895: INFO: Creating a pod which consumes cpu=2142m on Node 10.245.128.19 + Jul 27 02:04:28.946: INFO: Waiting up to 5m0s for pod "filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7" in namespace "sched-pred-8221" to be "running" + Jul 27 02:04:28.962: INFO: Pod "filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.733952ms + Jul 27 02:04:30.973: INFO: Pod "filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7": Phase="Running", Reason="", readiness=true. Elapsed: 2.026553517s + Jul 27 02:04:30.973: INFO: Pod "filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7" satisfied condition "running" + Jul 27 02:04:30.973: INFO: Waiting up to 5m0s for pod "filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb" in namespace "sched-pred-8221" to be "running" + Jul 27 02:04:30.982: INFO: Pod "filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb": Phase="Running", Reason="", readiness=true. Elapsed: 9.402236ms + Jul 27 02:04:30.982: INFO: Pod "filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb" satisfied condition "running" + Jul 27 02:04:30.982: INFO: Waiting up to 5m0s for pod "filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac" in namespace "sched-pred-8221" to be "running" + Jul 27 02:04:30.994: INFO: Pod "filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 11.315576ms + Jul 27 02:04:33.008: INFO: Pod "filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac": Phase="Running", Reason="", readiness=true. Elapsed: 2.025470172s + Jul 27 02:04:33.008: INFO: Pod "filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac" satisfied condition "running" + STEP: Creating another pod that requires unavailable amount of CPU. 
07/27/23 02:04:33.008 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7.177597304f184acc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8221/filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7 to 10.245.128.17] 07/27/23 02:04:33.019 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7.177597307cc416d4], Reason = [AddedInterface], Message = [Add eth0 [172.17.218.57/32] from k8s-pod-network] 07/27/23 02:04:33.019 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7.17759730890821e5], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 07/27/23 02:04:33.019 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7.1775973098953e82], Reason = [Created], Message = [Created container filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7] 07/27/23 02:04:33.019 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7.177597309b50f2c5], Reason = [Started], Message = [Started container filler-pod-384fb7c2-ed82-426b-ab91-111db5be4dd7] 07/27/23 02:04:33.019 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac.17759730536af2c9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8221/filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac to 10.245.128.19] 07/27/23 02:04:33.019 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac.1775973080146f17], Reason = [AddedInterface], Message = [Add eth0 [172.17.225.39/32] from k8s-pod-network] 07/27/23 02:04:33.019 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac.1775973089fa558e], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 07/27/23 02:04:33.019 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac.17759730949eac4a], Reason = [Created], Message = [Created container filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac] 07/27/23 02:04:33.019 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac.1775973095f23eff], Reason = [Started], Message = [Started container filler-pod-4650fb76-4f2b-4a76-ad61-1da0f940f7ac] 07/27/23 02:04:33.019 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb.17759730520a038e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8221/filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb to 10.245.128.18] 07/27/23 02:04:33.02 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb.1775973086dc1039], Reason = [AddedInterface], Message = [Add eth0 [172.17.230.184/32] from k8s-pod-network] 07/27/23 02:04:33.02 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb.1775973097185dd0], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 07/27/23 02:04:33.02 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb.17759730a691fc98], Reason = [Created], Message = [Created container filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb] 07/27/23 02:04:33.02 + STEP: Considering event: + Type = [Normal], Name = 
[filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb.17759730a8ee691b], Reason = [Started], Message = [Started container filler-pod-6d3d2084-424b-4b8e-bebf-a0c72fc953bb] 07/27/23 02:04:33.02 + STEP: Considering event: + Type = [Warning], Name = [additional-pod.1775973146c3099d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient cpu. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..] 07/27/23 02:04:33.115 + STEP: removing the label node off the node 10.245.128.17 07/27/23 02:04:34.055 + STEP: verifying the node doesn't have the label node 07/27/23 02:04:34.109 + STEP: removing the label node off the node 10.245.128.18 07/27/23 02:04:34.122 + STEP: verifying the node doesn't have the label node 07/27/23 02:04:34.16 + STEP: removing the label node off the node 10.245.128.19 07/27/23 02:04:34.174 + STEP: verifying the node doesn't have the label node 07/27/23 02:04:34.209 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 21:32:26.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] - test/e2e/scheduling/preemption.go:84 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + Jul 27 02:04:34.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "sched-preemption-611" for this suite. 06/12/23 21:32:27.122 + STEP: Destroying namespace "sched-pred-8221" for this suite. 
07/27/23 02:04:34.241 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Watchers - should observe add, update, and delete watch notifications on configmaps [Conformance] - test/e2e/apimachinery/watch.go:60 -[BeforeEach] [sig-api-machinery] Watchers +[sig-apps] CronJob + should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 +[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:32:27.158 -Jun 12 21:32:27.158: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename watch 06/12/23 21:32:27.163 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:32:27.242 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:32:27.287 -[BeforeEach] [sig-api-machinery] Watchers +STEP: Creating a kubernetes client 07/27/23 02:04:34.267 +Jul 27 02:04:34.267: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename cronjob 07/27/23 02:04:34.268 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:04:34.334 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:04:34.344 +[BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 -[It] should observe add, update, and delete watch notifications on configmaps [Conformance] - test/e2e/apimachinery/watch.go:60 -STEP: creating a watch on configmaps with label A 06/12/23 21:32:27.321 -STEP: creating a watch on configmaps with label B 06/12/23 21:32:27.357 -STEP: creating a watch on configmaps with label A or B 06/12/23 21:32:27.379 -STEP: creating a configmap with label A and ensuring the correct watchers observe the notification 06/12/23 21:32:27.433 -Jun 12 21:32:27.467: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105646 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:32:27.467: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105646 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} -STEP: modifying configmap A and ensuring the correct watchers observe the notification 06/12/23 21:32:27.468 -Jun 12 21:32:27.515: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105647 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:32:27.516: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105647 0 2023-06-12 21:32:27 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} -STEP: modifying configmap A again and ensuring the correct watchers observe the notification 06/12/23 21:32:27.517 -Jun 12 21:32:27.630: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105648 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:32:27.630: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105648 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} -STEP: deleting configmap A and ensuring the correct watchers observe the notification 06/12/23 21:32:27.63 -Jun 12 21:32:27.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105650 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:32:27.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105650 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} -STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 06/12/23 21:32:27.652 -Jun 12 21:32:27.679: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2675 b856f01e-6a8a-4bdb-b066-953a8b2ac0cf 105651 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:32:27.705: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2675 b856f01e-6a8a-4bdb-b066-953a8b2ac0cf 105651 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} -STEP: deleting configmap B and ensuring the correct watchers observe the notification 
06/12/23 21:32:37.706 -Jun 12 21:32:37.732: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2675 b856f01e-6a8a-4bdb-b066-953a8b2ac0cf 105798 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:32:37.732: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2675 b856f01e-6a8a-4bdb-b066-953a8b2ac0cf 105798 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} -[AfterEach] [sig-api-machinery] Watchers +[It] should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 +STEP: Creating a cronjob 07/27/23 02:04:34.355 +STEP: creating 07/27/23 02:04:34.355 +W0727 02:04:34.376606 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: getting 07/27/23 02:04:34.376 +STEP: listing 07/27/23 02:04:34.395 +STEP: watching 07/27/23 02:04:34.414 +Jul 27 02:04:34.414: INFO: starting watch +STEP: cluster-wide listing 07/27/23 02:04:34.419 +STEP: cluster-wide watching 07/27/23 02:04:34.457 +Jul 27 02:04:34.457: INFO: starting watch +STEP: patching 07/27/23 02:04:34.462 +STEP: updating 07/27/23 02:04:34.484 +Jul 27 02:04:34.530: INFO: waiting for watch events with expected annotations +Jul 27 02:04:34.530: INFO: saw patched and updated annotations +STEP: patching /status 07/27/23 02:04:34.531 +STEP: updating /status 07/27/23 02:04:34.564 +STEP: get /status 07/27/23 02:04:34.605 +STEP: deleting 07/27/23 02:04:34.625 +STEP: deleting a collection 07/27/23 02:04:34.719 +[AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 -Jun 12 21:32:47.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Watchers +Jul 27 02:04:34.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Watchers +[DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Watchers +[DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 -STEP: Destroying namespace "watch-2675" for this suite. 06/12/23 21:32:47.752 +STEP: Destroying namespace "cronjob-9964" for this suite. 
07/27/23 02:04:34.784 ------------------------------ -• [SLOW TEST] [20.648 seconds] -[sig-api-machinery] Watchers -test/e2e/apimachinery/framework.go:23 - should observe add, update, and delete watch notifications on configmaps [Conformance] - test/e2e/apimachinery/watch.go:60 +• [0.545 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Watchers + [BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:32:27.158 - Jun 12 21:32:27.158: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename watch 06/12/23 21:32:27.163 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:32:27.242 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:32:27.287 - [BeforeEach] [sig-api-machinery] Watchers + STEP: Creating a kubernetes client 07/27/23 02:04:34.267 + Jul 27 02:04:34.267: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename cronjob 07/27/23 02:04:34.268 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:04:34.334 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:04:34.344 + [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 - [It] should observe add, update, and delete watch notifications on configmaps [Conformance] - test/e2e/apimachinery/watch.go:60 - STEP: creating a watch on configmaps with label A 06/12/23 21:32:27.321 - STEP: creating a watch on configmaps with label B 06/12/23 21:32:27.357 - STEP: creating a watch on configmaps with label A or B 06/12/23 21:32:27.379 - STEP: creating a configmap with label A and ensuring the correct watchers observe the notification 06/12/23 21:32:27.433 - Jun 12 21:32:27.467: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105646 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:32:27.467: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105646 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} - STEP: modifying configmap A and ensuring the correct watchers observe the notification 06/12/23 21:32:27.468 - Jun 12 21:32:27.515: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105647 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:32:27.516: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a 
watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105647 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} - STEP: modifying configmap A again and ensuring the correct watchers observe the notification 06/12/23 21:32:27.517 - Jun 12 21:32:27.630: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105648 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:32:27.630: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105648 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} - STEP: deleting configmap A and ensuring the correct watchers observe the notification 06/12/23 21:32:27.63 - Jun 12 21:32:27.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105650 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:32:27.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2675 7d3d606f-2458-4d86-8f61-d3719899f9cc 105650 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} - STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 06/12/23 21:32:27.652 - Jun 12 21:32:27.679: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2675 b856f01e-6a8a-4bdb-b066-953a8b2ac0cf 105651 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:32:27.705: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2675 b856f01e-6a8a-4bdb-b066-953a8b2ac0cf 105651 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} 
}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} - STEP: deleting configmap B and ensuring the correct watchers observe the notification 06/12/23 21:32:37.706 - Jun 12 21:32:37.732: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2675 b856f01e-6a8a-4bdb-b066-953a8b2ac0cf 105798 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:32:37.732: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2675 b856f01e-6a8a-4bdb-b066-953a8b2ac0cf 105798 0 2023-06-12 21:32:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-06-12 21:32:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} - [AfterEach] [sig-api-machinery] Watchers + [It] should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 + STEP: Creating a cronjob 07/27/23 02:04:34.355 + STEP: creating 07/27/23 02:04:34.355 + W0727 02:04:34.376606 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: getting 07/27/23 02:04:34.376 + STEP: listing 07/27/23 02:04:34.395 + STEP: watching 07/27/23 02:04:34.414 + Jul 27 02:04:34.414: INFO: starting watch + STEP: cluster-wide listing 07/27/23 02:04:34.419 + STEP: cluster-wide watching 07/27/23 02:04:34.457 + Jul 27 02:04:34.457: INFO: starting watch + STEP: patching 07/27/23 02:04:34.462 + STEP: updating 07/27/23 02:04:34.484 + Jul 27 02:04:34.530: INFO: waiting for watch events with expected annotations + Jul 27 02:04:34.530: INFO: saw patched and updated annotations + STEP: patching /status 07/27/23 02:04:34.531 + STEP: updating /status 07/27/23 02:04:34.564 + STEP: get /status 07/27/23 02:04:34.605 + STEP: deleting 07/27/23 02:04:34.625 + STEP: deleting a collection 07/27/23 02:04:34.719 + [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 - Jun 12 21:32:47.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Watchers + Jul 27 02:04:34.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Watchers + [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Watchers + [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 - STEP: Destroying namespace "watch-2675" for this suite. 06/12/23 21:32:47.752 + STEP: Destroying namespace "cronjob-9964" for this suite. 
07/27/23 02:04:34.784 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSS +SSSS ------------------------------ -[sig-network] Ingress API - should support creating Ingress API operations [Conformance] - test/e2e/network/ingress.go:552 -[BeforeEach] [sig-network] Ingress API +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:32:47.81 -Jun 12 21:32:47.811: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename ingress 06/12/23 21:32:47.813 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:32:47.863 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:32:47.879 -[BeforeEach] [sig-network] Ingress API +STEP: Creating a kubernetes client 07/27/23 02:04:34.812 +Jul 27 02:04:34.812: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:04:34.813 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:04:34.859 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:04:34.869 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should support creating Ingress API operations [Conformance] - test/e2e/network/ingress.go:552 -STEP: getting /apis 06/12/23 21:32:47.892 -STEP: getting /apis/networking.k8s.io 06/12/23 21:32:47.937 -STEP: getting /apis/networking.k8s.iov1 06/12/23 21:32:47.943 -STEP: creating 06/12/23 21:32:47.948 -STEP: getting 06/12/23 21:32:47.997 -STEP: listing 06/12/23 21:32:48.009 -STEP: watching 06/12/23 21:32:48.021 -Jun 12 21:32:48.022: INFO: starting watch -STEP: cluster-wide listing 06/12/23 21:32:48.027 -STEP: cluster-wide watching 06/12/23 21:32:48.04 -Jun 12 21:32:48.040: INFO: starting watch -STEP: patching 06/12/23 21:32:48.045 -STEP: updating 06/12/23 21:32:48.062 -Jun 12 21:32:48.090: INFO: waiting for watch events with expected annotations -Jun 12 21:32:48.090: INFO: saw patched and updated annotations -STEP: patching /status 06/12/23 21:32:48.09 -STEP: updating /status 06/12/23 21:32:48.106 -STEP: get /status 06/12/23 21:32:48.135 -STEP: deleting 06/12/23 21:32:48.147 -STEP: deleting a collection 06/12/23 21:32:48.191 -[AfterEach] [sig-network] Ingress API +[It] updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 +STEP: set up a multi version CRD 07/27/23 02:04:34.88 +Jul 27 02:04:34.881: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: rename a version 07/27/23 02:04:45.769 +STEP: check the new version name is served 07/27/23 02:04:45.796 +STEP: check the old version name is removed 07/27/23 02:04:53.591 +STEP: check the other version is not changed 07/27/23 02:04:54.674 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:32:48.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Ingress API +Jul 27 02:05:04.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] 
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Ingress API +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Ingress API +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "ingress-5293" for this suite. 06/12/23 21:32:48.257 +STEP: Destroying namespace "crd-publish-openapi-5629" for this suite. 07/27/23 02:05:04.909 ------------------------------ -• [0.471 seconds] -[sig-network] Ingress API -test/e2e/network/common/framework.go:23 - should support creating Ingress API operations [Conformance] - test/e2e/network/ingress.go:552 +• [SLOW TEST] [30.121 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Ingress API + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:32:47.81 - Jun 12 21:32:47.811: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename ingress 06/12/23 21:32:47.813 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:32:47.863 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:32:47.879 - [BeforeEach] [sig-network] Ingress API + STEP: Creating a kubernetes client 07/27/23 02:04:34.812 + Jul 27 02:04:34.812: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:04:34.813 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:04:34.859 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:04:34.869 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should support creating Ingress API operations [Conformance] - test/e2e/network/ingress.go:552 - STEP: getting /apis 06/12/23 21:32:47.892 - STEP: getting /apis/networking.k8s.io 06/12/23 21:32:47.937 - STEP: getting /apis/networking.k8s.iov1 06/12/23 21:32:47.943 - STEP: creating 06/12/23 21:32:47.948 - STEP: getting 06/12/23 21:32:47.997 - STEP: listing 06/12/23 21:32:48.009 - STEP: watching 06/12/23 21:32:48.021 - Jun 12 21:32:48.022: INFO: starting watch - STEP: cluster-wide listing 06/12/23 21:32:48.027 - STEP: cluster-wide watching 06/12/23 21:32:48.04 - Jun 12 21:32:48.040: INFO: starting watch - STEP: patching 06/12/23 21:32:48.045 - STEP: updating 06/12/23 21:32:48.062 - Jun 12 21:32:48.090: INFO: waiting for watch events with expected annotations - Jun 12 21:32:48.090: INFO: saw patched and updated annotations - STEP: patching /status 06/12/23 21:32:48.09 - STEP: updating /status 06/12/23 21:32:48.106 - STEP: get /status 06/12/23 21:32:48.135 - STEP: deleting 06/12/23 21:32:48.147 - STEP: deleting a collection 06/12/23 21:32:48.191 - [AfterEach] [sig-network] Ingress API + [It] updates the published spec when one version gets renamed [Conformance] + 
test/e2e/apimachinery/crd_publish_openapi.go:391 + STEP: set up a multi version CRD 07/27/23 02:04:34.88 + Jul 27 02:04:34.881: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: rename a version 07/27/23 02:04:45.769 + STEP: check the new version name is served 07/27/23 02:04:45.796 + STEP: check the old version name is removed 07/27/23 02:04:53.591 + STEP: check the other version is not changed 07/27/23 02:04:54.674 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:32:48.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Ingress API + Jul 27 02:05:04.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Ingress API + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Ingress API + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "ingress-5293" for this suite. 06/12/23 21:32:48.257 + STEP: Destroying namespace "crd-publish-openapi-5629" for this suite. 07/27/23 02:05:04.909 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] Networking Granular Checks: Pods - should function for intra-pod communication: http [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:82 -[BeforeEach] [sig-network] Networking +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 +[BeforeEach] [sig-node] Kubelet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:32:48.293 -Jun 12 21:32:48.293: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pod-network-test 06/12/23 21:32:48.295 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:32:48.346 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:32:48.373 -[BeforeEach] [sig-network] Networking +STEP: Creating a kubernetes client 07/27/23 02:05:04.934 +Jul 27 02:05:04.934: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubelet-test 07/27/23 02:05:04.935 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:04.973 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:04.983 +[BeforeEach] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:31 -[It] should function for intra-pod communication: http [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:82 -STEP: Performing setup for networking test in namespace pod-network-test-4182 06/12/23 21:32:48.385 -STEP: creating a selector 06/12/23 21:32:48.386 -STEP: Creating the service pods in kubernetes 06/12/23 21:32:48.386 -Jun 12 21:32:48.386: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable -Jun 12 21:32:48.461: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace 
"pod-network-test-4182" to be "running and ready" -Jun 12 21:32:48.476: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.622711ms -Jun 12 21:32:48.476: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:32:50.487: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025820452s -Jun 12 21:32:50.487: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:32:52.496: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.034469297s -Jun 12 21:32:52.496: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:32:54.497: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.035977753s -Jun 12 21:32:54.497: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:32:56.489: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.027480164s -Jun 12 21:32:56.489: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:32:58.492: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.030934426s -Jun 12 21:32:58.492: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:33:00.490: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.028969287s -Jun 12 21:33:00.490: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:33:02.549: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.087336769s -Jun 12 21:33:02.549: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:33:04.496: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.03434778s -Jun 12 21:33:04.496: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:33:06.487: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.026128675s -Jun 12 21:33:06.488: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:33:08.488: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.02631396s -Jun 12 21:33:08.488: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:33:10.488: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.026392068s -Jun 12 21:33:10.488: INFO: The phase of Pod netserver-0 is Running (Ready = true) -Jun 12 21:33:10.488: INFO: Pod "netserver-0" satisfied condition "running and ready" -Jun 12 21:33:10.497: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-4182" to be "running and ready" -Jun 12 21:33:10.505: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.615214ms -Jun 12 21:33:10.505: INFO: The phase of Pod netserver-1 is Running (Ready = true) -Jun 12 21:33:10.506: INFO: Pod "netserver-1" satisfied condition "running and ready" -Jun 12 21:33:10.515: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-4182" to be "running and ready" -Jun 12 21:33:10.525: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. 
Elapsed: 9.449169ms -Jun 12 21:33:10.525: INFO: The phase of Pod netserver-2 is Running (Ready = true) -Jun 12 21:33:10.525: INFO: Pod "netserver-2" satisfied condition "running and ready" -STEP: Creating test pods 06/12/23 21:33:10.536 -Jun 12 21:33:10.564: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-4182" to be "running" -Jun 12 21:33:10.574: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.888972ms -Jun 12 21:33:12.587: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022822574s -Jun 12 21:33:14.583: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.017997773s -Jun 12 21:33:14.583: INFO: Pod "test-container-pod" satisfied condition "running" -Jun 12 21:33:14.591: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 -Jun 12 21:33:14.591: INFO: Breadth first check of 172.30.161.118 on host 10.138.75.112... -Jun 12 21:33:14.599: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.43:9080/dial?request=hostname&protocol=http&host=172.30.161.118&port=8083&tries=1'] Namespace:pod-network-test-4182 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:33:14.599: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:33:14.601: INFO: ExecWithOptions: Clientset creation -Jun 12 21:33:14.601: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-4182/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.43%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.30.161.118%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) -Jun 12 21:33:14.776: INFO: Waiting for responses: map[] -Jun 12 21:33:14.776: INFO: reached 172.30.161.118 after 0/1 tries -Jun 12 21:33:14.776: INFO: Breadth first check of 172.30.185.112 on host 10.138.75.116... -Jun 12 21:33:14.785: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.43:9080/dial?request=hostname&protocol=http&host=172.30.185.112&port=8083&tries=1'] Namespace:pod-network-test-4182 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:33:14.785: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:33:14.787: INFO: ExecWithOptions: Clientset creation -Jun 12 21:33:14.787: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-4182/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.43%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.30.185.112%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) -Jun 12 21:33:14.974: INFO: Waiting for responses: map[] -Jun 12 21:33:14.974: INFO: reached 172.30.185.112 after 0/1 tries -Jun 12 21:33:14.974: INFO: Breadth first check of 172.30.224.32 on host 10.138.75.70... 
-Jun 12 21:33:14.983: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.43:9080/dial?request=hostname&protocol=http&host=172.30.224.32&port=8083&tries=1'] Namespace:pod-network-test-4182 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:33:14.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:33:14.985: INFO: ExecWithOptions: Clientset creation -Jun 12 21:33:14.985: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-4182/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.43%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.30.224.32%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) -Jun 12 21:33:15.278: INFO: Waiting for responses: map[] -Jun 12 21:33:15.278: INFO: reached 172.30.224.32 after 0/1 tries -Jun 12 21:33:15.278: INFO: Going to retry 0 out of 3 pods.... -[AfterEach] [sig-network] Networking +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 +[It] should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 +[AfterEach] [sig-node] Kubelet test/e2e/framework/node/init/init.go:32 -Jun 12 21:33:15.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Networking +Jul 27 02:05:09.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Networking +[DeferCleanup (Each)] [sig-node] Kubelet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Networking +[DeferCleanup (Each)] [sig-node] Kubelet tear down framework | framework.go:193 -STEP: Destroying namespace "pod-network-test-4182" for this suite. 06/12/23 21:33:15.297 +STEP: Destroying namespace "kubelet-test-7462" for this suite. 
07/27/23 02:05:09.056 ------------------------------ -• [SLOW TEST] [27.042 seconds] -[sig-network] Networking -test/e2e/common/network/framework.go:23 - Granular Checks: Pods - test/e2e/common/network/networking.go:32 - should function for intra-pod communication: http [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:82 +• [4.144 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:82 + should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Networking + [BeforeEach] [sig-node] Kubelet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:32:48.293 - Jun 12 21:32:48.293: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pod-network-test 06/12/23 21:32:48.295 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:32:48.346 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:32:48.373 - [BeforeEach] [sig-network] Networking + STEP: Creating a kubernetes client 07/27/23 02:05:04.934 + Jul 27 02:05:04.934: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubelet-test 07/27/23 02:05:04.935 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:04.973 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:04.983 + [BeforeEach] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:31 - [It] should function for intra-pod communication: http [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:82 - STEP: Performing setup for networking test in namespace pod-network-test-4182 06/12/23 21:32:48.385 - STEP: creating a selector 06/12/23 21:32:48.386 - STEP: Creating the service pods in kubernetes 06/12/23 21:32:48.386 - Jun 12 21:32:48.386: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable - Jun 12 21:32:48.461: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-4182" to be "running and ready" - Jun 12 21:32:48.476: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.622711ms - Jun 12 21:32:48.476: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:32:50.487: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025820452s - Jun 12 21:32:50.487: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:32:52.496: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.034469297s - Jun 12 21:32:52.496: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:32:54.497: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.035977753s - Jun 12 21:32:54.497: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:32:56.489: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.027480164s - Jun 12 21:32:56.489: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:32:58.492: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.030934426s - Jun 12 21:32:58.492: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:33:00.490: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.028969287s - Jun 12 21:33:00.490: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:33:02.549: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.087336769s - Jun 12 21:33:02.549: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:33:04.496: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.03434778s - Jun 12 21:33:04.496: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:33:06.487: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.026128675s - Jun 12 21:33:06.488: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:33:08.488: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.02631396s - Jun 12 21:33:08.488: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:33:10.488: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.026392068s - Jun 12 21:33:10.488: INFO: The phase of Pod netserver-0 is Running (Ready = true) - Jun 12 21:33:10.488: INFO: Pod "netserver-0" satisfied condition "running and ready" - Jun 12 21:33:10.497: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-4182" to be "running and ready" - Jun 12 21:33:10.505: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.615214ms - Jun 12 21:33:10.505: INFO: The phase of Pod netserver-1 is Running (Ready = true) - Jun 12 21:33:10.506: INFO: Pod "netserver-1" satisfied condition "running and ready" - Jun 12 21:33:10.515: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-4182" to be "running and ready" - Jun 12 21:33:10.525: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 9.449169ms - Jun 12 21:33:10.525: INFO: The phase of Pod netserver-2 is Running (Ready = true) - Jun 12 21:33:10.525: INFO: Pod "netserver-2" satisfied condition "running and ready" - STEP: Creating test pods 06/12/23 21:33:10.536 - Jun 12 21:33:10.564: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-4182" to be "running" - Jun 12 21:33:10.574: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.888972ms - Jun 12 21:33:12.587: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022822574s - Jun 12 21:33:14.583: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.017997773s - Jun 12 21:33:14.583: INFO: Pod "test-container-pod" satisfied condition "running" - Jun 12 21:33:14.591: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 - Jun 12 21:33:14.591: INFO: Breadth first check of 172.30.161.118 on host 10.138.75.112... 
- Jun 12 21:33:14.599: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.43:9080/dial?request=hostname&protocol=http&host=172.30.161.118&port=8083&tries=1'] Namespace:pod-network-test-4182 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:33:14.599: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:33:14.601: INFO: ExecWithOptions: Clientset creation - Jun 12 21:33:14.601: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-4182/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.43%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.30.161.118%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) - Jun 12 21:33:14.776: INFO: Waiting for responses: map[] - Jun 12 21:33:14.776: INFO: reached 172.30.161.118 after 0/1 tries - Jun 12 21:33:14.776: INFO: Breadth first check of 172.30.185.112 on host 10.138.75.116... - Jun 12 21:33:14.785: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.43:9080/dial?request=hostname&protocol=http&host=172.30.185.112&port=8083&tries=1'] Namespace:pod-network-test-4182 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:33:14.785: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:33:14.787: INFO: ExecWithOptions: Clientset creation - Jun 12 21:33:14.787: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-4182/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.43%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.30.185.112%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) - Jun 12 21:33:14.974: INFO: Waiting for responses: map[] - Jun 12 21:33:14.974: INFO: reached 172.30.185.112 after 0/1 tries - Jun 12 21:33:14.974: INFO: Breadth first check of 172.30.224.32 on host 10.138.75.70... - Jun 12 21:33:14.983: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.30.224.43:9080/dial?request=hostname&protocol=http&host=172.30.224.32&port=8083&tries=1'] Namespace:pod-network-test-4182 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:33:14.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:33:14.985: INFO: ExecWithOptions: Clientset creation - Jun 12 21:33:14.985: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-4182/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.30.224.43%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.30.224.32%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) - Jun 12 21:33:15.278: INFO: Waiting for responses: map[] - Jun 12 21:33:15.278: INFO: reached 172.30.224.32 after 0/1 tries - Jun 12 21:33:15.278: INFO: Going to retry 0 out of 3 pods.... 
- [AfterEach] [sig-network] Networking + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 + [It] should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 + [AfterEach] [sig-node] Kubelet test/e2e/framework/node/init/init.go:32 - Jun 12 21:33:15.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Networking + Jul 27 02:05:09.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Networking + [DeferCleanup (Each)] [sig-node] Kubelet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Networking + [DeferCleanup (Each)] [sig-node] Kubelet tear down framework | framework.go:193 - STEP: Destroying namespace "pod-network-test-4182" for this suite. 06/12/23 21:33:15.297 + STEP: Destroying namespace "kubelet-test-7462" for this suite. 07/27/23 02:05:09.056 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSS +S +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:05:09.078 +Jul 27 02:05:09.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename deployment 07/27/23 02:05:09.079 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:09.123 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:09.145 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 +Jul 27 02:05:09.181: INFO: Creating deployment "test-recreate-deployment" +Jul 27 02:05:09.207: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Jul 27 02:05:09.225: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created +Jul 27 02:05:11.245: INFO: Waiting deployment "test-recreate-deployment" to complete +Jul 27 02:05:11.252: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Jul 27 02:05:11.274: INFO: Updating deployment test-recreate-deployment +Jul 27 02:05:11.274: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jul 27 02:05:11.412: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-8212 55ecf00f-fdfe-4aeb-b587-944e0064d752 91058 2 2023-07-27 02:05:09 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00ac76558 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-07-27 02:05:11 +0000 UTC,LastTransitionTime:2023-07-27 02:05:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-cff6dc657" is progressing.,LastUpdateTime:2023-07-27 02:05:11 +0000 UTC,LastTransitionTime:2023-07-27 02:05:09 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Jul 27 02:05:11.421: INFO: New ReplicaSet "test-recreate-deployment-cff6dc657" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-cff6dc657 deployment-8212 b0fc8926-c3af-44ad-ba78-87164bd0f7fb 91057 1 2023-07-27 02:05:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 55ecf00f-fdfe-4aeb-b587-944e0064d752 0xc00ac76a20 0xc00ac76a21}] [] [{kube-controller-manager Update apps/v1 2023-07-27 
02:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55ecf00f-fdfe-4aeb-b587-944e0064d752\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: cff6dc657,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00ac76ab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jul 27 02:05:11.421: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Jul 27 02:05:11.421: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-795566c5cb deployment-8212 84652e54-a546-4cd9-a160-576a7a117cee 91047 2 2023-07-27 02:05:09 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 55ecf00f-fdfe-4aeb-b587-944e0064d752 0xc00ac76907 0xc00ac76908}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55ecf00f-fdfe-4aeb-b587-944e0064d752\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 795566c5cb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00ac769b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jul 27 02:05:11.430: INFO: Pod "test-recreate-deployment-cff6dc657-49xf9" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-cff6dc657-49xf9 test-recreate-deployment-cff6dc657- deployment-8212 12416f19-c567-4429-a47e-c0cd562c250d 91059 0 2023-07-27 02:05:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-recreate-deployment-cff6dc657 b0fc8926-c3af-44ad-ba78-87164bd0f7fb 0xc005177a77 0xc005177a78}] [] [{kube-controller-manager Update v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0fc8926-c3af-44ad-ba78-87164bd0f7fb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2cgff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2cgff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c50,c0,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},Image
PullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-npd6m,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:05:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:05:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:05:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:,StartTime:2023-07-27 02:05:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Jul 27 02:05:11.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-8212" for this suite. 
07/27/23 02:05:11.443 +------------------------------ +• [2.387 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:05:09.078 + Jul 27 02:05:09.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename deployment 07/27/23 02:05:09.079 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:09.123 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:09.145 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 + Jul 27 02:05:09.181: INFO: Creating deployment "test-recreate-deployment" + Jul 27 02:05:09.207: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 + Jul 27 02:05:09.225: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created + Jul 27 02:05:11.245: INFO: Waiting deployment "test-recreate-deployment" to complete + Jul 27 02:05:11.252: INFO: Triggering a new rollout for deployment "test-recreate-deployment" + Jul 27 02:05:11.274: INFO: Updating deployment test-recreate-deployment + Jul 27 02:05:11.274: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jul 27 02:05:11.412: INFO: Deployment "test-recreate-deployment": + &Deployment{ObjectMeta:{test-recreate-deployment deployment-8212 55ecf00f-fdfe-4aeb-b587-944e0064d752 91058 2 2023-07-27 02:05:09 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00ac76558 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-07-27 02:05:11 +0000 UTC,LastTransitionTime:2023-07-27 02:05:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-cff6dc657" is progressing.,LastUpdateTime:2023-07-27 02:05:11 +0000 UTC,LastTransitionTime:2023-07-27 02:05:09 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + + Jul 27 02:05:11.421: INFO: New ReplicaSet "test-recreate-deployment-cff6dc657" of Deployment "test-recreate-deployment": + &ReplicaSet{ObjectMeta:{test-recreate-deployment-cff6dc657 deployment-8212 b0fc8926-c3af-44ad-ba78-87164bd0f7fb 91057 1 2023-07-27 02:05:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 55ecf00f-fdfe-4aeb-b587-944e0064d752 0xc00ac76a20 0xc00ac76a21}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55ecf00f-fdfe-4aeb-b587-944e0064d752\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: cff6dc657,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00ac76ab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jul 27 02:05:11.421: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": + Jul 27 02:05:11.421: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-795566c5cb deployment-8212 84652e54-a546-4cd9-a160-576a7a117cee 91047 2 2023-07-27 02:05:09 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 55ecf00f-fdfe-4aeb-b587-944e0064d752 0xc00ac76907 0xc00ac76908}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55ecf00f-fdfe-4aeb-b587-944e0064d752\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 795566c5cb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00ac769b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jul 27 
02:05:11.430: INFO: Pod "test-recreate-deployment-cff6dc657-49xf9" is not available: + &Pod{ObjectMeta:{test-recreate-deployment-cff6dc657-49xf9 test-recreate-deployment-cff6dc657- deployment-8212 12416f19-c567-4429-a47e-c0cd562c250d 91059 0 2023-07-27 02:05:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-recreate-deployment-cff6dc657 b0fc8926-c3af-44ad-ba78-87164bd0f7fb 0xc005177a77 0xc005177a78}] [] [{kube-controller-manager Update v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0fc8926-c3af-44ad-ba78-87164bd0f7fb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:05:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2cgff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2cgff,ReadOnly:true,MountPath:/var/run/secrets/kuber
netes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c50,c0,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-npd6m,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:05:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:05:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:05:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:,StartTime:2023-07-27 02:05:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + 
Jul 27 02:05:11.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-8212" for this suite. 07/27/23 02:05:11.443 + << End Captured GinkgoWriter Output +------------------------------ +SSS ------------------------------ [sig-network] Services - should be able to change the type from ClusterIP to ExternalName [Conformance] - test/e2e/network/service.go:1515 + should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 [BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:33:15.339 -Jun 12 21:33:15.339: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:33:15.341 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:33:15.442 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:33:15.536 +STEP: Creating a kubernetes client 07/27/23 02:05:11.466 +Jul 27 02:05:11.466: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 02:05:11.467 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:11.508 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:11.516 [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] Services test/e2e/network/service.go:766 -[It] should be able to change the type from ClusterIP to ExternalName [Conformance] - test/e2e/network/service.go:1515 -STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8219 06/12/23 21:33:15.58 -STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 06/12/23 21:33:15.673 -STEP: creating service externalsvc in namespace services-8219 06/12/23 21:33:15.673 -STEP: creating replication controller externalsvc in namespace services-8219 06/12/23 21:33:15.753 -I0612 21:33:15.806246 23 runners.go:193] Created replication controller with name: externalsvc, namespace: services-8219, replica count: 2 -I0612 21:33:18.872685 23 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -STEP: changing the ClusterIP service to type=ExternalName 06/12/23 21:33:18.883 -Jun 12 21:33:18.929: INFO: Creating new exec pod -Jun 12 21:33:18.949: INFO: Waiting up to 5m0s for pod "execpodkphhx" in namespace "services-8219" to be "running" -Jun 12 21:33:18.957: INFO: Pod "execpodkphhx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0391ms -Jun 12 21:33:21.005: INFO: Pod "execpodkphhx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055358053s -Jun 12 21:33:22.970: INFO: Pod "execpodkphhx": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.020974297s -Jun 12 21:33:22.970: INFO: Pod "execpodkphhx" satisfied condition "running" -Jun 12 21:33:22.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8219 exec execpodkphhx -- /bin/sh -x -c nslookup clusterip-service.services-8219.svc.cluster.local' -Jun 12 21:33:23.745: INFO: stderr: "+ nslookup clusterip-service.services-8219.svc.cluster.local\n" -Jun 12 21:33:23.745: INFO: stdout: "Server:\t\t172.21.0.10\nAddress:\t172.21.0.10#53\n\nclusterip-service.services-8219.svc.cluster.local\tcanonical name = externalsvc.services-8219.svc.cluster.local.\nName:\texternalsvc.services-8219.svc.cluster.local\nAddress: 172.21.243.47\n\n" -STEP: deleting ReplicationController externalsvc in namespace services-8219, will wait for the garbage collector to delete the pods 06/12/23 21:33:23.745 -Jun 12 21:33:23.835: INFO: Deleting ReplicationController externalsvc took: 20.389772ms -Jun 12 21:33:23.941: INFO: Terminating ReplicationController externalsvc pods took: 105.604806ms -Jun 12 21:33:26.897: INFO: Cleaning up the ClusterIP to ExternalName test service +[It] should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 +STEP: creating service endpoint-test2 in namespace services-7864 07/27/23 02:05:11.525 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7864 to expose endpoints map[] 07/27/23 02:05:11.579 +Jul 27 02:05:11.619: INFO: successfully validated that service endpoint-test2 in namespace services-7864 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-7864 07/27/23 02:05:11.619 +Jul 27 02:05:11.645: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-7864" to be "running and ready" +Jul 27 02:05:11.653: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.742279ms +Jul 27 02:05:11.653: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:05:13.663: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.017122217s +Jul 27 02:05:13.663: INFO: The phase of Pod pod1 is Running (Ready = true) +Jul 27 02:05:13.663: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7864 to expose endpoints map[pod1:[80]] 07/27/23 02:05:13.671 +Jul 27 02:05:13.703: INFO: successfully validated that service endpoint-test2 in namespace services-7864 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 07/27/23 02:05:13.703 +Jul 27 02:05:13.703: INFO: Creating new exec pod +Jul 27 02:05:13.721: INFO: Waiting up to 5m0s for pod "execpodvqwdx" in namespace "services-7864" to be "running" +Jul 27 02:05:13.728: INFO: Pod "execpodvqwdx": Phase="Pending", Reason="", readiness=false. Elapsed: 7.242584ms +Jul 27 02:05:15.739: INFO: Pod "execpodvqwdx": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.018083057s +Jul 27 02:05:15.739: INFO: Pod "execpodvqwdx" satisfied condition "running" +Jul 27 02:05:16.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' +Jul 27 02:05:16.950: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Jul 27 02:05:16.950: INFO: stdout: "" +Jul 27 02:05:16.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 172.21.247.165 80' +Jul 27 02:05:17.177: INFO: stderr: "+ nc -v -z -w 2 172.21.247.165 80\nConnection to 172.21.247.165 80 port [tcp/http] succeeded!\n" +Jul 27 02:05:17.177: INFO: stdout: "" +STEP: Creating pod pod2 in namespace services-7864 07/27/23 02:05:17.177 +Jul 27 02:05:17.193: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-7864" to be "running and ready" +Jul 27 02:05:17.200: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.484116ms +Jul 27 02:05:17.200: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:05:19.210: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.016947467s +Jul 27 02:05:19.210: INFO: The phase of Pod pod2 is Running (Ready = true) +Jul 27 02:05:19.210: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7864 to expose endpoints map[pod1:[80] pod2:[80]] 07/27/23 02:05:19.222 +Jul 27 02:05:19.300: INFO: successfully validated that service endpoint-test2 in namespace services-7864 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 07/27/23 02:05:19.3 +Jul 27 02:05:20.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' +Jul 27 02:05:20.516: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Jul 27 02:05:20.516: INFO: stdout: "" +Jul 27 02:05:20.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 172.21.247.165 80' +Jul 27 02:05:20.734: INFO: stderr: "+ nc -v -z -w 2 172.21.247.165 80\nConnection to 172.21.247.165 80 port [tcp/http] succeeded!\n" +Jul 27 02:05:20.734: INFO: stdout: "" +STEP: Deleting pod pod1 in namespace services-7864 07/27/23 02:05:20.734 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7864 to expose endpoints map[pod2:[80]] 07/27/23 02:05:20.759 +Jul 27 02:05:20.793: INFO: successfully validated that service endpoint-test2 in namespace services-7864 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 07/27/23 02:05:20.793 +Jul 27 02:05:21.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' +Jul 27 02:05:22.006: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Jul 27 02:05:22.006: INFO: stdout: "" +Jul 27 02:05:22.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx 
-- /bin/sh -x -c nc -v -z -w 2 172.21.247.165 80' +Jul 27 02:05:22.267: INFO: stderr: "+ nc -v -z -w 2 172.21.247.165 80\nConnection to 172.21.247.165 80 port [tcp/http] succeeded!\n" +Jul 27 02:05:22.267: INFO: stdout: "" +STEP: Deleting pod pod2 in namespace services-7864 07/27/23 02:05:22.267 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7864 to expose endpoints map[] 07/27/23 02:05:22.287 +Jul 27 02:05:22.320: INFO: successfully validated that service endpoint-test2 in namespace services-7864 exposes endpoints map[] [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:33:26.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:05:22.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "services-8219" for this suite. 06/12/23 21:33:26.966 +STEP: Destroying namespace "services-7864" for this suite. 07/27/23 02:05:22.393 ------------------------------ -• [SLOW TEST] [11.661 seconds] +• [SLOW TEST] [10.951 seconds] [sig-network] Services test/e2e/network/common/framework.go:23 - should be able to change the type from ClusterIP to ExternalName [Conformance] - test/e2e/network/service.go:1515 + should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:33:15.339 - Jun 12 21:33:15.339: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:33:15.341 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:33:15.442 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:33:15.536 + STEP: Creating a kubernetes client 07/27/23 02:05:11.466 + Jul 27 02:05:11.466: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 02:05:11.467 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:11.508 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:11.516 [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] Services test/e2e/network/service.go:766 - [It] should be able to change the type from ClusterIP to ExternalName [Conformance] - test/e2e/network/service.go:1515 - STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8219 06/12/23 21:33:15.58 - STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 06/12/23 21:33:15.673 - STEP: creating service externalsvc in namespace services-8219 06/12/23 21:33:15.673 - STEP: creating replication controller externalsvc in namespace services-8219 06/12/23 21:33:15.753 - I0612 21:33:15.806246 23 runners.go:193] Created replication controller with name: externalsvc, namespace: services-8219, replica count: 2 - I0612 21:33:18.872685 23 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - STEP: changing the ClusterIP service to 
type=ExternalName 06/12/23 21:33:18.883 - Jun 12 21:33:18.929: INFO: Creating new exec pod - Jun 12 21:33:18.949: INFO: Waiting up to 5m0s for pod "execpodkphhx" in namespace "services-8219" to be "running" - Jun 12 21:33:18.957: INFO: Pod "execpodkphhx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0391ms - Jun 12 21:33:21.005: INFO: Pod "execpodkphhx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055358053s - Jun 12 21:33:22.970: INFO: Pod "execpodkphhx": Phase="Running", Reason="", readiness=true. Elapsed: 4.020974297s - Jun 12 21:33:22.970: INFO: Pod "execpodkphhx" satisfied condition "running" - Jun 12 21:33:22.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-8219 exec execpodkphhx -- /bin/sh -x -c nslookup clusterip-service.services-8219.svc.cluster.local' - Jun 12 21:33:23.745: INFO: stderr: "+ nslookup clusterip-service.services-8219.svc.cluster.local\n" - Jun 12 21:33:23.745: INFO: stdout: "Server:\t\t172.21.0.10\nAddress:\t172.21.0.10#53\n\nclusterip-service.services-8219.svc.cluster.local\tcanonical name = externalsvc.services-8219.svc.cluster.local.\nName:\texternalsvc.services-8219.svc.cluster.local\nAddress: 172.21.243.47\n\n" - STEP: deleting ReplicationController externalsvc in namespace services-8219, will wait for the garbage collector to delete the pods 06/12/23 21:33:23.745 - Jun 12 21:33:23.835: INFO: Deleting ReplicationController externalsvc took: 20.389772ms - Jun 12 21:33:23.941: INFO: Terminating ReplicationController externalsvc pods took: 105.604806ms - Jun 12 21:33:26.897: INFO: Cleaning up the ClusterIP to ExternalName test service + [It] should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 + STEP: creating service endpoint-test2 in namespace services-7864 07/27/23 02:05:11.525 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7864 to expose endpoints map[] 07/27/23 02:05:11.579 + Jul 27 02:05:11.619: INFO: successfully validated that service endpoint-test2 in namespace services-7864 exposes endpoints map[] + STEP: Creating pod pod1 in namespace services-7864 07/27/23 02:05:11.619 + Jul 27 02:05:11.645: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-7864" to be "running and ready" + Jul 27 02:05:11.653: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.742279ms + Jul 27 02:05:11.653: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:05:13.663: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.017122217s + Jul 27 02:05:13.663: INFO: The phase of Pod pod1 is Running (Ready = true) + Jul 27 02:05:13.663: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7864 to expose endpoints map[pod1:[80]] 07/27/23 02:05:13.671 + Jul 27 02:05:13.703: INFO: successfully validated that service endpoint-test2 in namespace services-7864 exposes endpoints map[pod1:[80]] + STEP: Checking if the Service forwards traffic to pod1 07/27/23 02:05:13.703 + Jul 27 02:05:13.703: INFO: Creating new exec pod + Jul 27 02:05:13.721: INFO: Waiting up to 5m0s for pod "execpodvqwdx" in namespace "services-7864" to be "running" + Jul 27 02:05:13.728: INFO: Pod "execpodvqwdx": Phase="Pending", Reason="", readiness=false. Elapsed: 7.242584ms + Jul 27 02:05:15.739: INFO: Pod "execpodvqwdx": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.018083057s + Jul 27 02:05:15.739: INFO: Pod "execpodvqwdx" satisfied condition "running" + Jul 27 02:05:16.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' + Jul 27 02:05:16.950: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Jul 27 02:05:16.950: INFO: stdout: "" + Jul 27 02:05:16.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 172.21.247.165 80' + Jul 27 02:05:17.177: INFO: stderr: "+ nc -v -z -w 2 172.21.247.165 80\nConnection to 172.21.247.165 80 port [tcp/http] succeeded!\n" + Jul 27 02:05:17.177: INFO: stdout: "" + STEP: Creating pod pod2 in namespace services-7864 07/27/23 02:05:17.177 + Jul 27 02:05:17.193: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-7864" to be "running and ready" + Jul 27 02:05:17.200: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.484116ms + Jul 27 02:05:17.200: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:05:19.210: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.016947467s + Jul 27 02:05:19.210: INFO: The phase of Pod pod2 is Running (Ready = true) + Jul 27 02:05:19.210: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7864 to expose endpoints map[pod1:[80] pod2:[80]] 07/27/23 02:05:19.222 + Jul 27 02:05:19.300: INFO: successfully validated that service endpoint-test2 in namespace services-7864 exposes endpoints map[pod1:[80] pod2:[80]] + STEP: Checking if the Service forwards traffic to pod1 and pod2 07/27/23 02:05:19.3 + Jul 27 02:05:20.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' + Jul 27 02:05:20.516: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Jul 27 02:05:20.516: INFO: stdout: "" + Jul 27 02:05:20.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 172.21.247.165 80' + Jul 27 02:05:20.734: INFO: stderr: "+ nc -v -z -w 2 172.21.247.165 80\nConnection to 172.21.247.165 80 port [tcp/http] succeeded!\n" + Jul 27 02:05:20.734: INFO: stdout: "" + STEP: Deleting pod pod1 in namespace services-7864 07/27/23 02:05:20.734 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7864 to expose endpoints map[pod2:[80]] 07/27/23 02:05:20.759 + Jul 27 02:05:20.793: INFO: successfully validated that service endpoint-test2 in namespace services-7864 exposes endpoints map[pod2:[80]] + STEP: Checking if the Service forwards traffic to pod2 07/27/23 02:05:20.793 + Jul 27 02:05:21.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' + Jul 27 02:05:22.006: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Jul 27 02:05:22.006: INFO: stdout: "" + Jul 27 02:05:22.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 
--namespace=services-7864 exec execpodvqwdx -- /bin/sh -x -c nc -v -z -w 2 172.21.247.165 80' + Jul 27 02:05:22.267: INFO: stderr: "+ nc -v -z -w 2 172.21.247.165 80\nConnection to 172.21.247.165 80 port [tcp/http] succeeded!\n" + Jul 27 02:05:22.267: INFO: stdout: "" + STEP: Deleting pod pod2 in namespace services-7864 07/27/23 02:05:22.267 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7864 to expose endpoints map[] 07/27/23 02:05:22.287 + Jul 27 02:05:22.320: INFO: successfully validated that service endpoint-test2 in namespace services-7864 exposes endpoints map[] [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:33:26.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:05:22.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "services-8219" for this suite. 06/12/23 21:33:26.966 + STEP: Destroying namespace "services-7864" for this suite. 07/27/23 02:05:22.393 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Variable Expansion - should allow substituting values in a container's args [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:92 -[BeforeEach] [sig-node] Variable Expansion +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:33:27.001 -Jun 12 21:33:27.001: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename var-expansion 06/12/23 21:33:27.004 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:33:27.065 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:33:27.082 -[BeforeEach] [sig-node] Variable Expansion +STEP: Creating a kubernetes client 07/27/23 02:05:22.418 +Jul 27 02:05:22.418: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:05:22.419 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:22.481 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:22.489 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[It] should allow substituting values in a container's args [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:92 -STEP: Creating a pod to test substitution in container's args 06/12/23 21:33:27.095 -Jun 12 21:33:27.117: INFO: Waiting up to 5m0s for pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343" in namespace "var-expansion-1924" to be "Succeeded or Failed" -Jun 12 21:33:27.127: INFO: Pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343": Phase="Pending", Reason="", readiness=false. Elapsed: 10.191759ms -Jun 12 21:33:29.136: INFO: Pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019340443s -Jun 12 21:33:31.138: INFO: Pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020925086s -Jun 12 21:33:33.138: INFO: Pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021223566s -STEP: Saw pod success 06/12/23 21:33:33.138 -Jun 12 21:33:33.139: INFO: Pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343" satisfied condition "Succeeded or Failed" -Jun 12 21:33:33.148: INFO: Trying to get logs from node 10.138.75.70 pod var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343 container dapi-container: -STEP: delete the pod 06/12/23 21:33:33.2 -Jun 12 21:33:33.220: INFO: Waiting for pod var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343 to disappear -Jun 12 21:33:33.239: INFO: Pod var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343 no longer exists -[AfterEach] [sig-node] Variable Expansion +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1700 +[It] should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 07/27/23 02:05:22.501 +Jul 27 02:05:22.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-6154 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4' +Jul 27 02:05:22.632: INFO: stderr: "" +Jul 27 02:05:22.632: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created 07/27/23 02:05:22.632 +[AfterEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1704 +Jul 27 02:05:22.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-6154 delete pods e2e-test-httpd-pod' +Jul 27 02:05:25.295: INFO: stderr: "" +Jul 27 02:05:25.295: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 21:33:33.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Variable Expansion +Jul 27 02:05:25.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "var-expansion-1924" for this suite. 06/12/23 21:33:33.258 +STEP: Destroying namespace "kubectl-6154" for this suite. 
07/27/23 02:05:25.333 ------------------------------ -• [SLOW TEST] [6.279 seconds] -[sig-node] Variable Expansion -test/e2e/common/node/framework.go:23 - should allow substituting values in a container's args [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:92 +• [2.938 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl run pod + test/e2e/kubectl/kubectl.go:1697 + should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Variable Expansion + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:33:27.001 - Jun 12 21:33:27.001: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename var-expansion 06/12/23 21:33:27.004 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:33:27.065 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:33:27.082 - [BeforeEach] [sig-node] Variable Expansion + STEP: Creating a kubernetes client 07/27/23 02:05:22.418 + Jul 27 02:05:22.418: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:05:22.419 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:22.481 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:22.489 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [It] should allow substituting values in a container's args [NodeConformance] [Conformance] - test/e2e/common/node/expansion.go:92 - STEP: Creating a pod to test substitution in container's args 06/12/23 21:33:27.095 - Jun 12 21:33:27.117: INFO: Waiting up to 5m0s for pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343" in namespace "var-expansion-1924" to be "Succeeded or Failed" - Jun 12 21:33:27.127: INFO: Pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343": Phase="Pending", Reason="", readiness=false. Elapsed: 10.191759ms - Jun 12 21:33:29.136: INFO: Pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019340443s - Jun 12 21:33:31.138: INFO: Pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020925086s - Jun 12 21:33:33.138: INFO: Pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021223566s - STEP: Saw pod success 06/12/23 21:33:33.138 - Jun 12 21:33:33.139: INFO: Pod "var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343" satisfied condition "Succeeded or Failed" - Jun 12 21:33:33.148: INFO: Trying to get logs from node 10.138.75.70 pod var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343 container dapi-container: - STEP: delete the pod 06/12/23 21:33:33.2 - Jun 12 21:33:33.220: INFO: Waiting for pod var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343 to disappear - Jun 12 21:33:33.239: INFO: Pod var-expansion-b2dea802-5e6b-48cf-a82d-aa38c1303343 no longer exists - [AfterEach] [sig-node] Variable Expansion + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1700 + [It] should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 07/27/23 02:05:22.501 + Jul 27 02:05:22.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-6154 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4' + Jul 27 02:05:22.632: INFO: stderr: "" + Jul 27 02:05:22.632: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: verifying the pod e2e-test-httpd-pod was created 07/27/23 02:05:22.632 + [AfterEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1704 + Jul 27 02:05:22.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-6154 delete pods e2e-test-httpd-pod' + Jul 27 02:05:25.295: INFO: stderr: "" + Jul 27 02:05:25.295: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 21:33:33.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Variable Expansion + Jul 27 02:05:25.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "var-expansion-1924" for this suite. 06/12/23 21:33:33.258 + STEP: Destroying namespace "kubectl-6154" for this suite. 
07/27/23 02:05:25.333 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Subpath Atomic writer volumes - should support subpaths with configmap pod [Conformance] - test/e2e/storage/subpath.go:70 -[BeforeEach] [sig-storage] Subpath +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 +[BeforeEach] [sig-apps] ReplicationController set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:33:33.282 -Jun 12 21:33:33.282: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename subpath 06/12/23 21:33:33.284 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:33:33.347 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:33:33.373 -[BeforeEach] [sig-storage] Subpath +STEP: Creating a kubernetes client 07/27/23 02:05:25.356 +Jul 27 02:05:25.357: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replication-controller 07/27/23 02:05:25.358 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:25.404 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:25.414 +[BeforeEach] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] Atomic writer volumes - test/e2e/storage/subpath.go:40 -STEP: Setting up data 06/12/23 21:33:33.389 -[It] should support subpaths with configmap pod [Conformance] - test/e2e/storage/subpath.go:70 -STEP: Creating pod pod-subpath-test-configmap-rxvc 06/12/23 21:33:33.442 -STEP: Creating a pod to test atomic-volume-subpath 06/12/23 21:33:33.442 -Jun 12 21:33:33.475: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rxvc" in namespace "subpath-8761" to be "Succeeded or Failed" -Jun 12 21:33:33.485: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162058ms -Jun 12 21:33:35.515: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040195156s -Jun 12 21:33:37.510: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 4.035247039s -Jun 12 21:33:39.496: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 6.02103424s -Jun 12 21:33:41.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 8.020268305s -Jun 12 21:33:43.496: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 10.020648526s -Jun 12 21:33:45.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 12.020390038s -Jun 12 21:33:47.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 14.020614878s -Jun 12 21:33:49.509: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 16.03410281s -Jun 12 21:33:51.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 18.020461315s -Jun 12 21:33:53.496: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 20.021136434s -Jun 12 21:33:55.494: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.019579737s -Jun 12 21:33:57.510: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=false. Elapsed: 24.035458856s -Jun 12 21:33:59.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=false. Elapsed: 26.02000342s -Jun 12 21:34:01.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.020397823s -STEP: Saw pod success 06/12/23 21:34:01.496 -Jun 12 21:34:01.496: INFO: Pod "pod-subpath-test-configmap-rxvc" satisfied condition "Succeeded or Failed" -Jun 12 21:34:01.505: INFO: Trying to get logs from node 10.138.75.70 pod pod-subpath-test-configmap-rxvc container test-container-subpath-configmap-rxvc: -STEP: delete the pod 06/12/23 21:34:01.526 -Jun 12 21:34:01.549: INFO: Waiting for pod pod-subpath-test-configmap-rxvc to disappear -Jun 12 21:34:01.558: INFO: Pod pod-subpath-test-configmap-rxvc no longer exists -STEP: Deleting pod pod-subpath-test-configmap-rxvc 06/12/23 21:34:01.558 -Jun 12 21:34:01.558: INFO: Deleting pod "pod-subpath-test-configmap-rxvc" in namespace "subpath-8761" -[AfterEach] [sig-storage] Subpath +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 +STEP: Given a ReplicationController is created 07/27/23 02:05:25.424 +W0727 02:05:25.452002 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-release" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-release" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-release" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-release" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: When the matched label of one of its pods change 07/27/23 02:05:25.452 +Jul 27 02:05:25.459: INFO: Pod name pod-release: Found 0 pods out of 1 +Jul 27 02:05:30.471: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released 07/27/23 02:05:30.518 +[AfterEach] [sig-apps] ReplicationController test/e2e/framework/node/init/init.go:32 -Jun 12 21:34:01.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Subpath +Jul 27 02:05:31.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Subpath +[DeferCleanup (Each)] [sig-apps] ReplicationController dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Subpath +[DeferCleanup (Each)] [sig-apps] ReplicationController tear down framework | framework.go:193 -STEP: Destroying namespace "subpath-8761" for this suite. 06/12/23 21:34:01.584 +STEP: Destroying namespace "replication-controller-1240" for this suite. 
07/27/23 02:05:31.552 ------------------------------ -• [SLOW TEST] [28.325 seconds] -[sig-storage] Subpath -test/e2e/storage/utils/framework.go:23 - Atomic writer volumes - test/e2e/storage/subpath.go:36 - should support subpaths with configmap pod [Conformance] - test/e2e/storage/subpath.go:70 +• [SLOW TEST] [6.223 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Subpath + [BeforeEach] [sig-apps] ReplicationController set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:33:33.282 - Jun 12 21:33:33.282: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename subpath 06/12/23 21:33:33.284 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:33:33.347 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:33:33.373 - [BeforeEach] [sig-storage] Subpath + STEP: Creating a kubernetes client 07/27/23 02:05:25.356 + Jul 27 02:05:25.357: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replication-controller 07/27/23 02:05:25.358 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:25.404 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:25.414 + [BeforeEach] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] Atomic writer volumes - test/e2e/storage/subpath.go:40 - STEP: Setting up data 06/12/23 21:33:33.389 - [It] should support subpaths with configmap pod [Conformance] - test/e2e/storage/subpath.go:70 - STEP: Creating pod pod-subpath-test-configmap-rxvc 06/12/23 21:33:33.442 - STEP: Creating a pod to test atomic-volume-subpath 06/12/23 21:33:33.442 - Jun 12 21:33:33.475: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rxvc" in namespace "subpath-8761" to be "Succeeded or Failed" - Jun 12 21:33:33.485: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162058ms - Jun 12 21:33:35.515: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040195156s - Jun 12 21:33:37.510: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 4.035247039s - Jun 12 21:33:39.496: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 6.02103424s - Jun 12 21:33:41.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 8.020268305s - Jun 12 21:33:43.496: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 10.020648526s - Jun 12 21:33:45.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 12.020390038s - Jun 12 21:33:47.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 14.020614878s - Jun 12 21:33:49.509: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 16.03410281s - Jun 12 21:33:51.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 18.020461315s - Jun 12 21:33:53.496: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.021136434s - Jun 12 21:33:55.494: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=true. Elapsed: 22.019579737s - Jun 12 21:33:57.510: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=false. Elapsed: 24.035458856s - Jun 12 21:33:59.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Running", Reason="", readiness=false. Elapsed: 26.02000342s - Jun 12 21:34:01.495: INFO: Pod "pod-subpath-test-configmap-rxvc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.020397823s - STEP: Saw pod success 06/12/23 21:34:01.496 - Jun 12 21:34:01.496: INFO: Pod "pod-subpath-test-configmap-rxvc" satisfied condition "Succeeded or Failed" - Jun 12 21:34:01.505: INFO: Trying to get logs from node 10.138.75.70 pod pod-subpath-test-configmap-rxvc container test-container-subpath-configmap-rxvc: - STEP: delete the pod 06/12/23 21:34:01.526 - Jun 12 21:34:01.549: INFO: Waiting for pod pod-subpath-test-configmap-rxvc to disappear - Jun 12 21:34:01.558: INFO: Pod pod-subpath-test-configmap-rxvc no longer exists - STEP: Deleting pod pod-subpath-test-configmap-rxvc 06/12/23 21:34:01.558 - Jun 12 21:34:01.558: INFO: Deleting pod "pod-subpath-test-configmap-rxvc" in namespace "subpath-8761" - [AfterEach] [sig-storage] Subpath + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 + STEP: Given a ReplicationController is created 07/27/23 02:05:25.424 + W0727 02:05:25.452002 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-release" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-release" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-release" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-release" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: When the matched label of one of its pods change 07/27/23 02:05:25.452 + Jul 27 02:05:25.459: INFO: Pod name pod-release: Found 0 pods out of 1 + Jul 27 02:05:30.471: INFO: Pod name pod-release: Found 1 pods out of 1 + STEP: Then the pod is released 07/27/23 02:05:30.518 + [AfterEach] [sig-apps] ReplicationController test/e2e/framework/node/init/init.go:32 - Jun 12 21:34:01.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Subpath + Jul 27 02:05:31.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Subpath + [DeferCleanup (Each)] [sig-apps] ReplicationController dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Subpath + [DeferCleanup (Each)] [sig-apps] ReplicationController tear down framework | framework.go:193 - STEP: Destroying namespace "subpath-8761" for this suite. 06/12/23 21:34:01.584 + STEP: Destroying namespace "replication-controller-1240" for this suite. 
07/27/23 02:05:31.552 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSS ------------------------------ -[sig-storage] Projected downwardAPI - should update labels on modification [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:130 -[BeforeEach] [sig-storage] Projected downwardAPI +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 +[BeforeEach] [sig-apps] DisruptionController set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:34:01.625 -Jun 12 21:34:01.625: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:34:01.628 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:01.688 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:01.703 -[BeforeEach] [sig-storage] Projected downwardAPI +STEP: Creating a kubernetes client 07/27/23 02:05:31.58 +Jul 27 02:05:31.580: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename disruption 07/27/23 02:05:31.581 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:31.631 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:31.64 +[BeforeEach] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 -[It] should update labels on modification [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:130 -STEP: Creating the pod 06/12/23 21:34:01.716 -Jun 12 21:34:01.743: INFO: Waiting up to 5m0s for pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4" in namespace "projected-2365" to be "running and ready" -Jun 12 21:34:01.751: INFO: Pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522306ms -Jun 12 21:34:01.752: INFO: The phase of Pod labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:34:03.761: INFO: Pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018276373s -Jun 12 21:34:03.761: INFO: The phase of Pod labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:34:05.773: INFO: Pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.030490781s -Jun 12 21:34:05.780: INFO: The phase of Pod labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4 is Running (Ready = true) -Jun 12 21:34:05.781: INFO: Pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4" satisfied condition "running and ready" -Jun 12 21:34:06.375: INFO: Successfully updated pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4" -[AfterEach] [sig-storage] Projected downwardAPI +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 +STEP: Creating a pdb that targets all three pods in a test replica set 07/27/23 02:05:31.649 +STEP: Waiting for the pdb to be processed 07/27/23 02:05:31.665 +STEP: First trying to evict a pod which shouldn't be evictable 07/27/23 02:05:33.735 +STEP: Waiting for all pods to be running 07/27/23 02:05:33.735 +Jul 27 02:05:33.743: INFO: pods: 0 < 3 +STEP: locating a running pod 07/27/23 02:05:35.754 +STEP: Updating the pdb to allow a pod to be evicted 07/27/23 02:05:35.778 +STEP: Waiting for the pdb to be processed 07/27/23 02:05:35.8 +STEP: Trying to evict the same pod we tried earlier which should now be evictable 07/27/23 02:05:37.816 +STEP: Waiting for all pods to be running 07/27/23 02:05:37.816 +STEP: Waiting for the pdb to observed all healthy pods 07/27/23 02:05:37.826 +STEP: Patching the pdb to disallow a pod to be evicted 07/27/23 02:05:37.88 +STEP: Waiting for the pdb to be processed 07/27/23 02:05:37.903 +STEP: Waiting for all pods to be running 07/27/23 02:05:39.922 +STEP: locating a running pod 07/27/23 02:05:39.931 +STEP: Deleting the pdb to allow a pod to be evicted 07/27/23 02:05:39.954 +STEP: Waiting for the pdb to be deleted 07/27/23 02:05:39.966 +STEP: Trying to evict the same pod we tried earlier which should now be evictable 07/27/23 02:05:39.973 +STEP: Waiting for all pods to be running 07/27/23 02:05:39.973 +[AfterEach] [sig-apps] DisruptionController test/e2e/framework/node/init/init.go:32 -Jun 12 21:34:08.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +Jul 27 02:05:40.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-apps] DisruptionController dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-apps] DisruptionController tear down framework | framework.go:193 -STEP: Destroying namespace "projected-2365" for this suite. 06/12/23 21:34:08.477 +STEP: Destroying namespace "disruption-9883" for this suite. 
07/27/23 02:05:40.024 ------------------------------ -• [SLOW TEST] [6.880 seconds] -[sig-storage] Projected downwardAPI -test/e2e/common/storage/framework.go:23 - should update labels on modification [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:130 +• [SLOW TEST] [8.472 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-apps] DisruptionController set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:34:01.625 - Jun 12 21:34:01.625: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:34:01.628 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:01.688 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:01.703 - [BeforeEach] [sig-storage] Projected downwardAPI + STEP: Creating a kubernetes client 07/27/23 02:05:31.58 + Jul 27 02:05:31.580: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename disruption 07/27/23 02:05:31.581 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:31.631 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:31.64 + [BeforeEach] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 - [It] should update labels on modification [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:130 - STEP: Creating the pod 06/12/23 21:34:01.716 - Jun 12 21:34:01.743: INFO: Waiting up to 5m0s for pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4" in namespace "projected-2365" to be "running and ready" - Jun 12 21:34:01.751: INFO: Pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522306ms - Jun 12 21:34:01.752: INFO: The phase of Pod labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:34:03.761: INFO: Pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018276373s - Jun 12 21:34:03.761: INFO: The phase of Pod labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:34:05.773: INFO: Pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.030490781s - Jun 12 21:34:05.780: INFO: The phase of Pod labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4 is Running (Ready = true) - Jun 12 21:34:05.781: INFO: Pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4" satisfied condition "running and ready" - Jun 12 21:34:06.375: INFO: Successfully updated pod "labelsupdate2e90e1a9-dbb3-4798-aa82-fd79a6c409f4" - [AfterEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [It] should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 + STEP: Creating a pdb that targets all three pods in a test replica set 07/27/23 02:05:31.649 + STEP: Waiting for the pdb to be processed 07/27/23 02:05:31.665 + STEP: First trying to evict a pod which shouldn't be evictable 07/27/23 02:05:33.735 + STEP: Waiting for all pods to be running 07/27/23 02:05:33.735 + Jul 27 02:05:33.743: INFO: pods: 0 < 3 + STEP: locating a running pod 07/27/23 02:05:35.754 + STEP: Updating the pdb to allow a pod to be evicted 07/27/23 02:05:35.778 + STEP: Waiting for the pdb to be processed 07/27/23 02:05:35.8 + STEP: Trying to evict the same pod we tried earlier which should now be evictable 07/27/23 02:05:37.816 + STEP: Waiting for all pods to be running 07/27/23 02:05:37.816 + STEP: Waiting for the pdb to observed all healthy pods 07/27/23 02:05:37.826 + STEP: Patching the pdb to disallow a pod to be evicted 07/27/23 02:05:37.88 + STEP: Waiting for the pdb to be processed 07/27/23 02:05:37.903 + STEP: Waiting for all pods to be running 07/27/23 02:05:39.922 + STEP: locating a running pod 07/27/23 02:05:39.931 + STEP: Deleting the pdb to allow a pod to be evicted 07/27/23 02:05:39.954 + STEP: Waiting for the pdb to be deleted 07/27/23 02:05:39.966 + STEP: Trying to evict the same pod we tried earlier which should now be evictable 07/27/23 02:05:39.973 + STEP: Waiting for all pods to be running 07/27/23 02:05:39.973 + [AfterEach] [sig-apps] DisruptionController test/e2e/framework/node/init/init.go:32 - Jun 12 21:34:08.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + Jul 27 02:05:40.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-apps] DisruptionController dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-apps] DisruptionController tear down framework | framework.go:193 - STEP: Destroying namespace "projected-2365" for this suite. 06/12/23 21:34:08.477 + STEP: Destroying namespace "disruption-9883" for this suite. 
07/27/23 02:05:40.024 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should mutate custom resource [Conformance] - test/e2e/apimachinery/webhook.go:291 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-node] Container Runtime blackbox test on terminated container + should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 +[BeforeEach] [sig-node] Container Runtime set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:34:08.543 -Jun 12 21:34:08.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 21:34:08.553 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:08.831 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:08.875 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:05:40.053 +Jul 27 02:05:40.053: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-runtime 07/27/23 02:05:40.055 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:40.126 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:40.138 +[BeforeEach] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 21:34:09.019 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:34:10.278 -STEP: Deploying the webhook pod 06/12/23 21:34:10.31 -STEP: Wait for the deployment to be ready 06/12/23 21:34:10.338 -Jun 12 21:34:10.353: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created -Jun 12 21:34:12.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 34, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 34, 10, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 34, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 34, 10, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 21:34:14.412 -STEP: Verifying the service has paired with the endpoint 06/12/23 21:34:14.444 -Jun 12 21:34:15.446: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should mutate custom resource [Conformance] - test/e2e/apimachinery/webhook.go:291 -Jun 12 21:34:15.457: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Registering the mutating webhook for custom resource 
e2e-test-webhook-6865-crds.webhook.example.com via the AdmissionRegistration API 06/12/23 21:34:15.983 -STEP: Creating a custom resource that should be mutated by the webhook 06/12/23 21:34:16.027 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 +STEP: create the container 07/27/23 02:05:40.149 +STEP: wait for the container to reach Succeeded 07/27/23 02:05:40.175 +STEP: get the container status 07/27/23 02:05:44.227 +STEP: the container should be terminated 07/27/23 02:05:44.235 +STEP: the termination message should be set 07/27/23 02:05:44.235 +Jul 27 02:05:44.235: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container 07/27/23 02:05:44.235 +[AfterEach] [sig-node] Container Runtime test/e2e/framework/node/init/init.go:32 -Jun 12 21:34:18.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 02:05:44.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Container Runtime dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Container Runtime tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-1988" for this suite. 06/12/23 21:34:18.82 -STEP: Destroying namespace "webhook-1988-markers" for this suite. 06/12/23 21:34:18.843 +STEP: Destroying namespace "container-runtime-7558" for this suite. 
07/27/23 02:05:44.278 ------------------------------ -• [SLOW TEST] [10.321 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - should mutate custom resource [Conformance] - test/e2e/apimachinery/webhook.go:291 +• [4.247 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] Container Runtime set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:34:08.543 - Jun 12 21:34:08.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 21:34:08.553 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:08.831 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:08.875 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:05:40.053 + Jul 27 02:05:40.053: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-runtime 07/27/23 02:05:40.055 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:40.126 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:40.138 + [BeforeEach] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 21:34:09.019 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:34:10.278 - STEP: Deploying the webhook pod 06/12/23 21:34:10.31 - STEP: Wait for the deployment to be ready 06/12/23 21:34:10.338 - Jun 12 21:34:10.353: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created - Jun 12 21:34:12.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 34, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 34, 10, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 34, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 34, 10, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 21:34:14.412 - STEP: Verifying the service has paired with the endpoint 06/12/23 21:34:14.444 - Jun 12 21:34:15.446: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should mutate custom resource [Conformance] - test/e2e/apimachinery/webhook.go:291 - Jun 12 21:34:15.457: 
INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6865-crds.webhook.example.com via the AdmissionRegistration API 06/12/23 21:34:15.983 - STEP: Creating a custom resource that should be mutated by the webhook 06/12/23 21:34:16.027 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 + STEP: create the container 07/27/23 02:05:40.149 + STEP: wait for the container to reach Succeeded 07/27/23 02:05:40.175 + STEP: get the container status 07/27/23 02:05:44.227 + STEP: the container should be terminated 07/27/23 02:05:44.235 + STEP: the termination message should be set 07/27/23 02:05:44.235 + Jul 27 02:05:44.235: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- + STEP: delete the container 07/27/23 02:05:44.235 + [AfterEach] [sig-node] Container Runtime test/e2e/framework/node/init/init.go:32 - Jun 12 21:34:18.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 02:05:44.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Container Runtime dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Container Runtime tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-1988" for this suite. 06/12/23 21:34:18.82 - STEP: Destroying namespace "webhook-1988-markers" for this suite. 06/12/23 21:34:18.843 + STEP: Destroying namespace "container-runtime-7558" for this suite. 
07/27/23 02:05:44.278 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSS ------------------------------ -[sig-cli] Kubectl client Proxy server - should support --unix-socket=/path [Conformance] - test/e2e/kubectl/kubectl.go:1812 -[BeforeEach] [sig-cli] Kubectl client +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 +[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:34:18.867 -Jun 12 21:34:18.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 21:34:18.868 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:18.922 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:18.959 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:05:44.301 +Jul 27 02:05:44.301: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename svcaccounts 07/27/23 02:05:44.302 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:44.34 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:44.352 +[BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should support --unix-socket=/path [Conformance] - test/e2e/kubectl/kubectl.go:1812 -STEP: Starting the proxy 06/12/23 21:34:19.02 -Jun 12 21:34:19.025: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-8325 proxy --unix-socket=/tmp/kubectl-proxy-unix581314969/test' -STEP: retrieving proxy /api/ output 06/12/23 21:34:19.221 -[AfterEach] [sig-cli] Kubectl client +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 +Jul 27 02:05:44.377: INFO: Got root ca configmap in namespace "svcaccounts-309" +Jul 27 02:05:44.403: INFO: Deleted root ca configmap in namespace "svcaccounts-309" +STEP: waiting for a new root ca configmap created 07/27/23 02:05:44.904 +Jul 27 02:05:44.923: INFO: Recreated root ca configmap in namespace "svcaccounts-309" +Jul 27 02:05:44.955: INFO: Updated root ca configmap in namespace "svcaccounts-309" +STEP: waiting for the root ca configmap reconciled 07/27/23 02:05:45.455 +Jul 27 02:05:45.470: INFO: Reconciled root ca configmap in namespace "svcaccounts-309" +[AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 -Jun 12 21:34:19.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:05:45.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-8325" for this suite. 06/12/23 21:34:19.24 +STEP: Destroying namespace "svcaccounts-309" for this suite. 
07/27/23 02:05:45.483 ------------------------------ -• [0.421 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Proxy server - test/e2e/kubectl/kubectl.go:1780 - should support --unix-socket=/path [Conformance] - test/e2e/kubectl/kubectl.go:1812 +• [1.203 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:34:18.867 - Jun 12 21:34:18.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 21:34:18.868 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:18.922 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:18.959 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:05:44.301 + Jul 27 02:05:44.301: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename svcaccounts 07/27/23 02:05:44.302 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:44.34 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:44.352 + [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should support --unix-socket=/path [Conformance] - test/e2e/kubectl/kubectl.go:1812 - STEP: Starting the proxy 06/12/23 21:34:19.02 - Jun 12 21:34:19.025: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-8325 proxy --unix-socket=/tmp/kubectl-proxy-unix581314969/test' - STEP: retrieving proxy /api/ output 06/12/23 21:34:19.221 - [AfterEach] [sig-cli] Kubectl client + [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 + Jul 27 02:05:44.377: INFO: Got root ca configmap in namespace "svcaccounts-309" + Jul 27 02:05:44.403: INFO: Deleted root ca configmap in namespace "svcaccounts-309" + STEP: waiting for a new root ca configmap created 07/27/23 02:05:44.904 + Jul 27 02:05:44.923: INFO: Recreated root ca configmap in namespace "svcaccounts-309" + Jul 27 02:05:44.955: INFO: Updated root ca configmap in namespace "svcaccounts-309" + STEP: waiting for the root ca configmap reconciled 07/27/23 02:05:45.455 + Jul 27 02:05:45.470: INFO: Reconciled root ca configmap in namespace "svcaccounts-309" + [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 - Jun 12 21:34:19.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:05:45.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-8325" for this suite. 
06/12/23 21:34:19.24 + STEP: Destroying namespace "svcaccounts-309" for this suite. 07/27/23 02:05:45.483 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSS ------------------------------ -[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition - listing custom resource definition objects works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:85 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:536 +[BeforeEach] [sig-node] Pods set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:34:19.296 -Jun 12 21:34:19.296: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename custom-resource-definition 06/12/23 21:34:19.3 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:19.355 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:19.367 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:05:45.504 +Jul 27 02:05:45.504: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 02:05:45.505 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:45.55 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:45.56 +[BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 -[It] listing custom resource definition objects works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:85 -Jun 12 21:34:19.379: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:536 +Jul 27 02:05:45.574: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: creating the pod 07/27/23 02:05:45.574 +STEP: submitting the pod to kubernetes 07/27/23 02:05:45.574 +Jul 27 02:05:45.614: INFO: Waiting up to 5m0s for pod "pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2" in namespace "pods-8508" to be "running and ready" +Jul 27 02:05:45.623: INFO: Pod "pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.970377ms +Jul 27 02:05:45.623: INFO: The phase of Pod pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:05:47.639: INFO: Pod "pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.025827896s +Jul 27 02:05:47.639: INFO: The phase of Pod pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2 is Running (Ready = true) +Jul 27 02:05:47.639: INFO: Pod "pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2" satisfied condition "running and ready" +[AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 -Jun 12 21:34:25.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +Jul 27 02:05:47.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 -STEP: Destroying namespace "custom-resource-definition-8187" for this suite. 06/12/23 21:34:25.577 +STEP: Destroying namespace "pods-8508" for this suite. 07/27/23 02:05:47.796 ------------------------------ -• [SLOW TEST] [6.306 seconds] -[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - Simple CustomResourceDefinition - test/e2e/apimachinery/custom_resource_definition.go:50 - listing custom resource definition objects works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:85 - +• [2.320 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:536 + Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] Pods set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:34:19.296 - Jun 12 21:34:19.296: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename custom-resource-definition 06/12/23 21:34:19.3 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:19.355 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:19.367 - [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:05:45.504 + Jul 27 02:05:45.504: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 02:05:45.505 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:45.55 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:45.56 + [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 - [It] listing custom resource definition objects works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:85 - Jun 12 21:34:19.379: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:536 + Jul 27 02:05:45.574: INFO: >>> kubeConfig: 
/tmp/kubeconfig-1337358882 + STEP: creating the pod 07/27/23 02:05:45.574 + STEP: submitting the pod to kubernetes 07/27/23 02:05:45.574 + Jul 27 02:05:45.614: INFO: Waiting up to 5m0s for pod "pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2" in namespace "pods-8508" to be "running and ready" + Jul 27 02:05:45.623: INFO: Pod "pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.970377ms + Jul 27 02:05:45.623: INFO: The phase of Pod pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:05:47.639: INFO: Pod "pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2": Phase="Running", Reason="", readiness=true. Elapsed: 2.025827896s + Jul 27 02:05:47.639: INFO: The phase of Pod pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2 is Running (Ready = true) + Jul 27 02:05:47.639: INFO: Pod "pod-exec-websocket-3b4a4a4f-c5fd-4562-8633-eca9a5a356b2" satisfied condition "running and ready" + [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 - Jun 12 21:34:25.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + Jul 27 02:05:47.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 - STEP: Destroying namespace "custom-resource-definition-8187" for this suite. 06/12/23 21:34:25.577 + STEP: Destroying namespace "pods-8508" for this suite. 
07/27/23 02:05:47.796 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSS ------------------------------ -[sig-node] Containers - should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:59 -[BeforeEach] [sig-node] Containers +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 +[BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:34:25.62 -Jun 12 21:34:25.621: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename containers 06/12/23 21:34:25.622 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:25.708 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:25.721 -[BeforeEach] [sig-node] Containers +STEP: Creating a kubernetes client 07/27/23 02:05:47.825 +Jul 27 02:05:47.825: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 02:05:47.826 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:47.866 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:47.875 +[BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:59 -STEP: Creating a pod to test override arguments 06/12/23 21:34:25.734 -Jun 12 21:34:25.783: INFO: Waiting up to 5m0s for pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436" in namespace "containers-8346" to be "Succeeded or Failed" -Jun 12 21:34:25.821: INFO: Pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436": Phase="Pending", Reason="", readiness=false. Elapsed: 37.780082ms -Jun 12 21:34:27.831: INFO: Pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047922348s -Jun 12 21:34:29.831: INFO: Pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047713661s -Jun 12 21:34:31.835: INFO: Pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.051749683s -STEP: Saw pod success 06/12/23 21:34:31.835 -Jun 12 21:34:31.836: INFO: Pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436" satisfied condition "Succeeded or Failed" -Jun 12 21:34:31.845: INFO: Trying to get logs from node 10.138.75.70 pod client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436 container agnhost-container: -STEP: delete the pod 06/12/23 21:34:31.863 -Jun 12 21:34:31.882: INFO: Waiting for pod client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436 to disappear -Jun 12 21:34:31.890: INFO: Pod client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436 no longer exists -[AfterEach] [sig-node] Containers +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 +STEP: Creating configMap with name configmap-test-volume-de6d1105-5b2f-4aa5-a89c-010204a77408 07/27/23 02:05:47.888 +STEP: Creating a pod to test consume configMaps 07/27/23 02:05:47.939 +Jul 27 02:05:47.976: INFO: Waiting up to 5m0s for pod "pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356" in namespace "configmap-1069" to be "Succeeded or Failed" +Jul 27 02:05:47.986: INFO: Pod "pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356": Phase="Pending", Reason="", readiness=false. Elapsed: 9.630064ms +Jul 27 02:05:49.995: INFO: Pod "pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019243947s +Jul 27 02:05:51.996: INFO: Pod "pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020226762s +STEP: Saw pod success 07/27/23 02:05:51.996 +Jul 27 02:05:51.996: INFO: Pod "pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356" satisfied condition "Succeeded or Failed" +Jul 27 02:05:52.007: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356 container configmap-volume-test: +STEP: delete the pod 07/27/23 02:05:52.024 +Jul 27 02:05:52.063: INFO: Waiting for pod pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356 to disappear +Jul 27 02:05:52.071: INFO: Pod pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356 no longer exists +[AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 21:34:31.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Containers +Jul 27 02:05:52.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Containers +[DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Containers +[DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "containers-8346" for this suite. 06/12/23 21:34:31.908 +STEP: Destroying namespace "configmap-1069" for this suite. 
07/27/23 02:05:52.107 ------------------------------ -• [SLOW TEST] [6.314 seconds] -[sig-node] Containers -test/e2e/common/node/framework.go:23 - should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:59 +• [4.309 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Containers + [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:34:25.62 - Jun 12 21:34:25.621: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename containers 06/12/23 21:34:25.622 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:25.708 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:25.721 - [BeforeEach] [sig-node] Containers + STEP: Creating a kubernetes client 07/27/23 02:05:47.825 + Jul 27 02:05:47.825: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 02:05:47.826 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:47.866 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:47.875 + [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:59 - STEP: Creating a pod to test override arguments 06/12/23 21:34:25.734 - Jun 12 21:34:25.783: INFO: Waiting up to 5m0s for pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436" in namespace "containers-8346" to be "Succeeded or Failed" - Jun 12 21:34:25.821: INFO: Pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436": Phase="Pending", Reason="", readiness=false. Elapsed: 37.780082ms - Jun 12 21:34:27.831: INFO: Pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047922348s - Jun 12 21:34:29.831: INFO: Pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047713661s - Jun 12 21:34:31.835: INFO: Pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.051749683s - STEP: Saw pod success 06/12/23 21:34:31.835 - Jun 12 21:34:31.836: INFO: Pod "client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436" satisfied condition "Succeeded or Failed" - Jun 12 21:34:31.845: INFO: Trying to get logs from node 10.138.75.70 pod client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436 container agnhost-container: - STEP: delete the pod 06/12/23 21:34:31.863 - Jun 12 21:34:31.882: INFO: Waiting for pod client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436 to disappear - Jun 12 21:34:31.890: INFO: Pod client-containers-63e72d60-60d0-4fb2-a7cb-1332ab968436 no longer exists - [AfterEach] [sig-node] Containers + [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 + STEP: Creating configMap with name configmap-test-volume-de6d1105-5b2f-4aa5-a89c-010204a77408 07/27/23 02:05:47.888 + STEP: Creating a pod to test consume configMaps 07/27/23 02:05:47.939 + Jul 27 02:05:47.976: INFO: Waiting up to 5m0s for pod "pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356" in namespace "configmap-1069" to be "Succeeded or Failed" + Jul 27 02:05:47.986: INFO: Pod "pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356": Phase="Pending", Reason="", readiness=false. Elapsed: 9.630064ms + Jul 27 02:05:49.995: INFO: Pod "pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019243947s + Jul 27 02:05:51.996: INFO: Pod "pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020226762s + STEP: Saw pod success 07/27/23 02:05:51.996 + Jul 27 02:05:51.996: INFO: Pod "pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356" satisfied condition "Succeeded or Failed" + Jul 27 02:05:52.007: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356 container configmap-volume-test: + STEP: delete the pod 07/27/23 02:05:52.024 + Jul 27 02:05:52.063: INFO: Waiting for pod pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356 to disappear + Jul 27 02:05:52.071: INFO: Pod pod-configmaps-750ac42e-b0ee-4f44-a1fe-8b29c849c356 no longer exists + [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 21:34:31.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Containers + Jul 27 02:05:52.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Containers + [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Containers + [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "containers-8346" for this suite. 06/12/23 21:34:31.908 + STEP: Destroying namespace "configmap-1069" for this suite. 
07/27/23 02:05:52.107 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSS ------------------------------ -[sig-cli] Kubectl client Kubectl logs - should be able to retrieve and filter logs [Conformance] - test/e2e/kubectl/kubectl.go:1592 -[BeforeEach] [sig-cli] Kubectl client +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 +[BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:34:31.942 -Jun 12 21:34:31.942: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 21:34:31.944 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:32.149 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:32.165 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:05:52.134 +Jul 27 02:05:52.134: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 02:05:52.135 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:52.172 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:52.182 +[BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[BeforeEach] Kubectl logs - test/e2e/kubectl/kubectl.go:1572 -STEP: creating an pod 06/12/23 21:34:32.178 -Jun 12 21:34:32.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 run logs-generator --image=registry.k8s.io/e2e-test-images/agnhost:2.43 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' -Jun 12 21:34:32.440: INFO: stderr: "" -Jun 12 21:34:32.440: INFO: stdout: "pod/logs-generator created\n" -[It] should be able to retrieve and filter logs [Conformance] - test/e2e/kubectl/kubectl.go:1592 -STEP: Waiting for log generator to start. 06/12/23 21:34:32.44 -Jun 12 21:34:32.440: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] -Jun 12 21:34:32.440: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1514" to be "running and ready, or succeeded" -Jun 12 21:34:32.476: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 36.118267ms -Jun 12 21:34:32.476: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on '10.138.75.70' to be 'Running' but was 'Pending' -Jun 12 21:34:34.505: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065355951s -Jun 12 21:34:34.506: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on '10.138.75.70' to be 'Running' but was 'Pending' -Jun 12 21:34:36.488: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.047942517s -Jun 12 21:34:36.488: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" -Jun 12 21:34:36.488: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] -STEP: checking for a matching strings 06/12/23 21:34:36.488 -Jun 12 21:34:36.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator' -Jun 12 21:34:36.831: INFO: stderr: "" -Jun 12 21:34:36.831: INFO: stdout: "I0612 21:34:34.441435 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/hghp 379\nI0612 21:34:34.641631 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/n5mn 202\nI0612 21:34:34.842201 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/xql 516\nI0612 21:34:35.042104 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/75s 237\nI0612 21:34:35.242111 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/k5b 440\nI0612 21:34:35.442301 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/h6k 267\nI0612 21:34:35.642096 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/c4n 229\nI0612 21:34:35.851767 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/rr8 217\nI0612 21:34:36.041473 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/hzhl 313\nI0612 21:34:36.247851 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/jh6 301\nI0612 21:34:36.442183 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/4bq 335\nI0612 21:34:36.644076 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/kvc 308\n" -STEP: limiting log lines 06/12/23 21:34:36.831 -Jun 12 21:34:36.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator --tail=1' -Jun 12 21:34:37.147: INFO: stderr: "" -Jun 12 21:34:37.147: INFO: stdout: "I0612 21:34:37.041902 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/qjhf 523\n" -Jun 12 21:34:37.147: INFO: got output "I0612 21:34:37.041902 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/qjhf 523\n" -STEP: limiting log bytes 06/12/23 21:34:37.147 -Jun 12 21:34:37.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator --limit-bytes=1' -Jun 12 21:34:37.363: INFO: stderr: "" -Jun 12 21:34:37.363: INFO: stdout: "I" -Jun 12 21:34:37.363: INFO: got output "I" -STEP: exposing timestamps 06/12/23 21:34:37.363 -Jun 12 21:34:37.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator --tail=1 --timestamps' -Jun 12 21:34:37.808: INFO: stderr: "" -Jun 12 21:34:37.808: INFO: stdout: "2023-06-12T16:34:37.642596145-05:00 I0612 21:34:37.642437 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/cvdx 458\n" -Jun 12 21:34:37.808: INFO: got output "2023-06-12T16:34:37.642596145-05:00 I0612 21:34:37.642437 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/cvdx 458\n" -STEP: restricting to a time range 06/12/23 21:34:37.808 -Jun 12 21:34:40.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator --since=1s' -Jun 12 21:34:40.556: INFO: stderr: "" -Jun 12 21:34:40.556: INFO: stdout: "I0612 21:34:39.642389 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/4v2 356\nI0612 21:34:39.848755 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/qpl 349\nI0612 21:34:40.042266 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/wcc 541\nI0612 21:34:40.241623 1 
logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/q9hs 314\nI0612 21:34:40.442222 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/mpr8 410\n" -Jun 12 21:34:40.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator --since=24h' -Jun 12 21:34:40.785: INFO: stderr: "" -Jun 12 21:34:40.785: INFO: stdout: "I0612 21:34:34.441435 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/hghp 379\nI0612 21:34:34.641631 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/n5mn 202\nI0612 21:34:34.842201 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/xql 516\nI0612 21:34:35.042104 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/75s 237\nI0612 21:34:35.242111 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/k5b 440\nI0612 21:34:35.442301 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/h6k 267\nI0612 21:34:35.642096 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/c4n 229\nI0612 21:34:35.851767 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/rr8 217\nI0612 21:34:36.041473 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/hzhl 313\nI0612 21:34:36.247851 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/jh6 301\nI0612 21:34:36.442183 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/4bq 335\nI0612 21:34:36.644076 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/kvc 308\nI0612 21:34:36.841898 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/24c5 375\nI0612 21:34:37.041902 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/qjhf 523\nI0612 21:34:37.241622 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/268j 290\nI0612 21:34:37.441943 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/s4l 537\nI0612 21:34:37.642437 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/cvdx 458\nI0612 21:34:37.841947 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/9z5 221\nI0612 21:34:38.042520 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/6xc 301\nI0612 21:34:38.242056 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/qfqd 232\nI0612 21:34:38.441490 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/2w66 577\nI0612 21:34:38.651862 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/mbk 543\nI0612 21:34:38.841758 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/xkgt 380\nI0612 21:34:39.042218 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/nmz 383\nI0612 21:34:39.241527 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/4n4s 599\nI0612 21:34:39.441929 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/2nbt 514\nI0612 21:34:39.642389 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/4v2 356\nI0612 21:34:39.848755 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/qpl 349\nI0612 21:34:40.042266 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/wcc 541\nI0612 21:34:40.241623 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/q9hs 314\nI0612 21:34:40.442222 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/mpr8 410\nI0612 21:34:40.641724 1 logs_generator.go:76] 31 GET /api/v1/namespaces/ns/pods/2p7w 218\n" -[AfterEach] Kubectl logs - test/e2e/kubectl/kubectl.go:1577 -Jun 12 21:34:40.785: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 delete pod logs-generator' -Jun 12 21:34:43.655: INFO: stderr: "" -Jun 12 21:34:43.656: INFO: stdout: "pod \"logs-generator\" deleted\n" -[AfterEach] [sig-cli] Kubectl client +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 +STEP: Creating configMap with name configmap-test-volume-a8d383fb-c072-43bc-83ad-b04e783dbf7f 07/27/23 02:05:52.195 +STEP: Creating a pod to test consume configMaps 07/27/23 02:05:52.22 +Jul 27 02:05:52.250: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119" in namespace "configmap-7035" to be "Succeeded or Failed" +Jul 27 02:05:52.261: INFO: Pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119": Phase="Pending", Reason="", readiness=false. Elapsed: 10.693926ms +Jul 27 02:05:54.271: INFO: Pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119": Phase="Running", Reason="", readiness=true. Elapsed: 2.021005401s +Jul 27 02:05:56.271: INFO: Pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119": Phase="Running", Reason="", readiness=false. Elapsed: 4.020563605s +Jul 27 02:05:58.272: INFO: Pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021556574s +STEP: Saw pod success 07/27/23 02:05:58.272 +Jul 27 02:05:58.272: INFO: Pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119" satisfied condition "Succeeded or Failed" +Jul 27 02:05:58.281: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119 container agnhost-container: +STEP: delete the pod 07/27/23 02:05:58.298 +Jul 27 02:05:58.323: INFO: Waiting for pod pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119 to disappear +Jul 27 02:05:58.330: INFO: Pod pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119 no longer exists +[AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 21:34:43.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:05:58.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-1514" for this suite. 06/12/23 21:34:43.72 +STEP: Destroying namespace "configmap-7035" for this suite. 
07/27/23 02:05:58.345 ------------------------------ -• [SLOW TEST] [11.805 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl logs - test/e2e/kubectl/kubectl.go:1569 - should be able to retrieve and filter logs [Conformance] - test/e2e/kubectl/kubectl.go:1592 +• [SLOW TEST] [6.234 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:34:31.942 - Jun 12 21:34:31.942: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 21:34:31.944 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:32.149 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:32.165 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:05:52.134 + Jul 27 02:05:52.134: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 02:05:52.135 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:52.172 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:52.182 + [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [BeforeEach] Kubectl logs - test/e2e/kubectl/kubectl.go:1572 - STEP: creating an pod 06/12/23 21:34:32.178 - Jun 12 21:34:32.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 run logs-generator --image=registry.k8s.io/e2e-test-images/agnhost:2.43 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' - Jun 12 21:34:32.440: INFO: stderr: "" - Jun 12 21:34:32.440: INFO: stdout: "pod/logs-generator created\n" - [It] should be able to retrieve and filter logs [Conformance] - test/e2e/kubectl/kubectl.go:1592 - STEP: Waiting for log generator to start. 06/12/23 21:34:32.44 - Jun 12 21:34:32.440: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] - Jun 12 21:34:32.440: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1514" to be "running and ready, or succeeded" - Jun 12 21:34:32.476: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 36.118267ms - Jun 12 21:34:32.476: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on '10.138.75.70' to be 'Running' but was 'Pending' - Jun 12 21:34:34.505: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065355951s - Jun 12 21:34:34.506: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on '10.138.75.70' to be 'Running' but was 'Pending' - Jun 12 21:34:36.488: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.047942517s - Jun 12 21:34:36.488: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" - Jun 12 21:34:36.488: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] - STEP: checking for a matching strings 06/12/23 21:34:36.488 - Jun 12 21:34:36.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator' - Jun 12 21:34:36.831: INFO: stderr: "" - Jun 12 21:34:36.831: INFO: stdout: "I0612 21:34:34.441435 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/hghp 379\nI0612 21:34:34.641631 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/n5mn 202\nI0612 21:34:34.842201 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/xql 516\nI0612 21:34:35.042104 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/75s 237\nI0612 21:34:35.242111 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/k5b 440\nI0612 21:34:35.442301 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/h6k 267\nI0612 21:34:35.642096 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/c4n 229\nI0612 21:34:35.851767 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/rr8 217\nI0612 21:34:36.041473 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/hzhl 313\nI0612 21:34:36.247851 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/jh6 301\nI0612 21:34:36.442183 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/4bq 335\nI0612 21:34:36.644076 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/kvc 308\n" - STEP: limiting log lines 06/12/23 21:34:36.831 - Jun 12 21:34:36.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator --tail=1' - Jun 12 21:34:37.147: INFO: stderr: "" - Jun 12 21:34:37.147: INFO: stdout: "I0612 21:34:37.041902 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/qjhf 523\n" - Jun 12 21:34:37.147: INFO: got output "I0612 21:34:37.041902 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/qjhf 523\n" - STEP: limiting log bytes 06/12/23 21:34:37.147 - Jun 12 21:34:37.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator --limit-bytes=1' - Jun 12 21:34:37.363: INFO: stderr: "" - Jun 12 21:34:37.363: INFO: stdout: "I" - Jun 12 21:34:37.363: INFO: got output "I" - STEP: exposing timestamps 06/12/23 21:34:37.363 - Jun 12 21:34:37.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator --tail=1 --timestamps' - Jun 12 21:34:37.808: INFO: stderr: "" - Jun 12 21:34:37.808: INFO: stdout: "2023-06-12T16:34:37.642596145-05:00 I0612 21:34:37.642437 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/cvdx 458\n" - Jun 12 21:34:37.808: INFO: got output "2023-06-12T16:34:37.642596145-05:00 I0612 21:34:37.642437 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/cvdx 458\n" - STEP: restricting to a time range 06/12/23 21:34:37.808 - Jun 12 21:34:40.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator --since=1s' - Jun 12 21:34:40.556: INFO: stderr: "" - Jun 12 21:34:40.556: INFO: stdout: "I0612 21:34:39.642389 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/4v2 356\nI0612 21:34:39.848755 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/qpl 349\nI0612 21:34:40.042266 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/wcc 541\nI0612 
21:34:40.241623 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/q9hs 314\nI0612 21:34:40.442222 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/mpr8 410\n" - Jun 12 21:34:40.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 logs logs-generator logs-generator --since=24h' - Jun 12 21:34:40.785: INFO: stderr: "" - Jun 12 21:34:40.785: INFO: stdout: "I0612 21:34:34.441435 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/hghp 379\nI0612 21:34:34.641631 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/n5mn 202\nI0612 21:34:34.842201 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/xql 516\nI0612 21:34:35.042104 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/75s 237\nI0612 21:34:35.242111 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/k5b 440\nI0612 21:34:35.442301 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/h6k 267\nI0612 21:34:35.642096 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/c4n 229\nI0612 21:34:35.851767 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/rr8 217\nI0612 21:34:36.041473 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/hzhl 313\nI0612 21:34:36.247851 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/jh6 301\nI0612 21:34:36.442183 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/4bq 335\nI0612 21:34:36.644076 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/kvc 308\nI0612 21:34:36.841898 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/24c5 375\nI0612 21:34:37.041902 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/qjhf 523\nI0612 21:34:37.241622 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/268j 290\nI0612 21:34:37.441943 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/s4l 537\nI0612 21:34:37.642437 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/cvdx 458\nI0612 21:34:37.841947 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/9z5 221\nI0612 21:34:38.042520 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/6xc 301\nI0612 21:34:38.242056 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/qfqd 232\nI0612 21:34:38.441490 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/2w66 577\nI0612 21:34:38.651862 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/mbk 543\nI0612 21:34:38.841758 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/xkgt 380\nI0612 21:34:39.042218 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/nmz 383\nI0612 21:34:39.241527 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/4n4s 599\nI0612 21:34:39.441929 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/2nbt 514\nI0612 21:34:39.642389 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/4v2 356\nI0612 21:34:39.848755 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/qpl 349\nI0612 21:34:40.042266 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/wcc 541\nI0612 21:34:40.241623 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/q9hs 314\nI0612 21:34:40.442222 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/mpr8 410\nI0612 21:34:40.641724 1 logs_generator.go:76] 31 GET /api/v1/namespaces/ns/pods/2p7w 218\n" - [AfterEach] Kubectl logs - test/e2e/kubectl/kubectl.go:1577 - Jun 12 21:34:40.785: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-1514 delete pod logs-generator' - Jun 12 21:34:43.655: INFO: stderr: "" - Jun 12 21:34:43.656: INFO: stdout: "pod \"logs-generator\" deleted\n" - [AfterEach] [sig-cli] Kubectl client + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 + STEP: Creating configMap with name configmap-test-volume-a8d383fb-c072-43bc-83ad-b04e783dbf7f 07/27/23 02:05:52.195 + STEP: Creating a pod to test consume configMaps 07/27/23 02:05:52.22 + Jul 27 02:05:52.250: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119" in namespace "configmap-7035" to be "Succeeded or Failed" + Jul 27 02:05:52.261: INFO: Pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119": Phase="Pending", Reason="", readiness=false. Elapsed: 10.693926ms + Jul 27 02:05:54.271: INFO: Pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119": Phase="Running", Reason="", readiness=true. Elapsed: 2.021005401s + Jul 27 02:05:56.271: INFO: Pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119": Phase="Running", Reason="", readiness=false. Elapsed: 4.020563605s + Jul 27 02:05:58.272: INFO: Pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021556574s + STEP: Saw pod success 07/27/23 02:05:58.272 + Jul 27 02:05:58.272: INFO: Pod "pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119" satisfied condition "Succeeded or Failed" + Jul 27 02:05:58.281: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119 container agnhost-container: + STEP: delete the pod 07/27/23 02:05:58.298 + Jul 27 02:05:58.323: INFO: Waiting for pod pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119 to disappear + Jul 27 02:05:58.330: INFO: Pod pod-configmaps-a7cf0a44-b593-4ad9-8337-fec4f86ca119 no longer exists + [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 21:34:43.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:05:58.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-1514" for this suite. 06/12/23 21:34:43.72 + STEP: Destroying namespace "configmap-7035" for this suite. 
07/27/23 02:05:58.345 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSS ------------------------------- -[sig-storage] Projected configMap - should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:375 -[BeforeEach] [sig-storage] Projected configMap +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 +[BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:34:43.774 -Jun 12 21:34:43.774: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:34:43.783 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:43.975 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:43.997 -[BeforeEach] [sig-storage] Projected configMap +STEP: Creating a kubernetes client 07/27/23 02:05:58.368 +Jul 27 02:05:58.368: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replicaset 07/27/23 02:05:58.369 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:58.413 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:58.422 +[BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:375 -STEP: Creating configMap with name projected-configmap-test-volume-32b90554-b6f9-4163-af47-b30ddcc6b1c0 06/12/23 21:34:44.064 -STEP: Creating a pod to test consume configMaps 06/12/23 21:34:44.089 -Jun 12 21:34:44.115: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d" in namespace "projected-1421" to be "Succeeded or Failed" -Jun 12 21:34:44.125: INFO: Pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.519721ms -Jun 12 21:34:46.136: INFO: Pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020211358s -Jun 12 21:34:48.134: INFO: Pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018618585s -Jun 12 21:34:50.134: INFO: Pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018638941s -STEP: Saw pod success 06/12/23 21:34:50.134 -Jun 12 21:34:50.135: INFO: Pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d" satisfied condition "Succeeded or Failed" -Jun 12 21:34:50.143: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d container projected-configmap-volume-test: -STEP: delete the pod 06/12/23 21:34:50.159 -Jun 12 21:34:50.183: INFO: Waiting for pod pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d to disappear -Jun 12 21:34:50.196: INFO: Pod pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d no longer exists -[AfterEach] [sig-storage] Projected configMap +[It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 +Jul 27 02:05:58.431: INFO: Creating ReplicaSet my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120 +W0727 02:05:58.448471 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:05:58.459: INFO: Pod name my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120: Found 0 pods out of 1 +Jul 27 02:06:03.468: INFO: Pod name my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120: Found 1 pods out of 1 +Jul 27 02:06:03.468: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120" is running +Jul 27 02:06:03.468: INFO: Waiting up to 5m0s for pod "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z" in namespace "replicaset-3256" to be "running" +Jul 27 02:06:03.477: INFO: Pod "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.501076ms +Jul 27 02:06:03.477: INFO: Pod "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z" satisfied condition "running" +Jul 27 02:06:03.477: INFO: Pod "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:05:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:06:00 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:06:00 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:05:58 +0000 UTC Reason: Message:}]) +Jul 27 02:06:03.477: INFO: Trying to dial the pod +Jul 27 02:06:08.526: INFO: Controller my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120: Got expected result from replica 1 [my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z]: "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 -Jun 12 21:34:50.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected configMap +Jul 27 02:06:08.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 -STEP: Destroying namespace "projected-1421" for this suite. 06/12/23 21:34:50.226 +STEP: Destroying namespace "replicaset-3256" for this suite. 
07/27/23 02:06:08.541 ------------------------------ -• [SLOW TEST] [6.474 seconds] -[sig-storage] Projected configMap -test/e2e/common/storage/framework.go:23 - should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:375 +• [SLOW TEST] [10.196 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected configMap + [BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:34:43.774 - Jun 12 21:34:43.774: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:34:43.783 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:43.975 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:43.997 - [BeforeEach] [sig-storage] Projected configMap + STEP: Creating a kubernetes client 07/27/23 02:05:58.368 + Jul 27 02:05:58.368: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replicaset 07/27/23 02:05:58.369 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:05:58.413 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:05:58.422 + [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:375 - STEP: Creating configMap with name projected-configmap-test-volume-32b90554-b6f9-4163-af47-b30ddcc6b1c0 06/12/23 21:34:44.064 - STEP: Creating a pod to test consume configMaps 06/12/23 21:34:44.089 - Jun 12 21:34:44.115: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d" in namespace "projected-1421" to be "Succeeded or Failed" - Jun 12 21:34:44.125: INFO: Pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.519721ms - Jun 12 21:34:46.136: INFO: Pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020211358s - Jun 12 21:34:48.134: INFO: Pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018618585s - Jun 12 21:34:50.134: INFO: Pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018638941s - STEP: Saw pod success 06/12/23 21:34:50.134 - Jun 12 21:34:50.135: INFO: Pod "pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d" satisfied condition "Succeeded or Failed" - Jun 12 21:34:50.143: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d container projected-configmap-volume-test: - STEP: delete the pod 06/12/23 21:34:50.159 - Jun 12 21:34:50.183: INFO: Waiting for pod pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d to disappear - Jun 12 21:34:50.196: INFO: Pod pod-projected-configmaps-c1d3cca5-c29c-47ea-b5b7-3e21694da96d no longer exists - [AfterEach] [sig-storage] Projected configMap + [It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 + Jul 27 02:05:58.431: INFO: Creating ReplicaSet my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120 + W0727 02:05:58.448471 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:05:58.459: INFO: Pod name my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120: Found 0 pods out of 1 + Jul 27 02:06:03.468: INFO: Pod name my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120: Found 1 pods out of 1 + Jul 27 02:06:03.468: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120" is running + Jul 27 02:06:03.468: INFO: Waiting up to 5m0s for pod "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z" in namespace "replicaset-3256" to be "running" + Jul 27 02:06:03.477: INFO: Pod "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.501076ms + Jul 27 02:06:03.477: INFO: Pod "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z" satisfied condition "running" + Jul 27 02:06:03.477: INFO: Pod "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:05:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:06:00 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:06:00 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:05:58 +0000 UTC Reason: Message:}]) + Jul 27 02:06:03.477: INFO: Trying to dial the pod + Jul 27 02:06:08.526: INFO: Controller my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120: Got expected result from replica 1 [my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z]: "my-hostname-basic-e8c9b516-12e4-499f-a1a2-878422a9d120-pbd6z", 1 of 1 required successes so far + [AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 - Jun 12 21:34:50.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected configMap + Jul 27 02:06:08.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 - STEP: Destroying namespace "projected-1421" for this suite. 06/12/23 21:34:50.226 + STEP: Destroying namespace "replicaset-3256" for this suite. 
07/27/23 02:06:08.541 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Secrets - should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:89 -[BeforeEach] [sig-storage] Secrets +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 +[BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:34:50.252 -Jun 12 21:34:50.253: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 21:34:50.257 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:50.31 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:50.322 -[BeforeEach] [sig-storage] Secrets +STEP: Creating a kubernetes client 07/27/23 02:06:08.565 +Jul 27 02:06:08.565: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:06:08.566 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:06:08.605 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:06:08.615 +[BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:89 -STEP: Creating secret with name secret-test-map-05daca1c-a9b0-4e49-a94d-0dd92e52d573 06/12/23 21:34:50.34 -STEP: Creating a pod to test consume secrets 06/12/23 21:34:50.356 -Jun 12 21:34:50.381: INFO: Waiting up to 5m0s for pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0" in namespace "secrets-7376" to be "Succeeded or Failed" -Jun 12 21:34:50.390: INFO: Pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.150736ms -Jun 12 21:34:52.401: INFO: Pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020266671s -Jun 12 21:34:54.400: INFO: Pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01881364s -Jun 12 21:34:56.404: INFO: Pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.022987546s -STEP: Saw pod success 06/12/23 21:34:56.404 -Jun 12 21:34:56.404: INFO: Pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0" satisfied condition "Succeeded or Failed" -Jun 12 21:34:56.412: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0 container secret-volume-test: -STEP: delete the pod 06/12/23 21:34:56.432 -Jun 12 21:34:56.455: INFO: Waiting for pod pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0 to disappear -Jun 12 21:34:56.463: INFO: Pod pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0 no longer exists -[AfterEach] [sig-storage] Secrets +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 +STEP: Creating secret with name projected-secret-test-1ef58fc1-27ba-40f7-a371-84ad0e94df8f 07/27/23 02:06:08.624 +STEP: Creating a pod to test consume secrets 07/27/23 02:06:08.636 +W0727 02:06:08.664886 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "secret-volume-test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "secret-volume-test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "secret-volume-test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "secret-volume-test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:06:08.665: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421" in namespace "projected-307" to be "Succeeded or Failed" +Jul 27 02:06:08.683: INFO: Pod "pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421": Phase="Pending", Reason="", readiness=false. Elapsed: 18.913353ms +Jul 27 02:06:10.693: INFO: Pod "pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028748742s +Jul 27 02:06:12.695: INFO: Pod "pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030796046s +STEP: Saw pod success 07/27/23 02:06:12.695 +Jul 27 02:06:12.695: INFO: Pod "pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421" satisfied condition "Succeeded or Failed" +Jul 27 02:06:12.704: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421 container secret-volume-test: +STEP: delete the pod 07/27/23 02:06:12.733 +Jul 27 02:06:12.755: INFO: Waiting for pod pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421 to disappear +Jul 27 02:06:12.762: INFO: Pod pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421 no longer exists +[AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 -Jun 12 21:34:56.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Secrets +Jul 27 02:06:12.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-7376" for this suite. 
06/12/23 21:34:56.48 +STEP: Destroying namespace "projected-307" for this suite. 07/27/23 02:06:12.778 ------------------------------ -• [SLOW TEST] [6.252 seconds] -[sig-storage] Secrets +• [4.236 seconds] +[sig-storage] Projected secret test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:89 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Secrets + [BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:34:50.252 - Jun 12 21:34:50.253: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 21:34:50.257 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:50.31 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:50.322 - [BeforeEach] [sig-storage] Secrets + STEP: Creating a kubernetes client 07/27/23 02:06:08.565 + Jul 27 02:06:08.565: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:06:08.566 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:06:08.605 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:06:08.615 + [BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:89 - STEP: Creating secret with name secret-test-map-05daca1c-a9b0-4e49-a94d-0dd92e52d573 06/12/23 21:34:50.34 - STEP: Creating a pod to test consume secrets 06/12/23 21:34:50.356 - Jun 12 21:34:50.381: INFO: Waiting up to 5m0s for pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0" in namespace "secrets-7376" to be "Succeeded or Failed" - Jun 12 21:34:50.390: INFO: Pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.150736ms - Jun 12 21:34:52.401: INFO: Pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020266671s - Jun 12 21:34:54.400: INFO: Pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01881364s - Jun 12 21:34:56.404: INFO: Pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.022987546s - STEP: Saw pod success 06/12/23 21:34:56.404 - Jun 12 21:34:56.404: INFO: Pod "pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0" satisfied condition "Succeeded or Failed" - Jun 12 21:34:56.412: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0 container secret-volume-test: - STEP: delete the pod 06/12/23 21:34:56.432 - Jun 12 21:34:56.455: INFO: Waiting for pod pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0 to disappear - Jun 12 21:34:56.463: INFO: Pod pod-secrets-1a95f398-0650-4d51-ac71-ba15e5b177b0 no longer exists - [AfterEach] [sig-storage] Secrets + [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 + STEP: Creating secret with name projected-secret-test-1ef58fc1-27ba-40f7-a371-84ad0e94df8f 07/27/23 02:06:08.624 + STEP: Creating a pod to test consume secrets 07/27/23 02:06:08.636 + W0727 02:06:08.664886 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "secret-volume-test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "secret-volume-test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "secret-volume-test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "secret-volume-test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:06:08.665: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421" in namespace "projected-307" to be "Succeeded or Failed" + Jul 27 02:06:08.683: INFO: Pod "pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421": Phase="Pending", Reason="", readiness=false. Elapsed: 18.913353ms + Jul 27 02:06:10.693: INFO: Pod "pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028748742s + Jul 27 02:06:12.695: INFO: Pod "pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030796046s + STEP: Saw pod success 07/27/23 02:06:12.695 + Jul 27 02:06:12.695: INFO: Pod "pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421" satisfied condition "Succeeded or Failed" + Jul 27 02:06:12.704: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421 container secret-volume-test: + STEP: delete the pod 07/27/23 02:06:12.733 + Jul 27 02:06:12.755: INFO: Waiting for pod pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421 to disappear + Jul 27 02:06:12.762: INFO: Pod pod-projected-secrets-d54431d8-0377-4dcd-a80b-d392f2294421 no longer exists + [AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 - Jun 12 21:34:56.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Secrets + Jul 27 02:06:12.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-7376" for this suite. 
06/12/23 21:34:56.48 + STEP: Destroying namespace "projected-307" for this suite. 07/27/23 02:06:12.778 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - patching/updating a validating webhook should work [Conformance] - test/e2e/apimachinery/webhook.go:413 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:204 +[BeforeEach] [sig-node] Pods set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:34:56.515 -Jun 12 21:34:56.515: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 21:34:56.52 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:56.573 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:56.587 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:06:12.801 +Jul 27 02:06:12.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 02:06:12.802 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:06:12.879 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:06:12.894 +[BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 21:34:56.661 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:34:57.222 -STEP: Deploying the webhook pod 06/12/23 21:34:57.255 -STEP: Wait for the deployment to be ready 06/12/23 21:34:57.281 -Jun 12 21:34:57.298: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 21:34:59.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 34, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 34, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 34, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 34, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 21:35:01.335 -STEP: Verifying the service has paired with the endpoint 06/12/23 21:35:01.387 -Jun 12 21:35:02.392: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] patching/updating a validating webhook should work [Conformance] - test/e2e/apimachinery/webhook.go:413 -STEP: Creating a validating webhook configuration 06/12/23 21:35:02.428 -STEP: Creating a configMap that does not comply to the validation webhook rules 06/12/23 21:35:02.519 -STEP: Updating a validating webhook configuration's rules to 
not include the create operation 06/12/23 21:35:02.553 -STEP: Creating a configMap that does not comply to the validation webhook rules 06/12/23 21:35:02.582 -STEP: Patching a validating webhook configuration's rules to include the create operation 06/12/23 21:35:02.616 -STEP: Creating a configMap that does not comply to the validation webhook rules 06/12/23 21:35:02.635 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:204 +STEP: creating pod 07/27/23 02:06:12.911 +Jul 27 02:06:12.968: INFO: Waiting up to 5m0s for pod "pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850" in namespace "pods-1708" to be "running and ready" +Jul 27 02:06:12.980: INFO: Pod "pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850": Phase="Pending", Reason="", readiness=false. Elapsed: 11.964418ms +Jul 27 02:06:12.980: INFO: The phase of Pod pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:06:14.989: INFO: Pod "pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850": Phase="Running", Reason="", readiness=true. Elapsed: 2.020400515s +Jul 27 02:06:14.989: INFO: The phase of Pod pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850 is Running (Ready = true) +Jul 27 02:06:14.989: INFO: Pod "pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850" satisfied condition "running and ready" +Jul 27 02:06:15.004: INFO: Pod pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850 has hostIP: 10.245.128.19 +[AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 -Jun 12 21:35:02.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 02:06:15.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-3603" for this suite. 06/12/23 21:35:02.813 -STEP: Destroying namespace "webhook-3603-markers" for this suite. 06/12/23 21:35:02.842 +STEP: Destroying namespace "pods-1708" for this suite. 
07/27/23 02:06:15.014 ------------------------------ -• [SLOW TEST] [6.353 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - patching/updating a validating webhook should work [Conformance] - test/e2e/apimachinery/webhook.go:413 +• [2.235 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:204 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] Pods set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:34:56.515 - Jun 12 21:34:56.515: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 21:34:56.52 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:34:56.573 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:34:56.587 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:06:12.801 + Jul 27 02:06:12.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 02:06:12.802 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:06:12.879 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:06:12.894 + [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 21:34:56.661 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:34:57.222 - STEP: Deploying the webhook pod 06/12/23 21:34:57.255 - STEP: Wait for the deployment to be ready 06/12/23 21:34:57.281 - Jun 12 21:34:57.298: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 21:34:59.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 34, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 34, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 34, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 34, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 21:35:01.335 - STEP: Verifying the service has paired with the endpoint 06/12/23 21:35:01.387 - Jun 12 21:35:02.392: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] patching/updating a validating webhook should work [Conformance] - test/e2e/apimachinery/webhook.go:413 - STEP: Creating a validating webhook configuration 06/12/23 21:35:02.428 - STEP: Creating a configMap that does not comply to the validation webhook rules 06/12/23 21:35:02.519 - STEP: Updating a validating webhook configuration's rules to not 
include the create operation 06/12/23 21:35:02.553 - STEP: Creating a configMap that does not comply to the validation webhook rules 06/12/23 21:35:02.582 - STEP: Patching a validating webhook configuration's rules to include the create operation 06/12/23 21:35:02.616 - STEP: Creating a configMap that does not comply to the validation webhook rules 06/12/23 21:35:02.635 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:204 + STEP: creating pod 07/27/23 02:06:12.911 + Jul 27 02:06:12.968: INFO: Waiting up to 5m0s for pod "pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850" in namespace "pods-1708" to be "running and ready" + Jul 27 02:06:12.980: INFO: Pod "pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850": Phase="Pending", Reason="", readiness=false. Elapsed: 11.964418ms + Jul 27 02:06:12.980: INFO: The phase of Pod pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:06:14.989: INFO: Pod "pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850": Phase="Running", Reason="", readiness=true. Elapsed: 2.020400515s + Jul 27 02:06:14.989: INFO: The phase of Pod pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850 is Running (Ready = true) + Jul 27 02:06:14.989: INFO: Pod "pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850" satisfied condition "running and ready" + Jul 27 02:06:15.004: INFO: Pod pod-hostip-dab17fee-224d-4995-8a7a-1bc3d1a9f850 has hostIP: 10.245.128.19 + [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 - Jun 12 21:35:02.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 02:06:15.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-3603" for this suite. 06/12/23 21:35:02.813 - STEP: Destroying namespace "webhook-3603-markers" for this suite. 06/12/23 21:35:02.842 + STEP: Destroying namespace "pods-1708" for this suite. 
07/27/23 02:06:15.014 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSS ------------------------------ -[sig-network] Services - should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] - test/e2e/network/service.go:2250 -[BeforeEach] [sig-network] Services +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:35:02.873 -Jun 12 21:35:02.874: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:35:02.878 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:02.952 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:02.971 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 02:06:15.037 +Jul 27 02:06:15.037: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:06:15.038 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:06:15.092 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:06:15.102 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] - test/e2e/network/service.go:2250 -STEP: creating service in namespace services-5647 06/12/23 21:35:02.983 -STEP: creating service affinity-nodeport-transition in namespace services-5647 06/12/23 21:35:02.984 -STEP: creating replication controller affinity-nodeport-transition in namespace services-5647 06/12/23 21:35:03.044 -I0612 21:35:03.065273 23 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-5647, replica count: 3 -I0612 21:35:06.116164 23 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:35:09.116493 23 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -Jun 12 21:35:09.160: INFO: Creating new exec pod -Jun 12 21:35:09.180: INFO: Waiting up to 5m0s for pod "execpod-affinityfcmg4" in namespace "services-5647" to be "running" -Jun 12 21:35:09.192: INFO: Pod "execpod-affinityfcmg4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.738565ms -Jun 12 21:35:11.201: INFO: Pod "execpod-affinityfcmg4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020815588s -Jun 12 21:35:13.225: INFO: Pod "execpod-affinityfcmg4": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.045529513s -Jun 12 21:35:13.225: INFO: Pod "execpod-affinityfcmg4" satisfied condition "running" -Jun 12 21:35:14.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport-transition 80' -Jun 12 21:35:14.758: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" -Jun 12 21:35:14.759: INFO: stdout: "" -Jun 12 21:35:14.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c nc -v -z -w 2 172.21.118.26 80' -Jun 12 21:35:15.371: INFO: stderr: "+ nc -v -z -w 2 172.21.118.26 80\nConnection to 172.21.118.26 80 port [tcp/http] succeeded!\n" -Jun 12 21:35:15.371: INFO: stdout: "" -Jun 12 21:35:15.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c nc -v -z -w 2 10.138.75.112 30035' -Jun 12 21:35:15.756: INFO: stderr: "+ nc -v -z -w 2 10.138.75.112 30035\nConnection to 10.138.75.112 30035 port [tcp/*] succeeded!\n" -Jun 12 21:35:15.756: INFO: stdout: "" -Jun 12 21:35:15.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c nc -v -z -w 2 10.138.75.70 30035' -Jun 12 21:35:16.594: INFO: stderr: "+ nc -v -z -w 2 10.138.75.70 30035\nConnection to 10.138.75.70 30035 port [tcp/*] succeeded!\n" -Jun 12 21:35:16.594: INFO: stdout: "" -Jun 12 21:35:16.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.138.75.112:30035/ ; done' -Jun 12 21:35:18.690: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n" -Jun 12 21:35:18.690: INFO: stdout: 
"\naffinity-nodeport-transition-zqpqw\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-zqpqw\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-zqpqw\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-zqpqw\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-zqpqw\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-zqpqw" -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 -Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw -Jun 12 21:35:18.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.138.75.112:30035/ ; done' -Jun 12 21:35:20.491: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n" -Jun 12 21:35:20.491: INFO: stdout: 
"\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh" -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh -Jun 12 21:35:20.491: INFO: Cleaning up the exec pod -STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5647, will wait for the garbage collector to delete the pods 06/12/23 21:35:20.514 -Jun 12 21:35:20.600: INFO: Deleting ReplicationController affinity-nodeport-transition took: 21.515885ms -Jun 12 21:35:20.801: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 201.314017ms -[AfterEach] [sig-network] Services +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1734 +[It] should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 07/27/23 02:06:15.111 +Jul 27 02:06:15.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9221 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Jul 27 02:06:15.221: INFO: stderr: "" +Jul 27 02:06:15.221: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running 07/27/23 02:06:15.221 +STEP: verifying the pod e2e-test-httpd-pod was created 07/27/23 02:06:20.272 +Jul 27 02:06:20.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9221 get pod e2e-test-httpd-pod -o json' 
+Jul 27 02:06:20.349: INFO: stderr: "" +Jul 27 02:06:20.349: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"900f6036c97f59c0e58b6db099a5466a3e111a9928ceef31bc60f2bf4768dbef\",\n \"cni.projectcalico.org/podIP\": \"172.17.225.14/32\",\n \"cni.projectcalico.org/podIPs\": \"172.17.225.14/32\",\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"k8s-pod-network\\\",\\n \\\"ips\\\": [\\n \\\"172.17.225.14\\\"\\n ],\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"openshift.io/scc\": \"anyuid\"\n },\n \"creationTimestamp\": \"2023-07-27T02:06:15Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9221\",\n \"resourceVersion\": \"92589\",\n \"uid\": \"87b8512f-25d2-4bc8-9627-2636cc67856b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"MKNOD\"\n ]\n }\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-g69n6\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"imagePullSecrets\": [\n {\n \"name\": \"default-dockercfg-vvmlw\"\n }\n ],\n \"nodeName\": \"10.245.128.19\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {\n \"seLinuxOptions\": {\n \"level\": \"s0:c51,c15\"\n }\n },\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-g69n6\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"service-ca.crt\",\n \"path\": \"service-ca.crt\"\n }\n ],\n \"name\": \"openshift-service-ca.crt\"\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-07-27T02:06:15Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-07-27T02:06:17Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-07-27T02:06:17Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-07-27T02:06:15Z\",\n \"status\": \"True\",\n 
\"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"cri-o://9274de8b39093407ee40f544df9143ac73c1258f30c7799bc610860974ea0d6f\",\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imageID\": \"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-07-27T02:06:16Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.245.128.19\",\n \"phase\": \"Running\",\n \"podIP\": \"172.17.225.14\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.225.14\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-07-27T02:06:15Z\"\n }\n}\n" +STEP: replace the image in the pod 07/27/23 02:06:20.349 +Jul 27 02:06:20.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9221 replace -f -' +Jul 27 02:06:20.864: INFO: stderr: "" +Jul 27 02:06:20.864: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/busybox:1.29-4 07/27/23 02:06:20.864 +[AfterEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1738 +Jul 27 02:06:20.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9221 delete pods e2e-test-httpd-pod' +Jul 27 02:06:22.533: INFO: stderr: "" +Jul 27 02:06:22.533: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 21:35:24.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 02:06:22.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "services-5647" for this suite. 06/12/23 21:35:24.893 +STEP: Destroying namespace "kubectl-9221" for this suite. 
07/27/23 02:06:22.55 ------------------------------ -• [SLOW TEST] [22.046 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] - test/e2e/network/service.go:2250 +• [SLOW TEST] [7.566 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl replace + test/e2e/kubectl/kubectl.go:1731 + should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:35:02.873 - Jun 12 21:35:02.874: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:35:02.878 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:02.952 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:02.971 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 02:06:15.037 + Jul 27 02:06:15.037: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:06:15.038 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:06:15.092 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:06:15.102 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] - test/e2e/network/service.go:2250 - STEP: creating service in namespace services-5647 06/12/23 21:35:02.983 - STEP: creating service affinity-nodeport-transition in namespace services-5647 06/12/23 21:35:02.984 - STEP: creating replication controller affinity-nodeport-transition in namespace services-5647 06/12/23 21:35:03.044 - I0612 21:35:03.065273 23 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-5647, replica count: 3 - I0612 21:35:06.116164 23 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:35:09.116493 23 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - Jun 12 21:35:09.160: INFO: Creating new exec pod - Jun 12 21:35:09.180: INFO: Waiting up to 5m0s for pod "execpod-affinityfcmg4" in namespace "services-5647" to be "running" - Jun 12 21:35:09.192: INFO: Pod "execpod-affinityfcmg4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.738565ms - Jun 12 21:35:11.201: INFO: Pod "execpod-affinityfcmg4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020815588s - Jun 12 21:35:13.225: INFO: Pod "execpod-affinityfcmg4": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.045529513s - Jun 12 21:35:13.225: INFO: Pod "execpod-affinityfcmg4" satisfied condition "running" - Jun 12 21:35:14.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport-transition 80' - Jun 12 21:35:14.758: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" - Jun 12 21:35:14.759: INFO: stdout: "" - Jun 12 21:35:14.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c nc -v -z -w 2 172.21.118.26 80' - Jun 12 21:35:15.371: INFO: stderr: "+ nc -v -z -w 2 172.21.118.26 80\nConnection to 172.21.118.26 80 port [tcp/http] succeeded!\n" - Jun 12 21:35:15.371: INFO: stdout: "" - Jun 12 21:35:15.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c nc -v -z -w 2 10.138.75.112 30035' - Jun 12 21:35:15.756: INFO: stderr: "+ nc -v -z -w 2 10.138.75.112 30035\nConnection to 10.138.75.112 30035 port [tcp/*] succeeded!\n" - Jun 12 21:35:15.756: INFO: stdout: "" - Jun 12 21:35:15.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c nc -v -z -w 2 10.138.75.70 30035' - Jun 12 21:35:16.594: INFO: stderr: "+ nc -v -z -w 2 10.138.75.70 30035\nConnection to 10.138.75.70 30035 port [tcp/*] succeeded!\n" - Jun 12 21:35:16.594: INFO: stdout: "" - Jun 12 21:35:16.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.138.75.112:30035/ ; done' - Jun 12 21:35:18.690: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n" - Jun 12 21:35:18.690: INFO: stdout: 
"\naffinity-nodeport-transition-zqpqw\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-zqpqw\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-zqpqw\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-zqpqw\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-zqpqw\naffinity-nodeport-transition-ssc66\naffinity-nodeport-transition-zqpqw" - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-ssc66 - Jun 12 21:35:18.690: INFO: Received response from host: affinity-nodeport-transition-zqpqw - Jun 12 21:35:18.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5647 exec execpod-affinityfcmg4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.138.75.112:30035/ ; done' - Jun 12 21:35:20.491: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.75.112:30035/\n" - Jun 12 21:35:20.491: INFO: stdout: 
"\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh\naffinity-nodeport-transition-f9brh" - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Received response from host: affinity-nodeport-transition-f9brh - Jun 12 21:35:20.491: INFO: Cleaning up the exec pod - STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5647, will wait for the garbage collector to delete the pods 06/12/23 21:35:20.514 - Jun 12 21:35:20.600: INFO: Deleting ReplicationController affinity-nodeport-transition took: 21.515885ms - Jun 12 21:35:20.801: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 201.314017ms - [AfterEach] [sig-network] Services + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1734 + [It] should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 07/27/23 02:06:15.111 + Jul 27 02:06:15.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9221 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' + Jul 27 02:06:15.221: INFO: stderr: "" + Jul 27 02:06:15.221: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: verifying the pod e2e-test-httpd-pod is running 07/27/23 02:06:15.221 + STEP: verifying the pod e2e-test-httpd-pod was created 07/27/23 02:06:20.272 + Jul 27 02:06:20.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9221 get 
pod e2e-test-httpd-pod -o json' + Jul 27 02:06:20.349: INFO: stderr: "" + Jul 27 02:06:20.349: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"900f6036c97f59c0e58b6db099a5466a3e111a9928ceef31bc60f2bf4768dbef\",\n \"cni.projectcalico.org/podIP\": \"172.17.225.14/32\",\n \"cni.projectcalico.org/podIPs\": \"172.17.225.14/32\",\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"k8s-pod-network\\\",\\n \\\"ips\\\": [\\n \\\"172.17.225.14\\\"\\n ],\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"openshift.io/scc\": \"anyuid\"\n },\n \"creationTimestamp\": \"2023-07-27T02:06:15Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9221\",\n \"resourceVersion\": \"92589\",\n \"uid\": \"87b8512f-25d2-4bc8-9627-2636cc67856b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"MKNOD\"\n ]\n }\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-g69n6\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"imagePullSecrets\": [\n {\n \"name\": \"default-dockercfg-vvmlw\"\n }\n ],\n \"nodeName\": \"10.245.128.19\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {\n \"seLinuxOptions\": {\n \"level\": \"s0:c51,c15\"\n }\n },\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-g69n6\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"service-ca.crt\",\n \"path\": \"service-ca.crt\"\n }\n ],\n \"name\": \"openshift-service-ca.crt\"\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-07-27T02:06:15Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-07-27T02:06:17Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-07-27T02:06:17Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": 
\"2023-07-27T02:06:15Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"cri-o://9274de8b39093407ee40f544df9143ac73c1258f30c7799bc610860974ea0d6f\",\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imageID\": \"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-07-27T02:06:16Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.245.128.19\",\n \"phase\": \"Running\",\n \"podIP\": \"172.17.225.14\",\n \"podIPs\": [\n {\n \"ip\": \"172.17.225.14\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-07-27T02:06:15Z\"\n }\n}\n" + STEP: replace the image in the pod 07/27/23 02:06:20.349 + Jul 27 02:06:20.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9221 replace -f -' + Jul 27 02:06:20.864: INFO: stderr: "" + Jul 27 02:06:20.864: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" + STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/busybox:1.29-4 07/27/23 02:06:20.864 + [AfterEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1738 + Jul 27 02:06:20.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9221 delete pods e2e-test-httpd-pod' + Jul 27 02:06:22.533: INFO: stderr: "" + Jul 27 02:06:22.533: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 21:35:24.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 02:06:22.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "services-5647" for this suite. 06/12/23 21:35:24.893 + STEP: Destroying namespace "kubectl-9221" for this suite. 
07/27/23 02:06:22.55 << End Captured GinkgoWriter Output ------------------------------ -S ------------------------------- -[sig-auth] ServiceAccounts - should guarantee kube-root-ca.crt exist in any namespace [Conformance] - test/e2e/auth/service_accounts.go:742 -[BeforeEach] [sig-auth] ServiceAccounts +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 +[BeforeEach] [sig-node] Probing container set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:35:24.921 -Jun 12 21:35:24.922: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename svcaccounts 06/12/23 21:35:24.925 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:25.007 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:25.018 -[BeforeEach] [sig-auth] ServiceAccounts +STEP: Creating a kubernetes client 07/27/23 02:06:22.603 +Jul 27 02:06:22.603: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-probe 07/27/23 02:06:22.604 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:06:22.694 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:06:22.702 +[BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 -[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] - test/e2e/auth/service_accounts.go:742 -Jun 12 21:35:25.044: INFO: Got root ca configmap in namespace "svcaccounts-8121" -Jun 12 21:35:25.068: INFO: Deleted root ca configmap in namespace "svcaccounts-8121" -STEP: waiting for a new root ca configmap created 06/12/23 21:35:25.569 -Jun 12 21:35:25.598: INFO: Recreated root ca configmap in namespace "svcaccounts-8121" -Jun 12 21:35:25.614: INFO: Updated root ca configmap in namespace "svcaccounts-8121" -STEP: waiting for the root ca configmap reconciled 06/12/23 21:35:26.119 -Jun 12 21:35:26.167: INFO: Reconciled root ca configmap in namespace "svcaccounts-8121" -[AfterEach] [sig-auth] ServiceAccounts +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 +STEP: Creating pod busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4 in namespace container-probe-7649 07/27/23 02:06:22.71 +W0727 02:06:22.734475 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "busybox" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "busybox" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "busybox" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "busybox" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:06:22.734: INFO: Waiting up to 5m0s for pod "busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4" in namespace "container-probe-7649" to be "not pending" +Jul 27 02:06:22.751: INFO: Pod "busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.045099ms +Jul 27 02:06:24.760: INFO: Pod "busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.026070894s +Jul 27 02:06:24.760: INFO: Pod "busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4" satisfied condition "not pending" +Jul 27 02:06:24.760: INFO: Started pod busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4 in namespace container-probe-7649 +STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 02:06:24.76 +Jul 27 02:06:24.768: INFO: Initial restart count of pod busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4 is 0 +Jul 27 02:07:15.153: INFO: Restart count of pod container-probe-7649/busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4 is now 1 (50.385030777s elapsed) +STEP: deleting the pod 07/27/23 02:07:15.153 +[AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 -Jun 12 21:35:26.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +Jul 27 02:07:15.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +[DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +[DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 -STEP: Destroying namespace "svcaccounts-8121" for this suite. 06/12/23 21:35:26.186 +STEP: Destroying namespace "container-probe-7649" for this suite. 07/27/23 02:07:15.19 ------------------------------ -• [1.288 seconds] -[sig-auth] ServiceAccounts -test/e2e/auth/framework.go:23 - should guarantee kube-root-ca.crt exist in any namespace [Conformance] - test/e2e/auth/service_accounts.go:742 +• [SLOW TEST] [52.610 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-auth] ServiceAccounts + [BeforeEach] [sig-node] Probing container set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:35:24.921 - Jun 12 21:35:24.922: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename svcaccounts 06/12/23 21:35:24.925 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:25.007 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:25.018 - [BeforeEach] [sig-auth] ServiceAccounts + STEP: Creating a kubernetes client 07/27/23 02:06:22.603 + Jul 27 02:06:22.603: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-probe 07/27/23 02:06:22.604 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:06:22.694 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:06:22.702 + [BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 - [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] - test/e2e/auth/service_accounts.go:742 - Jun 12 21:35:25.044: INFO: Got root ca configmap in namespace "svcaccounts-8121" - Jun 12 21:35:25.068: INFO: Deleted root ca configmap in namespace "svcaccounts-8121" - STEP: waiting for a new root ca configmap created 06/12/23 21:35:25.569 - Jun 12 21:35:25.598: INFO: Recreated root ca configmap in namespace "svcaccounts-8121" - Jun 12 21:35:25.614: INFO: Updated 
root ca configmap in namespace "svcaccounts-8121" - STEP: waiting for the root ca configmap reconciled 06/12/23 21:35:26.119 - Jun 12 21:35:26.167: INFO: Reconciled root ca configmap in namespace "svcaccounts-8121" - [AfterEach] [sig-auth] ServiceAccounts + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 + STEP: Creating pod busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4 in namespace container-probe-7649 07/27/23 02:06:22.71 + W0727 02:06:22.734475 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "busybox" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "busybox" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "busybox" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "busybox" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:06:22.734: INFO: Waiting up to 5m0s for pod "busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4" in namespace "container-probe-7649" to be "not pending" + Jul 27 02:06:22.751: INFO: Pod "busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.045099ms + Jul 27 02:06:24.760: INFO: Pod "busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4": Phase="Running", Reason="", readiness=true. Elapsed: 2.026070894s + Jul 27 02:06:24.760: INFO: Pod "busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4" satisfied condition "not pending" + Jul 27 02:06:24.760: INFO: Started pod busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4 in namespace container-probe-7649 + STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 02:06:24.76 + Jul 27 02:06:24.768: INFO: Initial restart count of pod busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4 is 0 + Jul 27 02:07:15.153: INFO: Restart count of pod container-probe-7649/busybox-8ad4fd73-1980-479f-bd1a-c563de2229f4 is now 1 (50.385030777s elapsed) + STEP: deleting the pod 07/27/23 02:07:15.153 + [AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 - Jun 12 21:35:26.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + Jul 27 02:07:15.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + [DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + [DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 - STEP: Destroying namespace "svcaccounts-8121" for this suite. 06/12/23 21:35:26.186 + STEP: Destroying namespace "container-probe-7649" for this suite. 
07/27/23 02:07:15.19 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +SSSSSSSSSSSSS ------------------------------ -[sig-apps] Job - should manage the lifecycle of a job [Conformance] - test/e2e/apps/job.go:703 -[BeforeEach] [sig-apps] Job +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 +[BeforeEach] [sig-node] PodTemplates set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:35:26.211 -Jun 12 21:35:26.211: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename job 06/12/23 21:35:26.212 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:26.267 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:26.278 -[BeforeEach] [sig-apps] Job +STEP: Creating a kubernetes client 07/27/23 02:07:15.214 +Jul 27 02:07:15.214: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename podtemplate 07/27/23 02:07:15.215 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:07:15.266 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:07:15.276 +[BeforeEach] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:31 -[It] should manage the lifecycle of a job [Conformance] - test/e2e/apps/job.go:703 -STEP: Creating a suspended job 06/12/23 21:35:26.302 -STEP: Patching the Job 06/12/23 21:35:26.322 -STEP: Watching for Job to be patched 06/12/23 21:35:26.388 -Jun 12 21:35:26.406: INFO: Event ADDED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking:] -Jun 12 21:35:26.406: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking:] -Jun 12 21:35:26.406: INFO: Event MODIFIED found for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking:] -STEP: Updating the job 06/12/23 21:35:26.406 -STEP: Watching for Job to be updated 06/12/23 21:35:26.497 -Jun 12 21:35:26.507: INFO: Event MODIFIED found for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -Jun 12 21:35:26.507: INFO: Found Job annotations: map[string]string{"batch.kubernetes.io/job-tracking":"", "updated":"true"} -STEP: Listing all Jobs with LabelSelector 06/12/23 21:35:26.507 -Jun 12 21:35:26.528: INFO: Job: e2e-gxw4b as labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] -STEP: Waiting for job to complete 06/12/23 21:35:26.528 -STEP: Delete a job collection with a labelselector 06/12/23 21:35:44.542 -STEP: Watching for Job to be deleted 06/12/23 21:35:44.567 -Jun 12 21:35:44.579: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -Jun 12 21:35:44.579: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -Jun 12 21:35:44.580: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: 
map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -Jun 12 21:35:44.581: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -Jun 12 21:35:44.581: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -Jun 12 21:35:44.581: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -Jun 12 21:35:44.582: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -Jun 12 21:35:44.582: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -Jun 12 21:35:44.582: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -Jun 12 21:35:44.583: INFO: Event DELETED found for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] -STEP: Relist jobs to confirm deletion 06/12/23 21:35:44.583 -[AfterEach] [sig-apps] Job +[It] should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 +STEP: Create set of pod templates 07/27/23 02:07:15.285 +W0727 02:07:16.304705 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "token-test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "token-test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "token-test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "token-test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:07:16.304: INFO: created test-podtemplate-1 +Jul 27 02:07:16.320: INFO: created test-podtemplate-2 +Jul 27 02:07:16.337: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace 07/27/23 02:07:16.337 +STEP: delete collection of pod templates 07/27/23 02:07:16.349 +Jul 27 02:07:16.349: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity 07/27/23 02:07:16.411 +Jul 27 02:07:16.411: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates test/e2e/framework/node/init/init.go:32 -Jun 12 21:35:44.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Job +Jul 27 02:07:16.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Job +[DeferCleanup (Each)] [sig-node] PodTemplates dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Job 
+[DeferCleanup (Each)] [sig-node] PodTemplates tear down framework | framework.go:193 -STEP: Destroying namespace "job-6778" for this suite. 06/12/23 21:35:44.614 +STEP: Destroying namespace "podtemplate-4700" for this suite. 07/27/23 02:07:16.438 ------------------------------ -• [SLOW TEST] [18.427 seconds] -[sig-apps] Job -test/e2e/apps/framework.go:23 - should manage the lifecycle of a job [Conformance] - test/e2e/apps/job.go:703 +• [1.246 seconds] +[sig-node] PodTemplates +test/e2e/common/node/framework.go:23 + should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Job + [BeforeEach] [sig-node] PodTemplates set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:35:26.211 - Jun 12 21:35:26.211: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename job 06/12/23 21:35:26.212 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:26.267 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:26.278 - [BeforeEach] [sig-apps] Job + STEP: Creating a kubernetes client 07/27/23 02:07:15.214 + Jul 27 02:07:15.214: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename podtemplate 07/27/23 02:07:15.215 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:07:15.266 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:07:15.276 + [BeforeEach] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:31 - [It] should manage the lifecycle of a job [Conformance] - test/e2e/apps/job.go:703 - STEP: Creating a suspended job 06/12/23 21:35:26.302 - STEP: Patching the Job 06/12/23 21:35:26.322 - STEP: Watching for Job to be patched 06/12/23 21:35:26.388 - Jun 12 21:35:26.406: INFO: Event ADDED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking:] - Jun 12 21:35:26.406: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking:] - Jun 12 21:35:26.406: INFO: Event MODIFIED found for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking:] - STEP: Updating the job 06/12/23 21:35:26.406 - STEP: Watching for Job to be updated 06/12/23 21:35:26.497 - Jun 12 21:35:26.507: INFO: Event MODIFIED found for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - Jun 12 21:35:26.507: INFO: Found Job annotations: map[string]string{"batch.kubernetes.io/job-tracking":"", "updated":"true"} - STEP: Listing all Jobs with LabelSelector 06/12/23 21:35:26.507 - Jun 12 21:35:26.528: INFO: Job: e2e-gxw4b as labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] - STEP: Waiting for job to complete 06/12/23 21:35:26.528 - STEP: Delete a job collection with a labelselector 06/12/23 21:35:44.542 - STEP: Watching for Job to be deleted 06/12/23 21:35:44.567 - Jun 12 21:35:44.579: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - Jun 
12 21:35:44.579: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - Jun 12 21:35:44.580: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - Jun 12 21:35:44.581: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - Jun 12 21:35:44.581: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - Jun 12 21:35:44.581: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - Jun 12 21:35:44.582: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - Jun 12 21:35:44.582: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - Jun 12 21:35:44.582: INFO: Event MODIFIED observed for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - Jun 12 21:35:44.583: INFO: Event DELETED found for Job e2e-gxw4b in namespace job-6778 with labels: map[e2e-gxw4b:patched e2e-job-label:e2e-gxw4b] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] - STEP: Relist jobs to confirm deletion 06/12/23 21:35:44.583 - [AfterEach] [sig-apps] Job + [It] should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 + STEP: Create set of pod templates 07/27/23 02:07:15.285 + W0727 02:07:16.304705 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "token-test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "token-test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "token-test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "token-test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:07:16.304: INFO: created test-podtemplate-1 + Jul 27 02:07:16.320: INFO: created test-podtemplate-2 + Jul 27 02:07:16.337: INFO: created test-podtemplate-3 + STEP: get a list of pod templates with a label in the current namespace 07/27/23 02:07:16.337 + STEP: delete collection of pod templates 07/27/23 02:07:16.349 + Jul 27 02:07:16.349: INFO: requesting DeleteCollection of pod templates + STEP: check that the list of pod templates matches the requested quantity 07/27/23 02:07:16.411 + Jul 27 02:07:16.411: INFO: requesting list of pod templates to confirm quantity + [AfterEach] [sig-node] PodTemplates test/e2e/framework/node/init/init.go:32 - Jun 12 21:35:44.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] 
[sig-apps] Job + Jul 27 02:07:16.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Job + [DeferCleanup (Each)] [sig-node] PodTemplates dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Job + [DeferCleanup (Each)] [sig-node] PodTemplates tear down framework | framework.go:193 - STEP: Destroying namespace "job-6778" for this suite. 06/12/23 21:35:44.614 + STEP: Destroying namespace "podtemplate-4700" for this suite. 07/27/23 02:07:16.438 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSS ------------------------------ -[sig-node] Ephemeral Containers [NodeConformance] - will start an ephemeral container in an existing pod [Conformance] - test/e2e/common/node/ephemeral_containers.go:45 -[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 +[BeforeEach] [sig-node] PodTemplates set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:35:44.639 -Jun 12 21:35:44.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename ephemeral-containers-test 06/12/23 21:35:44.64 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:44.72 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:44.731 -[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] +STEP: Creating a kubernetes client 07/27/23 02:07:16.463 +Jul 27 02:07:16.463: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename podtemplate 07/27/23 02:07:16.464 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:07:16.503 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:07:16.512 +[BeforeEach] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] - test/e2e/common/node/ephemeral_containers.go:38 -[It] will start an ephemeral container in an existing pod [Conformance] - test/e2e/common/node/ephemeral_containers.go:45 -STEP: creating a target pod 06/12/23 21:35:44.742 -Jun 12 21:35:44.798: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1925" to be "running and ready" -Jun 12 21:35:44.815: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 16.506755ms -Jun 12 21:35:44.815: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:35:46.825: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026300865s -Jun 12 21:35:46.825: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:35:48.824: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.025657514s -Jun 12 21:35:48.824: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) -Jun 12 21:35:48.824: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" -STEP: adding an ephemeral container 06/12/23 21:35:48.839 -Jun 12 21:35:48.860: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1925" to be "container debugger running" -Jun 12 21:35:48.871: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 10.609772ms -Jun 12 21:35:50.888: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.027851859s -Jun 12 21:35:50.888: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "container debugger running" -STEP: checking pod container endpoints 06/12/23 21:35:50.888 -Jun 12 21:35:50.889: INFO: ExecWithOptions {Command:[/bin/echo marco] Namespace:ephemeral-containers-test-1925 PodName:ephemeral-containers-target-pod ContainerName:debugger Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:35:50.889: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:35:50.922: INFO: ExecWithOptions: Clientset creation -Jun 12 21:35:50.922: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/ephemeral-containers-test-1925/pods/ephemeral-containers-target-pod/exec?command=%2Fbin%2Fecho&command=marco&container=debugger&container=debugger&stderr=true&stdout=true) -Jun 12 21:35:51.252: INFO: Exec stderr: "" -[AfterEach] [sig-node] Ephemeral Containers [NodeConformance] +[It] should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 +[AfterEach] [sig-node] PodTemplates test/e2e/framework/node/init/init.go:32 -Jun 12 21:35:51.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] +Jul 27 02:07:16.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] +[DeferCleanup (Each)] [sig-node] PodTemplates dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] +[DeferCleanup (Each)] [sig-node] PodTemplates tear down framework | framework.go:193 -STEP: Destroying namespace "ephemeral-containers-test-1925" for this suite. 06/12/23 21:35:51.357 +STEP: Destroying namespace "podtemplate-1416" for this suite. 
07/27/23 02:07:16.676 ------------------------------ -• [SLOW TEST] [6.751 seconds] -[sig-node] Ephemeral Containers [NodeConformance] +• [0.234 seconds] +[sig-node] PodTemplates test/e2e/common/node/framework.go:23 - will start an ephemeral container in an existing pod [Conformance] - test/e2e/common/node/ephemeral_containers.go:45 + should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + [BeforeEach] [sig-node] PodTemplates set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:35:44.639 - Jun 12 21:35:44.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename ephemeral-containers-test 06/12/23 21:35:44.64 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:44.72 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:44.731 - [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + STEP: Creating a kubernetes client 07/27/23 02:07:16.463 + Jul 27 02:07:16.463: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename podtemplate 07/27/23 02:07:16.464 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:07:16.503 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:07:16.512 + [BeforeEach] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] - test/e2e/common/node/ephemeral_containers.go:38 - [It] will start an ephemeral container in an existing pod [Conformance] - test/e2e/common/node/ephemeral_containers.go:45 - STEP: creating a target pod 06/12/23 21:35:44.742 - Jun 12 21:35:44.798: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1925" to be "running and ready" - Jun 12 21:35:44.815: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 16.506755ms - Jun 12 21:35:44.815: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:35:46.825: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026300865s - Jun 12 21:35:46.825: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:35:48.824: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.025657514s - Jun 12 21:35:48.824: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) - Jun 12 21:35:48.824: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" - STEP: adding an ephemeral container 06/12/23 21:35:48.839 - Jun 12 21:35:48.860: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1925" to be "container debugger running" - Jun 12 21:35:48.871: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 10.609772ms - Jun 12 21:35:50.888: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.027851859s - Jun 12 21:35:50.888: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "container debugger running" - STEP: checking pod container endpoints 06/12/23 21:35:50.888 - Jun 12 21:35:50.889: INFO: ExecWithOptions {Command:[/bin/echo marco] Namespace:ephemeral-containers-test-1925 PodName:ephemeral-containers-target-pod ContainerName:debugger Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:35:50.889: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:35:50.922: INFO: ExecWithOptions: Clientset creation - Jun 12 21:35:50.922: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/ephemeral-containers-test-1925/pods/ephemeral-containers-target-pod/exec?command=%2Fbin%2Fecho&command=marco&container=debugger&container=debugger&stderr=true&stdout=true) - Jun 12 21:35:51.252: INFO: Exec stderr: "" - [AfterEach] [sig-node] Ephemeral Containers [NodeConformance] + [It] should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 + [AfterEach] [sig-node] PodTemplates test/e2e/framework/node/init/init.go:32 - Jun 12 21:35:51.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + Jul 27 02:07:16.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PodTemplates test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + [DeferCleanup (Each)] [sig-node] PodTemplates dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + [DeferCleanup (Each)] [sig-node] PodTemplates tear down framework | framework.go:193 - STEP: Destroying namespace "ephemeral-containers-test-1925" for this suite. 06/12/23 21:35:51.357 + STEP: Destroying namespace "podtemplate-1416" for this suite. 
07/27/23 02:07:16.676 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS ------------------------------- -[sig-auth] ServiceAccounts - should update a ServiceAccount [Conformance] - test/e2e/auth/service_accounts.go:810 -[BeforeEach] [sig-auth] ServiceAccounts - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:35:51.391 -Jun 12 21:35:51.391: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename svcaccounts 06/12/23 21:35:51.394 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:51.5 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:51.512 -[BeforeEach] [sig-auth] ServiceAccounts - test/e2e/framework/metrics/init/init.go:31 -[It] should update a ServiceAccount [Conformance] - test/e2e/auth/service_accounts.go:810 -STEP: Creating ServiceAccount "e2e-sa-m65dj" 06/12/23 21:35:51.526 -Jun 12 21:35:51.549: INFO: AutomountServiceAccountToken: false -STEP: Updating ServiceAccount "e2e-sa-m65dj" 06/12/23 21:35:51.55 -Jun 12 21:35:51.591: INFO: AutomountServiceAccountToken: true -[AfterEach] [sig-auth] ServiceAccounts - test/e2e/framework/node/init/init.go:32 -Jun 12 21:35:51.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-auth] ServiceAccounts - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-auth] ServiceAccounts - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-auth] ServiceAccounts - tear down framework | framework.go:193 -STEP: Destroying namespace "svcaccounts-3342" for this suite. 06/12/23 21:35:51.606 ------------------------------- -• [0.249 seconds] -[sig-auth] ServiceAccounts -test/e2e/auth/framework.go:23 - should update a ServiceAccount [Conformance] - test/e2e/auth/service_accounts.go:810 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-auth] ServiceAccounts - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:35:51.391 - Jun 12 21:35:51.391: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename svcaccounts 06/12/23 21:35:51.394 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:51.5 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:51.512 - [BeforeEach] [sig-auth] ServiceAccounts - test/e2e/framework/metrics/init/init.go:31 - [It] should update a ServiceAccount [Conformance] - test/e2e/auth/service_accounts.go:810 - STEP: Creating ServiceAccount "e2e-sa-m65dj" 06/12/23 21:35:51.526 - Jun 12 21:35:51.549: INFO: AutomountServiceAccountToken: false - STEP: Updating ServiceAccount "e2e-sa-m65dj" 06/12/23 21:35:51.55 - Jun 12 21:35:51.591: INFO: AutomountServiceAccountToken: true - [AfterEach] [sig-auth] ServiceAccounts - test/e2e/framework/node/init/init.go:32 - Jun 12 21:35:51.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-auth] ServiceAccounts - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-auth] ServiceAccounts - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-auth] ServiceAccounts - tear down framework | framework.go:193 - STEP: Destroying namespace "svcaccounts-3342" for this suite. 
06/12/23 21:35:51.606 - << End Captured GinkgoWriter Output ------------------------------- -SSSSSSSSSS +S ------------------------------ [sig-network] Services should provide secure master service [Conformance] test/e2e/network/service.go:777 [BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:35:51.653 -Jun 12 21:35:51.654: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:35:51.656 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:51.736 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:51.788 +STEP: Creating a kubernetes client 07/27/23 02:07:16.698 +Jul 27 02:07:16.698: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 02:07:16.699 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:07:16.744 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:07:16.758 [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] Services @@ -21154,16 +18640,16 @@ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35 test/e2e/network/service.go:777 [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:35:52.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:07:16.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "services-976" for this suite. 06/12/23 21:35:52.033 +STEP: Destroying namespace "services-1106" for this suite. 
07/27/23 02:07:16.829 ------------------------------ -• [0.417 seconds] +• [0.158 seconds] [sig-network] Services test/e2e/network/common/framework.go:23 should provide secure master service [Conformance] @@ -21172,11 +18658,11 @@ test/e2e/network/common/framework.go:23 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:35:51.653 - Jun 12 21:35:51.654: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:35:51.656 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:51.736 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:51.788 + STEP: Creating a kubernetes client 07/27/23 02:07:16.698 + Jul 27 02:07:16.698: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 02:07:16.699 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:07:16.744 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:07:16.758 [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] Services @@ -21185,2999 +18671,4114 @@ test/e2e/network/common/framework.go:23 test/e2e/network/service.go:777 [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:35:52.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:07:16.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "services-976" for this suite. 06/12/23 21:35:52.033 + STEP: Destroying namespace "services-1106" for this suite. 
07/27/23 02:07:16.829 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +S ------------------------------ -[sig-node] Security Context - should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] - test/e2e/node/security_context.go:164 -[BeforeEach] [sig-node] Security Context +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:35:52.074 -Jun 12 21:35:52.074: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename security-context 06/12/23 21:35:52.079 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:52.184 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:52.244 -[BeforeEach] [sig-node] Security Context +STEP: Creating a kubernetes client 07/27/23 02:07:16.856 +Jul 27 02:07:16.856: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-watch 07/27/23 02:07:16.857 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:07:16.899 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:07:16.908 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] - test/e2e/node/security_context.go:164 -STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 06/12/23 21:35:52.292 -Jun 12 21:35:52.358: INFO: Waiting up to 5m0s for pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85" in namespace "security-context-7628" to be "Succeeded or Failed" -Jun 12 21:35:52.366: INFO: Pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85": Phase="Pending", Reason="", readiness=false. Elapsed: 7.942111ms -Jun 12 21:35:54.377: INFO: Pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018218221s -Jun 12 21:35:56.377: INFO: Pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01800602s -Jun 12 21:35:58.377: INFO: Pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018499393s -STEP: Saw pod success 06/12/23 21:35:58.377 -Jun 12 21:35:58.378: INFO: Pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85" satisfied condition "Succeeded or Failed" -Jun 12 21:35:58.386: INFO: Trying to get logs from node 10.138.75.112 pod security-context-83ef8644-c523-437b-8103-6b7dfb786f85 container test-container: -STEP: delete the pod 06/12/23 21:35:58.495 -Jun 12 21:35:58.548: INFO: Waiting for pod security-context-83ef8644-c523-437b-8103-6b7dfb786f85 to disappear -Jun 12 21:35:58.570: INFO: Pod security-context-83ef8644-c523-437b-8103-6b7dfb786f85 no longer exists -[AfterEach] [sig-node] Security Context +[It] watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 +Jul 27 02:07:16.918: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Creating first CR 07/27/23 02:07:19.562 +Jul 27 02:07:19.581: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:19Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:19Z]] name:name1 resourceVersion:93200 uid:9c5085e3-ef44-484a-b98b-e4ec11a23677] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR 07/27/23 02:07:29.583 +Jul 27 02:07:29.603: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:29Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:29Z]] name:name2 resourceVersion:93313 uid:c1f657bc-1ec5-43b5-9555-31821fdbf10c] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR 07/27/23 02:07:39.603 +Jul 27 02:07:39.622: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:39Z]] name:name1 resourceVersion:93364 uid:9c5085e3-ef44-484a-b98b-e4ec11a23677] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR 07/27/23 02:07:49.624 +Jul 27 02:07:49.643: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:49Z]] name:name2 resourceVersion:93407 uid:c1f657bc-1ec5-43b5-9555-31821fdbf10c] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR 07/27/23 02:07:59.643 +Jul 27 02:07:59.667: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:19Z generation:2 
managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:39Z]] name:name1 resourceVersion:93448 uid:9c5085e3-ef44-484a-b98b-e4ec11a23677] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR 07/27/23 02:08:09.669 +Jul 27 02:08:09.694: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:49Z]] name:name2 resourceVersion:93501 uid:c1f657bc-1ec5-43b5-9555-31821fdbf10c] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:35:58.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Security Context +Jul 27 02:08:20.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Security Context +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Security Context +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "security-context-7628" for this suite. 06/12/23 21:35:58.636 +STEP: Destroying namespace "crd-watch-201" for this suite. 
07/27/23 02:08:20.244 ------------------------------ -• [SLOW TEST] [6.596 seconds] -[sig-node] Security Context -test/e2e/node/framework.go:23 - should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] - test/e2e/node/security_context.go:164 +• [SLOW TEST] [63.416 seconds] +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + CustomResourceDefinition Watch + test/e2e/apimachinery/crd_watch.go:44 + watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Security Context + [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:35:52.074 - Jun 12 21:35:52.074: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename security-context 06/12/23 21:35:52.079 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:52.184 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:52.244 - [BeforeEach] [sig-node] Security Context + STEP: Creating a kubernetes client 07/27/23 02:07:16.856 + Jul 27 02:07:16.856: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-watch 07/27/23 02:07:16.857 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:07:16.899 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:07:16.908 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] - test/e2e/node/security_context.go:164 - STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 06/12/23 21:35:52.292 - Jun 12 21:35:52.358: INFO: Waiting up to 5m0s for pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85" in namespace "security-context-7628" to be "Succeeded or Failed" - Jun 12 21:35:52.366: INFO: Pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85": Phase="Pending", Reason="", readiness=false. Elapsed: 7.942111ms - Jun 12 21:35:54.377: INFO: Pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018218221s - Jun 12 21:35:56.377: INFO: Pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01800602s - Jun 12 21:35:58.377: INFO: Pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018499393s - STEP: Saw pod success 06/12/23 21:35:58.377 - Jun 12 21:35:58.378: INFO: Pod "security-context-83ef8644-c523-437b-8103-6b7dfb786f85" satisfied condition "Succeeded or Failed" - Jun 12 21:35:58.386: INFO: Trying to get logs from node 10.138.75.112 pod security-context-83ef8644-c523-437b-8103-6b7dfb786f85 container test-container: - STEP: delete the pod 06/12/23 21:35:58.495 - Jun 12 21:35:58.548: INFO: Waiting for pod security-context-83ef8644-c523-437b-8103-6b7dfb786f85 to disappear - Jun 12 21:35:58.570: INFO: Pod security-context-83ef8644-c523-437b-8103-6b7dfb786f85 no longer exists - [AfterEach] [sig-node] Security Context + [It] watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 + Jul 27 02:07:16.918: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Creating first CR 07/27/23 02:07:19.562 + Jul 27 02:07:19.581: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:19Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:19Z]] name:name1 resourceVersion:93200 uid:9c5085e3-ef44-484a-b98b-e4ec11a23677] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Creating second CR 07/27/23 02:07:29.583 + Jul 27 02:07:29.603: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:29Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:29Z]] name:name2 resourceVersion:93313 uid:c1f657bc-1ec5-43b5-9555-31821fdbf10c] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Modifying first CR 07/27/23 02:07:39.603 + Jul 27 02:07:39.622: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:39Z]] name:name1 resourceVersion:93364 uid:9c5085e3-ef44-484a-b98b-e4ec11a23677] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Modifying second CR 07/27/23 02:07:49.624 + Jul 27 02:07:49.643: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:49Z]] name:name2 resourceVersion:93407 uid:c1f657bc-1ec5-43b5-9555-31821fdbf10c] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Deleting first CR 07/27/23 02:07:59.643 + Jul 27 02:07:59.667: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2023-07-27T02:07:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:39Z]] name:name1 resourceVersion:93448 uid:9c5085e3-ef44-484a-b98b-e4ec11a23677] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Deleting second CR 07/27/23 02:08:09.669 + Jul 27 02:08:09.694: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-07-27T02:07:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-07-27T02:07:49Z]] name:name2 resourceVersion:93501 uid:c1f657bc-1ec5-43b5-9555-31821fdbf10c] num:map[num1:9223372036854775807 num2:1000000]]} + [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:35:58.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Security Context + Jul 27 02:08:20.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Security Context + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Security Context + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "security-context-7628" for this suite. 06/12/23 21:35:58.636 + STEP: Destroying namespace "crd-watch-201" for this suite. 
07/27/23 02:08:20.244 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] - test/e2e/apimachinery/webhook.go:277 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 +[BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:35:58.723 -Jun 12 21:35:58.725: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 21:35:58.734 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:58.854 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:58.882 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:08:20.273 +Jul 27 02:08:20.273: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename subpath 07/27/23 02:08:20.274 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:08:20.345 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:08:20.357 +[BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 21:35:58.98 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:36:00.815 -STEP: Deploying the webhook pod 06/12/23 21:36:00.845 -STEP: Wait for the deployment to be ready 06/12/23 21:36:00.873 -Jun 12 21:36:00.891: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 21:36:02.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 36, 0, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 36, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 36, 0, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 36, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 21:36:04.935 -STEP: Verifying the service has paired with the endpoint 06/12/23 21:36:04.998 -Jun 12 21:36:05.999: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] - test/e2e/apimachinery/webhook.go:277 -STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 06/12/23 21:36:06.019 -STEP: Registering a mutating webhook on 
ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 06/12/23 21:36:06.07 -STEP: Creating a dummy validating-webhook-configuration object 06/12/23 21:36:06.134 -STEP: Deleting the validating-webhook-configuration, which should be possible to remove 06/12/23 21:36:06.16 -STEP: Creating a dummy mutating-webhook-configuration object 06/12/23 21:36:06.192 -STEP: Deleting the mutating-webhook-configuration, which should be possible to remove 06/12/23 21:36:06.213 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 07/27/23 02:08:20.366 +[It] should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 +STEP: Creating pod pod-subpath-test-downwardapi-57dm 07/27/23 02:08:20.401 +STEP: Creating a pod to test atomic-volume-subpath 07/27/23 02:08:20.401 +Jul 27 02:08:20.445: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-57dm" in namespace "subpath-9788" to be "Succeeded or Failed" +Jul 27 02:08:20.453: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Pending", Reason="", readiness=false. Elapsed: 7.886371ms +Jul 27 02:08:22.467: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 2.021788051s +Jul 27 02:08:24.463: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 4.01718919s +Jul 27 02:08:26.464: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 6.018529936s +Jul 27 02:08:28.462: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 8.016653442s +Jul 27 02:08:30.463: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 10.017876014s +Jul 27 02:08:32.462: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 12.016805404s +Jul 27 02:08:34.464: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 14.018648455s +Jul 27 02:08:36.464: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 16.018446408s +Jul 27 02:08:38.462: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 18.017141371s +Jul 27 02:08:40.463: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 20.017621599s +Jul 27 02:08:42.478: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=false. Elapsed: 22.032439949s +Jul 27 02:08:44.462: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.016714324s +STEP: Saw pod success 07/27/23 02:08:44.462 +Jul 27 02:08:44.462: INFO: Pod "pod-subpath-test-downwardapi-57dm" satisfied condition "Succeeded or Failed" +Jul 27 02:08:44.470: INFO: Trying to get logs from node 10.245.128.19 pod pod-subpath-test-downwardapi-57dm container test-container-subpath-downwardapi-57dm: +STEP: delete the pod 07/27/23 02:08:44.512 +Jul 27 02:08:44.544: INFO: Waiting for pod pod-subpath-test-downwardapi-57dm to disappear +Jul 27 02:08:44.551: INFO: Pod pod-subpath-test-downwardapi-57dm no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-57dm 07/27/23 02:08:44.551 +Jul 27 02:08:44.551: INFO: Deleting pod "pod-subpath-test-downwardapi-57dm" in namespace "subpath-9788" +[AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 -Jun 12 21:36:06.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 02:08:44.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] Subpath tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-8145" for this suite. 06/12/23 21:36:06.437 -STEP: Destroying namespace "webhook-8145-markers" for this suite. 06/12/23 21:36:06.465 +STEP: Destroying namespace "subpath-9788" for this suite. 
07/27/23 02:08:44.571 ------------------------------ -• [SLOW TEST] [7.766 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] - test/e2e/apimachinery/webhook.go:277 +• [SLOW TEST] [24.321 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:35:58.723 - Jun 12 21:35:58.725: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 21:35:58.734 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:35:58.854 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:35:58.882 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:08:20.273 + Jul 27 02:08:20.273: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename subpath 07/27/23 02:08:20.274 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:08:20.345 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:08:20.357 + [BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 21:35:58.98 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:36:00.815 - STEP: Deploying the webhook pod 06/12/23 21:36:00.845 - STEP: Wait for the deployment to be ready 06/12/23 21:36:00.873 - Jun 12 21:36:00.891: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 21:36:02.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 36, 0, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 36, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 36, 0, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 36, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 21:36:04.935 - STEP: Verifying the service has paired with the endpoint 06/12/23 21:36:04.998 - Jun 12 21:36:05.999: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] - test/e2e/apimachinery/webhook.go:277 - STEP: Registering a validating webhook on ValidatingWebhookConfiguration and 
MutatingWebhookConfiguration objects, via the AdmissionRegistration API 06/12/23 21:36:06.019 - STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 06/12/23 21:36:06.07 - STEP: Creating a dummy validating-webhook-configuration object 06/12/23 21:36:06.134 - STEP: Deleting the validating-webhook-configuration, which should be possible to remove 06/12/23 21:36:06.16 - STEP: Creating a dummy mutating-webhook-configuration object 06/12/23 21:36:06.192 - STEP: Deleting the mutating-webhook-configuration, which should be possible to remove 06/12/23 21:36:06.213 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 07/27/23 02:08:20.366 + [It] should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 + STEP: Creating pod pod-subpath-test-downwardapi-57dm 07/27/23 02:08:20.401 + STEP: Creating a pod to test atomic-volume-subpath 07/27/23 02:08:20.401 + Jul 27 02:08:20.445: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-57dm" in namespace "subpath-9788" to be "Succeeded or Failed" + Jul 27 02:08:20.453: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Pending", Reason="", readiness=false. Elapsed: 7.886371ms + Jul 27 02:08:22.467: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 2.021788051s + Jul 27 02:08:24.463: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 4.01718919s + Jul 27 02:08:26.464: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 6.018529936s + Jul 27 02:08:28.462: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 8.016653442s + Jul 27 02:08:30.463: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 10.017876014s + Jul 27 02:08:32.462: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 12.016805404s + Jul 27 02:08:34.464: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 14.018648455s + Jul 27 02:08:36.464: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 16.018446408s + Jul 27 02:08:38.462: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 18.017141371s + Jul 27 02:08:40.463: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=true. Elapsed: 20.017621599s + Jul 27 02:08:42.478: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Running", Reason="", readiness=false. Elapsed: 22.032439949s + Jul 27 02:08:44.462: INFO: Pod "pod-subpath-test-downwardapi-57dm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.016714324s + STEP: Saw pod success 07/27/23 02:08:44.462 + Jul 27 02:08:44.462: INFO: Pod "pod-subpath-test-downwardapi-57dm" satisfied condition "Succeeded or Failed" + Jul 27 02:08:44.470: INFO: Trying to get logs from node 10.245.128.19 pod pod-subpath-test-downwardapi-57dm container test-container-subpath-downwardapi-57dm: + STEP: delete the pod 07/27/23 02:08:44.512 + Jul 27 02:08:44.544: INFO: Waiting for pod pod-subpath-test-downwardapi-57dm to disappear + Jul 27 02:08:44.551: INFO: Pod pod-subpath-test-downwardapi-57dm no longer exists + STEP: Deleting pod pod-subpath-test-downwardapi-57dm 07/27/23 02:08:44.551 + Jul 27 02:08:44.551: INFO: Deleting pod "pod-subpath-test-downwardapi-57dm" in namespace "subpath-9788" + [AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 - Jun 12 21:36:06.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 02:08:44.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Subpath tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-8145" for this suite. 06/12/23 21:36:06.437 - STEP: Destroying namespace "webhook-8145-markers" for this suite. 06/12/23 21:36:06.465 + STEP: Destroying namespace "subpath-9788" for this suite. 
07/27/23 02:08:44.571 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SS ------------------------------ -[sig-apps] Daemon set [Serial] - should rollback without unnecessary restarts [Conformance] - test/e2e/apps/daemon_set.go:432 -[BeforeEach] [sig-apps] Daemon set [Serial] +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 +[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:36:06.491 -Jun 12 21:36:06.491: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename daemonsets 06/12/23 21:36:06.494 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:36:06.543 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:36:06.586 -[BeforeEach] [sig-apps] Daemon set [Serial] +STEP: Creating a kubernetes client 07/27/23 02:08:44.595 +Jul 27 02:08:44.595: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename cronjob 07/27/23 02:08:44.596 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:08:44.638 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:08:44.646 +[BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 -[It] should rollback without unnecessary restarts [Conformance] - test/e2e/apps/daemon_set.go:432 -Jun 12 21:36:06.670: INFO: Create a RollingUpdate DaemonSet -Jun 12 21:36:06.686: INFO: Check that daemon pods launch on every node of the cluster -Jun 12 21:36:06.704: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:36:06.704: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:36:07.744: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:36:07.744: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:36:08.735: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:36:08.735: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 21:36:09.734: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 21:36:09.734: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:36:10.766: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 -Jun 12 21:36:10.766: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set -Jun 12 21:36:10.767: INFO: Update the DaemonSet to trigger a rollout -Jun 12 21:36:10.844: INFO: Updating DaemonSet daemon-set -Jun 12 21:36:13.889: INFO: Roll back the DaemonSet before rollout is complete -Jun 12 21:36:13.917: INFO: Updating DaemonSet daemon-set -Jun 12 21:36:13.917: INFO: Make sure DaemonSet rollback is complete -Jun 12 21:36:13.927: INFO: Wrong image for pod: daemon-set-fvgm9. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-4, got: foo:non-existent. 
-Jun 12 21:36:13.927: INFO: Pod daemon-set-fvgm9 is not available -Jun 12 21:36:20.980: INFO: Pod daemon-set-qd9jt is not available -[AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 -STEP: Deleting DaemonSet "daemon-set" 06/12/23 21:36:21.096 -STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-97, will wait for the garbage collector to delete the pods 06/12/23 21:36:21.096 -Jun 12 21:36:21.206: INFO: Deleting DaemonSet.extensions daemon-set took: 21.492963ms -Jun 12 21:36:21.307: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.866771ms -Jun 12 21:36:25.715: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:36:25.715: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set -Jun 12 21:36:25.726: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"109287"},"items":null} - -Jun 12 21:36:25.734: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"109287"},"items":null} - -[AfterEach] [sig-apps] Daemon set [Serial] +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 +STEP: Creating a ForbidConcurrent cronjob 07/27/23 02:08:44.656 +W0727 02:08:44.677708 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Ensuring a job is scheduled 07/27/23 02:08:44.677 +STEP: Ensuring exactly one is scheduled 07/27/23 02:09:00.692 +STEP: Ensuring exactly one running job exists by listing jobs explicitly 07/27/23 02:09:00.712 +STEP: Ensuring no more jobs are scheduled 07/27/23 02:09:00.723 +STEP: Removing cronjob 07/27/23 02:14:00.749 +[AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 -Jun 12 21:36:25.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +Jul 27 02:14:00.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +[DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +[DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 -STEP: Destroying namespace "daemonsets-97" for this suite. 06/12/23 21:36:25.806 +STEP: Destroying namespace "cronjob-7382" for this suite. 
07/27/23 02:14:00.784 ------------------------------ -• [SLOW TEST] [19.366 seconds] -[sig-apps] Daemon set [Serial] +• [SLOW TEST] [316.222 seconds] +[sig-apps] CronJob test/e2e/apps/framework.go:23 - should rollback without unnecessary restarts [Conformance] - test/e2e/apps/daemon_set.go:432 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Daemon set [Serial] + [BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:36:06.491 - Jun 12 21:36:06.491: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename daemonsets 06/12/23 21:36:06.494 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:36:06.543 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:36:06.586 - [BeforeEach] [sig-apps] Daemon set [Serial] + STEP: Creating a kubernetes client 07/27/23 02:08:44.595 + Jul 27 02:08:44.595: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename cronjob 07/27/23 02:08:44.596 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:08:44.638 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:08:44.646 + [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 - [It] should rollback without unnecessary restarts [Conformance] - test/e2e/apps/daemon_set.go:432 - Jun 12 21:36:06.670: INFO: Create a RollingUpdate DaemonSet - Jun 12 21:36:06.686: INFO: Check that daemon pods launch on every node of the cluster - Jun 12 21:36:06.704: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:36:06.704: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:36:07.744: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:36:07.744: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:36:08.735: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:36:08.735: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 21:36:09.734: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 21:36:09.734: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:36:10.766: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 - Jun 12 21:36:10.766: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set - Jun 12 21:36:10.767: INFO: Update the DaemonSet to trigger a rollout - Jun 12 21:36:10.844: INFO: Updating DaemonSet daemon-set - Jun 12 21:36:13.889: INFO: Roll back the DaemonSet before rollout is complete - Jun 12 21:36:13.917: INFO: Updating DaemonSet daemon-set - Jun 12 21:36:13.917: INFO: Make sure DaemonSet rollback is complete - Jun 12 21:36:13.927: INFO: Wrong image for pod: daemon-set-fvgm9. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-4, got: foo:non-existent. 
- Jun 12 21:36:13.927: INFO: Pod daemon-set-fvgm9 is not available - Jun 12 21:36:20.980: INFO: Pod daemon-set-qd9jt is not available - [AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 - STEP: Deleting DaemonSet "daemon-set" 06/12/23 21:36:21.096 - STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-97, will wait for the garbage collector to delete the pods 06/12/23 21:36:21.096 - Jun 12 21:36:21.206: INFO: Deleting DaemonSet.extensions daemon-set took: 21.492963ms - Jun 12 21:36:21.307: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.866771ms - Jun 12 21:36:25.715: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:36:25.715: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set - Jun 12 21:36:25.726: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"109287"},"items":null} - - Jun 12 21:36:25.734: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"109287"},"items":null} - - [AfterEach] [sig-apps] Daemon set [Serial] + [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 + STEP: Creating a ForbidConcurrent cronjob 07/27/23 02:08:44.656 + W0727 02:08:44.677708 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Ensuring a job is scheduled 07/27/23 02:08:44.677 + STEP: Ensuring exactly one is scheduled 07/27/23 02:09:00.692 + STEP: Ensuring exactly one running job exists by listing jobs explicitly 07/27/23 02:09:00.712 + STEP: Ensuring no more jobs are scheduled 07/27/23 02:09:00.723 + STEP: Removing cronjob 07/27/23 02:14:00.749 + [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 - Jun 12 21:36:25.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + Jul 27 02:14:00.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 - STEP: Destroying namespace "daemonsets-97" for this suite. 06/12/23 21:36:25.806 + STEP: Destroying namespace "cronjob-7382" for this suite. 
07/27/23 02:14:00.784 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] HostPort - validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] - test/e2e/network/hostport.go:63 -[BeforeEach] [sig-network] HostPort +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1302 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:36:25.873 -Jun 12 21:36:25.873: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename hostport 06/12/23 21:36:25.876 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:36:25.956 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:36:26.014 -[BeforeEach] [sig-network] HostPort +STEP: Creating a kubernetes client 07/27/23 02:14:00.819 +Jul 27 02:14:00.819: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 02:14:00.82 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:00.894 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:00.902 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] HostPort - test/e2e/network/hostport.go:49 -[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] - test/e2e/network/hostport.go:63 -STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 06/12/23 21:36:26.218 -Jun 12 21:36:26.300: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-8258" to be "running and ready" -Jun 12 21:36:26.314: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.459388ms -Jun 12 21:36:26.314: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:36:28.323: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022981898s -Jun 12 21:36:28.323: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:36:30.323: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 4.022906206s -Jun 12 21:36:30.323: INFO: The phase of Pod pod1 is Running (Ready = true) -Jun 12 21:36:30.323: INFO: Pod "pod1" satisfied condition "running and ready" -STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.138.75.112 on the node which pod1 resides and expect scheduled 06/12/23 21:36:30.323 -Jun 12 21:36:30.339: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-8258" to be "running and ready" -Jun 12 21:36:30.348: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.570964ms -Jun 12 21:36:30.348: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:36:32.368: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028830416s -Jun 12 21:36:32.368: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:36:34.358: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.01861584s -Jun 12 21:36:34.358: INFO: The phase of Pod pod2 is Running (Ready = true) -Jun 12 21:36:34.358: INFO: Pod "pod2" satisfied condition "running and ready" -STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.138.75.112 but use UDP protocol on the node which pod2 resides 06/12/23 21:36:34.358 -Jun 12 21:36:34.373: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-8258" to be "running and ready" -Jun 12 21:36:34.381: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258851ms -Jun 12 21:36:34.382: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:36:36.391: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017933123s -Jun 12 21:36:36.391: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:36:38.391: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. Elapsed: 4.018108324s -Jun 12 21:36:38.391: INFO: The phase of Pod pod3 is Running (Ready = true) -Jun 12 21:36:38.392: INFO: Pod "pod3" satisfied condition "running and ready" -Jun 12 21:36:38.405: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-8258" to be "running and ready" -Jun 12 21:36:38.413: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.441769ms -Jun 12 21:36:38.413: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:36:40.427: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. Elapsed: 2.02262171s -Jun 12 21:36:40.427: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) -Jun 12 21:36:40.427: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" -STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 06/12/23 21:36:40.435 -Jun 12 21:36:40.436: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.138.75.112 http://127.0.0.1:54323/hostname] Namespace:hostport-8258 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:36:40.436: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:36:40.437: INFO: ExecWithOptions: Clientset creation -Jun 12 21:36:40.437: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-8258/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+10.138.75.112+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) -STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.138.75.112, port: 54323 06/12/23 21:36:40.94 -Jun 12 21:36:40.940: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.138.75.112:54323/hostname] Namespace:hostport-8258 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:36:40.940: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:36:40.942: INFO: ExecWithOptions: Clientset creation -Jun 12 21:36:40.943: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-8258/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F10.138.75.112%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) 
-STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.138.75.112, port: 54323 UDP 06/12/23 21:36:41.3 -Jun 12 21:36:41.301: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostname | nc -u -w 5 10.138.75.112 54323] Namespace:hostport-8258 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:36:41.301: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:36:41.303: INFO: ExecWithOptions: Clientset creation -Jun 12 21:36:41.304: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-8258/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostname+%7C+nc+-u+-w+5+10.138.75.112+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) -[AfterEach] [sig-network] HostPort +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1302 +STEP: creating service nodeport-test with type=NodePort in namespace services-3085 07/27/23 02:14:00.911 +STEP: creating replication controller nodeport-test in namespace services-3085 07/27/23 02:14:01 +I0727 02:14:01.043798 20 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-3085, replica count: 2 +I0727 02:14:04.095207 20 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jul 27 02:14:04.095: INFO: Creating new exec pod +Jul 27 02:14:04.119: INFO: Waiting up to 5m0s for pod "execpoddjxsb" in namespace "services-3085" to be "running" +Jul 27 02:14:04.126: INFO: Pod "execpoddjxsb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.306705ms +Jul 27 02:14:06.139: INFO: Pod "execpoddjxsb": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.020130694s +Jul 27 02:14:06.139: INFO: Pod "execpoddjxsb" satisfied condition "running" +Jul 27 02:14:07.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3085 exec execpoddjxsb -- /bin/sh -x -c nc -v -z -w 2 nodeport-test 80' +Jul 27 02:14:07.525: INFO: stderr: "+ nc -v -z -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Jul 27 02:14:07.525: INFO: stdout: "" +Jul 27 02:14:07.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3085 exec execpoddjxsb -- /bin/sh -x -c nc -v -z -w 2 172.21.80.21 80' +Jul 27 02:14:07.844: INFO: stderr: "+ nc -v -z -w 2 172.21.80.21 80\nConnection to 172.21.80.21 80 port [tcp/http] succeeded!\n" +Jul 27 02:14:07.844: INFO: stdout: "" +Jul 27 02:14:07.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3085 exec execpoddjxsb -- /bin/sh -x -c nc -v -z -w 2 10.245.128.17 32461' +Jul 27 02:14:08.114: INFO: stderr: "+ nc -v -z -w 2 10.245.128.17 32461\nConnection to 10.245.128.17 32461 port [tcp/*] succeeded!\n" +Jul 27 02:14:08.114: INFO: stdout: "" +Jul 27 02:14:08.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3085 exec execpoddjxsb -- /bin/sh -x -c nc -v -z -w 2 10.245.128.19 32461' +Jul 27 02:14:08.457: INFO: stderr: "+ nc -v -z -w 2 10.245.128.19 32461\nConnection to 10.245.128.19 32461 port [tcp/*] succeeded!\n" +Jul 27 02:14:08.457: INFO: stdout: "" +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:36:46.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] HostPort +Jul 27 02:14:08.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] HostPort +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] HostPort +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "hostport-8258" for this suite. 06/12/23 21:36:46.676 +STEP: Destroying namespace "services-3085" for this suite. 
07/27/23 02:14:08.486 ------------------------------ -• [SLOW TEST] [20.829 seconds] -[sig-network] HostPort +• [SLOW TEST] [7.697 seconds] +[sig-network] Services test/e2e/network/common/framework.go:23 - validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] - test/e2e/network/hostport.go:63 + should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1302 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] HostPort + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:36:25.873 - Jun 12 21:36:25.873: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename hostport 06/12/23 21:36:25.876 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:36:25.956 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:36:26.014 - [BeforeEach] [sig-network] HostPort + STEP: Creating a kubernetes client 07/27/23 02:14:00.819 + Jul 27 02:14:00.819: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 02:14:00.82 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:00.894 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:00.902 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] HostPort - test/e2e/network/hostport.go:49 - [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] - test/e2e/network/hostport.go:63 - STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 06/12/23 21:36:26.218 - Jun 12 21:36:26.300: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-8258" to be "running and ready" - Jun 12 21:36:26.314: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.459388ms - Jun 12 21:36:26.314: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:36:28.323: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022981898s - Jun 12 21:36:28.323: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:36:30.323: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 4.022906206s - Jun 12 21:36:30.323: INFO: The phase of Pod pod1 is Running (Ready = true) - Jun 12 21:36:30.323: INFO: Pod "pod1" satisfied condition "running and ready" - STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.138.75.112 on the node which pod1 resides and expect scheduled 06/12/23 21:36:30.323 - Jun 12 21:36:30.339: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-8258" to be "running and ready" - Jun 12 21:36:30.348: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.570964ms - Jun 12 21:36:30.348: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:36:32.368: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028830416s - Jun 12 21:36:32.368: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:36:34.358: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.01861584s - Jun 12 21:36:34.358: INFO: The phase of Pod pod2 is Running (Ready = true) - Jun 12 21:36:34.358: INFO: Pod "pod2" satisfied condition "running and ready" - STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.138.75.112 but use UDP protocol on the node which pod2 resides 06/12/23 21:36:34.358 - Jun 12 21:36:34.373: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-8258" to be "running and ready" - Jun 12 21:36:34.381: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258851ms - Jun 12 21:36:34.382: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:36:36.391: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017933123s - Jun 12 21:36:36.391: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:36:38.391: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. Elapsed: 4.018108324s - Jun 12 21:36:38.391: INFO: The phase of Pod pod3 is Running (Ready = true) - Jun 12 21:36:38.392: INFO: Pod "pod3" satisfied condition "running and ready" - Jun 12 21:36:38.405: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-8258" to be "running and ready" - Jun 12 21:36:38.413: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.441769ms - Jun 12 21:36:38.413: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:36:40.427: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. Elapsed: 2.02262171s - Jun 12 21:36:40.427: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) - Jun 12 21:36:40.427: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" - STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 06/12/23 21:36:40.435 - Jun 12 21:36:40.436: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.138.75.112 http://127.0.0.1:54323/hostname] Namespace:hostport-8258 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:36:40.436: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:36:40.437: INFO: ExecWithOptions: Clientset creation - Jun 12 21:36:40.437: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-8258/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+10.138.75.112+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) - STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.138.75.112, port: 54323 06/12/23 21:36:40.94 - Jun 12 21:36:40.940: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.138.75.112:54323/hostname] Namespace:hostport-8258 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:36:40.940: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:36:40.942: INFO: ExecWithOptions: Clientset creation - Jun 12 21:36:40.943: INFO: ExecWithOptions: execute(POST 
https://172.21.0.1:443/api/v1/namespaces/hostport-8258/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F10.138.75.112%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) - STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.138.75.112, port: 54323 UDP 06/12/23 21:36:41.3 - Jun 12 21:36:41.301: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostname | nc -u -w 5 10.138.75.112 54323] Namespace:hostport-8258 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:36:41.301: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:36:41.303: INFO: ExecWithOptions: Clientset creation - Jun 12 21:36:41.304: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/hostport-8258/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostname+%7C+nc+-u+-w+5+10.138.75.112+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) - [AfterEach] [sig-network] HostPort + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1302 + STEP: creating service nodeport-test with type=NodePort in namespace services-3085 07/27/23 02:14:00.911 + STEP: creating replication controller nodeport-test in namespace services-3085 07/27/23 02:14:01 + I0727 02:14:01.043798 20 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-3085, replica count: 2 + I0727 02:14:04.095207 20 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jul 27 02:14:04.095: INFO: Creating new exec pod + Jul 27 02:14:04.119: INFO: Waiting up to 5m0s for pod "execpoddjxsb" in namespace "services-3085" to be "running" + Jul 27 02:14:04.126: INFO: Pod "execpoddjxsb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.306705ms + Jul 27 02:14:06.139: INFO: Pod "execpoddjxsb": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.020130694s + Jul 27 02:14:06.139: INFO: Pod "execpoddjxsb" satisfied condition "running" + Jul 27 02:14:07.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3085 exec execpoddjxsb -- /bin/sh -x -c nc -v -z -w 2 nodeport-test 80' + Jul 27 02:14:07.525: INFO: stderr: "+ nc -v -z -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" + Jul 27 02:14:07.525: INFO: stdout: "" + Jul 27 02:14:07.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3085 exec execpoddjxsb -- /bin/sh -x -c nc -v -z -w 2 172.21.80.21 80' + Jul 27 02:14:07.844: INFO: stderr: "+ nc -v -z -w 2 172.21.80.21 80\nConnection to 172.21.80.21 80 port [tcp/http] succeeded!\n" + Jul 27 02:14:07.844: INFO: stdout: "" + Jul 27 02:14:07.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3085 exec execpoddjxsb -- /bin/sh -x -c nc -v -z -w 2 10.245.128.17 32461' + Jul 27 02:14:08.114: INFO: stderr: "+ nc -v -z -w 2 10.245.128.17 32461\nConnection to 10.245.128.17 32461 port [tcp/*] succeeded!\n" + Jul 27 02:14:08.114: INFO: stdout: "" + Jul 27 02:14:08.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3085 exec execpoddjxsb -- /bin/sh -x -c nc -v -z -w 2 10.245.128.19 32461' + Jul 27 02:14:08.457: INFO: stderr: "+ nc -v -z -w 2 10.245.128.19 32461\nConnection to 10.245.128.19 32461 port [tcp/*] succeeded!\n" + Jul 27 02:14:08.457: INFO: stdout: "" + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:36:46.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] HostPort + Jul 27 02:14:08.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] HostPort + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] HostPort + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "hostport-8258" for this suite. 06/12/23 21:36:46.676 + STEP: Destroying namespace "services-3085" for this suite. 
07/27/23 02:14:08.486 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] ConfigMap - should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:57 -[BeforeEach] [sig-storage] ConfigMap +[sig-api-machinery] server version + should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 +[BeforeEach] [sig-api-machinery] server version set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:36:46.703 -Jun 12 21:36:46.704: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 21:36:46.707 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:36:46.78 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:36:46.792 -[BeforeEach] [sig-storage] ConfigMap +STEP: Creating a kubernetes client 07/27/23 02:14:08.518 +Jul 27 02:14:08.518: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename server-version 07/27/23 02:14:08.519 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:08.564 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:08.605 +[BeforeEach] [sig-api-machinery] server version test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:57 -STEP: Creating configMap with name configmap-test-volume-460326f1-6402-457b-b0c1-e57d17081c3e 06/12/23 21:36:46.807 -STEP: Creating a pod to test consume configMaps 06/12/23 21:36:46.828 -Jun 12 21:36:46.852: INFO: Waiting up to 5m0s for pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2" in namespace "configmap-3824" to be "Succeeded or Failed" -Jun 12 21:36:46.863: INFO: Pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.911264ms -Jun 12 21:36:48.872: INFO: Pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020563474s -Jun 12 21:36:50.899: INFO: Pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046968246s -Jun 12 21:36:52.873: INFO: Pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021419423s -STEP: Saw pod success 06/12/23 21:36:52.873 -Jun 12 21:36:52.873: INFO: Pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2" satisfied condition "Succeeded or Failed" -Jun 12 21:36:52.881: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2 container agnhost-container: -STEP: delete the pod 06/12/23 21:36:52.898 -Jun 12 21:36:52.921: INFO: Waiting for pod pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2 to disappear -Jun 12 21:36:52.929: INFO: Pod pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2 no longer exists -[AfterEach] [sig-storage] ConfigMap +[It] should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 +STEP: Request ServerVersion 07/27/23 02:14:08.676 +STEP: Confirm major version 07/27/23 02:14:08.68 +Jul 27 02:14:08.680: INFO: Major version: 1 +STEP: Confirm minor version 07/27/23 02:14:08.68 +Jul 27 02:14:08.680: INFO: cleanMinorVersion: 26 +Jul 27 02:14:08.680: INFO: Minor version: 26 +[AfterEach] [sig-api-machinery] server version test/e2e/framework/node/init/init.go:32 -Jun 12 21:36:52.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] ConfigMap +Jul 27 02:14:08.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] server version test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-api-machinery] server version dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-api-machinery] server version tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-3824" for this suite. 06/12/23 21:36:52.946 +STEP: Destroying namespace "server-version-7940" for this suite. 
07/27/23 02:14:08.716 ------------------------------ -• [SLOW TEST] [6.267 seconds] -[sig-storage] ConfigMap -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:57 +• [0.253 seconds] +[sig-api-machinery] server version +test/e2e/apimachinery/framework.go:23 + should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] ConfigMap + [BeforeEach] [sig-api-machinery] server version set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:36:46.703 - Jun 12 21:36:46.704: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 21:36:46.707 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:36:46.78 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:36:46.792 - [BeforeEach] [sig-storage] ConfigMap + STEP: Creating a kubernetes client 07/27/23 02:14:08.518 + Jul 27 02:14:08.518: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename server-version 07/27/23 02:14:08.519 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:08.564 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:08.605 + [BeforeEach] [sig-api-machinery] server version test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:57 - STEP: Creating configMap with name configmap-test-volume-460326f1-6402-457b-b0c1-e57d17081c3e 06/12/23 21:36:46.807 - STEP: Creating a pod to test consume configMaps 06/12/23 21:36:46.828 - Jun 12 21:36:46.852: INFO: Waiting up to 5m0s for pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2" in namespace "configmap-3824" to be "Succeeded or Failed" - Jun 12 21:36:46.863: INFO: Pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.911264ms - Jun 12 21:36:48.872: INFO: Pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020563474s - Jun 12 21:36:50.899: INFO: Pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046968246s - Jun 12 21:36:52.873: INFO: Pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021419423s - STEP: Saw pod success 06/12/23 21:36:52.873 - Jun 12 21:36:52.873: INFO: Pod "pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2" satisfied condition "Succeeded or Failed" - Jun 12 21:36:52.881: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2 container agnhost-container: - STEP: delete the pod 06/12/23 21:36:52.898 - Jun 12 21:36:52.921: INFO: Waiting for pod pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2 to disappear - Jun 12 21:36:52.929: INFO: Pod pod-configmaps-04f4bfe8-5650-4014-93b4-50b437d890b2 no longer exists - [AfterEach] [sig-storage] ConfigMap + [It] should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 + STEP: Request ServerVersion 07/27/23 02:14:08.676 + STEP: Confirm major version 07/27/23 02:14:08.68 + Jul 27 02:14:08.680: INFO: Major version: 1 + STEP: Confirm minor version 07/27/23 02:14:08.68 + Jul 27 02:14:08.680: INFO: cleanMinorVersion: 26 + Jul 27 02:14:08.680: INFO: Minor version: 26 + [AfterEach] [sig-api-machinery] server version test/e2e/framework/node/init/init.go:32 - Jun 12 21:36:52.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] ConfigMap + Jul 27 02:14:08.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] server version test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-api-machinery] server version dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-api-machinery] server version tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-3824" for this suite. 06/12/23 21:36:52.946 + STEP: Destroying namespace "server-version-7940" for this suite. 
07/27/23 02:14:08.716 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSS +SSSSSSSSSSSS ------------------------------ -[sig-api-machinery] ResourceQuota - should apply changes to a resourcequota status [Conformance] - test/e2e/apimachinery/resource_quota.go:1010 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-apps] Job + should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 +[BeforeEach] [sig-apps] Job set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:36:52.979 -Jun 12 21:36:52.979: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 21:36:52.981 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:36:53.04 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:36:53.069 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 02:14:08.772 +Jul 27 02:14:08.772: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename job 07/27/23 02:14:08.773 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:08.822 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:08.842 +[BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 -[It] should apply changes to a resourcequota status [Conformance] - test/e2e/apimachinery/resource_quota.go:1010 -STEP: Creating resourceQuota "e2e-rq-status-z2fs7" 06/12/23 21:36:53.097 -Jun 12 21:36:53.160: INFO: Resource quota "e2e-rq-status-z2fs7" reports spec: hard cpu limit of 500m -Jun 12 21:36:53.160: INFO: Resource quota "e2e-rq-status-z2fs7" reports spec: hard memory limit of 500Mi -STEP: Updating resourceQuota "e2e-rq-status-z2fs7" /status 06/12/23 21:36:53.161 -STEP: Confirm /status for "e2e-rq-status-z2fs7" resourceQuota via watch 06/12/23 21:36:53.193 -Jun 12 21:36:53.203: INFO: observed resourceQuota "e2e-rq-status-z2fs7" in namespace "resourcequota-4248" with hard status: v1.ResourceList(nil) -Jun 12 21:36:53.203: INFO: Found resourceQuota "e2e-rq-status-z2fs7" in namespace "resourcequota-4248" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} -Jun 12 21:36:53.203: INFO: ResourceQuota "e2e-rq-status-z2fs7" /status was updated -STEP: Patching hard spec values for cpu & memory 06/12/23 21:36:53.217 -Jun 12 21:36:53.243: INFO: Resource quota "e2e-rq-status-z2fs7" reports spec: hard cpu limit of 1 -Jun 12 21:36:53.243: INFO: Resource quota "e2e-rq-status-z2fs7" reports spec: hard memory limit of 1Gi -STEP: Patching "e2e-rq-status-z2fs7" /status 06/12/23 21:36:53.243 -STEP: Confirm /status for "e2e-rq-status-z2fs7" resourceQuota via watch 06/12/23 21:36:53.28 -Jun 12 21:36:53.286: INFO: observed resourceQuota "e2e-rq-status-z2fs7" in namespace "resourcequota-4248" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, 
s:"500Mi", Format:"BinarySI"}} -Jun 12 21:36:53.286: INFO: Found resourceQuota "e2e-rq-status-z2fs7" in namespace "resourcequota-4248" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}} -Jun 12 21:36:53.286: INFO: ResourceQuota "e2e-rq-status-z2fs7" /status was patched -STEP: Get "e2e-rq-status-z2fs7" /status 06/12/23 21:36:53.286 -Jun 12 21:36:53.312: INFO: Resourcequota "e2e-rq-status-z2fs7" reports status: hard cpu of 1 -Jun 12 21:36:53.313: INFO: Resourcequota "e2e-rq-status-z2fs7" reports status: hard memory of 1Gi -STEP: Repatching "e2e-rq-status-z2fs7" /status before checking Spec is unchanged 06/12/23 21:36:53.326 -Jun 12 21:36:53.344: INFO: Resourcequota "e2e-rq-status-z2fs7" reports status: hard cpu of 2 -Jun 12 21:36:53.344: INFO: Resourcequota "e2e-rq-status-z2fs7" reports status: hard memory of 2Gi -Jun 12 21:36:53.350: INFO: Found resourceQuota "e2e-rq-status-z2fs7" in namespace "resourcequota-4248" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2147483648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2Gi", Format:"BinarySI"}} -Jun 12 21:38:33.379: INFO: ResourceQuota "e2e-rq-status-z2fs7" Spec was unchanged and /status reset -[AfterEach] [sig-api-machinery] ResourceQuota +[It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 +STEP: Creating Indexed job 07/27/23 02:14:08.855 +STEP: Ensuring job reaches completions 07/27/23 02:14:08.874 +STEP: Ensuring pods with index for job exist 07/27/23 02:14:18.885 +[AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 -Jun 12 21:38:33.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 02:14:18.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-4248" for this suite. 06/12/23 21:38:33.398 +STEP: Destroying namespace "job-266" for this suite. 
07/27/23 02:14:18.91 ------------------------------ -• [SLOW TEST] [100.442 seconds] -[sig-api-machinery] ResourceQuota -test/e2e/apimachinery/framework.go:23 - should apply changes to a resourcequota status [Conformance] - test/e2e/apimachinery/resource_quota.go:1010 +• [SLOW TEST] [10.161 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-apps] Job set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:36:52.979 - Jun 12 21:36:52.979: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 21:36:52.981 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:36:53.04 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:36:53.069 - [BeforeEach] [sig-api-machinery] ResourceQuota + STEP: Creating a kubernetes client 07/27/23 02:14:08.772 + Jul 27 02:14:08.772: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename job 07/27/23 02:14:08.773 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:08.822 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:08.842 + [BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 - [It] should apply changes to a resourcequota status [Conformance] - test/e2e/apimachinery/resource_quota.go:1010 - STEP: Creating resourceQuota "e2e-rq-status-z2fs7" 06/12/23 21:36:53.097 - Jun 12 21:36:53.160: INFO: Resource quota "e2e-rq-status-z2fs7" reports spec: hard cpu limit of 500m - Jun 12 21:36:53.160: INFO: Resource quota "e2e-rq-status-z2fs7" reports spec: hard memory limit of 500Mi - STEP: Updating resourceQuota "e2e-rq-status-z2fs7" /status 06/12/23 21:36:53.161 - STEP: Confirm /status for "e2e-rq-status-z2fs7" resourceQuota via watch 06/12/23 21:36:53.193 - Jun 12 21:36:53.203: INFO: observed resourceQuota "e2e-rq-status-z2fs7" in namespace "resourcequota-4248" with hard status: v1.ResourceList(nil) - Jun 12 21:36:53.203: INFO: Found resourceQuota "e2e-rq-status-z2fs7" in namespace "resourcequota-4248" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} - Jun 12 21:36:53.203: INFO: ResourceQuota "e2e-rq-status-z2fs7" /status was updated - STEP: Patching hard spec values for cpu & memory 06/12/23 21:36:53.217 - Jun 12 21:36:53.243: INFO: Resource quota "e2e-rq-status-z2fs7" reports spec: hard cpu limit of 1 - Jun 12 21:36:53.243: INFO: Resource quota "e2e-rq-status-z2fs7" reports spec: hard memory limit of 1Gi - STEP: Patching "e2e-rq-status-z2fs7" /status 06/12/23 21:36:53.243 - STEP: Confirm /status for "e2e-rq-status-z2fs7" resourceQuota via watch 06/12/23 21:36:53.28 - Jun 12 21:36:53.286: INFO: observed resourceQuota "e2e-rq-status-z2fs7" in namespace "resourcequota-4248" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, 
"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} - Jun 12 21:36:53.286: INFO: Found resourceQuota "e2e-rq-status-z2fs7" in namespace "resourcequota-4248" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}} - Jun 12 21:36:53.286: INFO: ResourceQuota "e2e-rq-status-z2fs7" /status was patched - STEP: Get "e2e-rq-status-z2fs7" /status 06/12/23 21:36:53.286 - Jun 12 21:36:53.312: INFO: Resourcequota "e2e-rq-status-z2fs7" reports status: hard cpu of 1 - Jun 12 21:36:53.313: INFO: Resourcequota "e2e-rq-status-z2fs7" reports status: hard memory of 1Gi - STEP: Repatching "e2e-rq-status-z2fs7" /status before checking Spec is unchanged 06/12/23 21:36:53.326 - Jun 12 21:36:53.344: INFO: Resourcequota "e2e-rq-status-z2fs7" reports status: hard cpu of 2 - Jun 12 21:36:53.344: INFO: Resourcequota "e2e-rq-status-z2fs7" reports status: hard memory of 2Gi - Jun 12 21:36:53.350: INFO: Found resourceQuota "e2e-rq-status-z2fs7" in namespace "resourcequota-4248" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2147483648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2Gi", Format:"BinarySI"}} - Jun 12 21:38:33.379: INFO: ResourceQuota "e2e-rq-status-z2fs7" Spec was unchanged and /status reset - [AfterEach] [sig-api-machinery] ResourceQuota + [It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 + STEP: Creating Indexed job 07/27/23 02:14:08.855 + STEP: Ensuring job reaches completions 07/27/23 02:14:08.874 + STEP: Ensuring pods with index for job exist 07/27/23 02:14:18.885 + [AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 - Jun 12 21:38:33.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + Jul 27 02:14:18.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-4248" for this suite. 06/12/23 21:38:33.398 + STEP: Destroying namespace "job-266" for this suite. 
07/27/23 02:14:18.91 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSSSSSSSS ------------------------------ -[sig-node] Secrets - should be consumable via the environment [NodeConformance] [Conformance] - test/e2e/common/node/secrets.go:95 -[BeforeEach] [sig-node] Secrets +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 +[BeforeEach] [sig-network] DNS set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:38:33.423 -Jun 12 21:38:33.424: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 21:38:33.427 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:38:33.482 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:38:33.496 -[BeforeEach] [sig-node] Secrets +STEP: Creating a kubernetes client 07/27/23 02:14:18.933 +Jul 27 02:14:18.933: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename dns 07/27/23 02:14:18.934 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:18.976 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:18.985 +[BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable via the environment [NodeConformance] [Conformance] - test/e2e/common/node/secrets.go:95 -STEP: creating secret secrets-8285/secret-test-85ce032e-e517-49e4-906d-0d95544e90b8 06/12/23 21:38:33.568 -STEP: Creating a pod to test consume secrets 06/12/23 21:38:33.585 -Jun 12 21:38:33.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843" in namespace "secrets-8285" to be "Succeeded or Failed" -Jun 12 21:38:33.619: INFO: Pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843": Phase="Pending", Reason="", readiness=false. Elapsed: 8.6126ms -Jun 12 21:38:35.629: INFO: Pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01780286s -Jun 12 21:38:37.643: INFO: Pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032332957s -Jun 12 21:38:39.630: INFO: Pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019304598s -STEP: Saw pod success 06/12/23 21:38:39.63 -Jun 12 21:38:39.630: INFO: Pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843" satisfied condition "Succeeded or Failed" -Jun 12 21:38:39.670: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843 container env-test: -STEP: delete the pod 06/12/23 21:38:39.754 -Jun 12 21:38:39.823: INFO: Waiting for pod pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843 to disappear -Jun 12 21:38:39.852: INFO: Pod pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843 no longer exists -[AfterEach] [sig-node] Secrets +[It] should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
07/27/23 02:14:18.995 +Jul 27 02:14:19.020: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-7274 84eba61e-b5a7-478a-8c4b-a203a4ccf241 95721 0 2023-07-27 02:14:19 +0000 UTC map[] map[openshift.io/scc:anyuid] [] [] [{e2e.test Update v1 2023-07-27 02:14:18 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dbv8r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbv8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELi
nuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c52,c19,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:14:19.020: INFO: Waiting up to 5m0s for pod "test-dns-nameservers" in namespace "dns-7274" to be "running and ready" +Jul 27 02:14:19.027: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 7.197497ms +Jul 27 02:14:19.027: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:14:21.037: INFO: Pod "test-dns-nameservers": Phase="Running", Reason="", readiness=true. Elapsed: 2.017195159s +Jul 27 02:14:21.037: INFO: The phase of Pod test-dns-nameservers is Running (Ready = true) +Jul 27 02:14:21.037: INFO: Pod "test-dns-nameservers" satisfied condition "running and ready" +STEP: Verifying customized DNS suffix list is configured on pod... 07/27/23 02:14:21.037 +Jul 27 02:14:21.037: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7274 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:14:21.037: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:14:21.038: INFO: ExecWithOptions: Clientset creation +Jul 27 02:14:21.038: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/dns-7274/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +STEP: Verifying customized DNS server is configured on pod... 
07/27/23 02:14:21.166 +Jul 27 02:14:21.166: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7274 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:14:21.166: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:14:21.167: INFO: ExecWithOptions: Clientset creation +Jul 27 02:14:21.167: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/dns-7274/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jul 27 02:14:21.300: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 -Jun 12 21:38:39.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Secrets +Jul 27 02:14:21.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Secrets +[DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Secrets +[DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-8285" for this suite. 06/12/23 21:38:39.887 +STEP: Destroying namespace "dns-7274" for this suite. 07/27/23 02:14:21.339 ------------------------------ -• [SLOW TEST] [6.487 seconds] -[sig-node] Secrets -test/e2e/common/node/framework.go:23 - should be consumable via the environment [NodeConformance] [Conformance] - test/e2e/common/node/secrets.go:95 +• [2.431 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Secrets + [BeforeEach] [sig-network] DNS set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:38:33.423 - Jun 12 21:38:33.424: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 21:38:33.427 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:38:33.482 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:38:33.496 - [BeforeEach] [sig-node] Secrets + STEP: Creating a kubernetes client 07/27/23 02:14:18.933 + Jul 27 02:14:18.933: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename dns 07/27/23 02:14:18.934 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:18.976 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:18.985 + [BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable via the environment [NodeConformance] [Conformance] - test/e2e/common/node/secrets.go:95 - STEP: creating secret secrets-8285/secret-test-85ce032e-e517-49e4-906d-0d95544e90b8 06/12/23 21:38:33.568 - STEP: Creating a pod to test consume secrets 06/12/23 21:38:33.585 - Jun 12 21:38:33.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843" in namespace "secrets-8285" to be "Succeeded or Failed" - Jun 12 21:38:33.619: INFO: Pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.6126ms - Jun 12 21:38:35.629: INFO: Pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01780286s - Jun 12 21:38:37.643: INFO: Pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032332957s - Jun 12 21:38:39.630: INFO: Pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019304598s - STEP: Saw pod success 06/12/23 21:38:39.63 - Jun 12 21:38:39.630: INFO: Pod "pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843" satisfied condition "Succeeded or Failed" - Jun 12 21:38:39.670: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843 container env-test: - STEP: delete the pod 06/12/23 21:38:39.754 - Jun 12 21:38:39.823: INFO: Waiting for pod pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843 to disappear - Jun 12 21:38:39.852: INFO: Pod pod-configmaps-83ecc188-5e4f-46f7-8dad-15695f4a9843 no longer exists - [AfterEach] [sig-node] Secrets + [It] should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 + STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 07/27/23 02:14:18.995 + Jul 27 02:14:19.020: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-7274 84eba61e-b5a7-478a-8c4b-a203a4ccf241 95721 0 2023-07-27 02:14:19 +0000 UTC map[] map[openshift.io/scc:anyuid] [] [] [{e2e.test Update v1 2023-07-27 02:14:18 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dbv8r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbv8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c52,c19,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{N
ameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:14:19.020: INFO: Waiting up to 5m0s for pod "test-dns-nameservers" in namespace "dns-7274" to be "running and ready" + Jul 27 02:14:19.027: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 7.197497ms + Jul 27 02:14:19.027: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:14:21.037: INFO: Pod "test-dns-nameservers": Phase="Running", Reason="", readiness=true. Elapsed: 2.017195159s + Jul 27 02:14:21.037: INFO: The phase of Pod test-dns-nameservers is Running (Ready = true) + Jul 27 02:14:21.037: INFO: Pod "test-dns-nameservers" satisfied condition "running and ready" + STEP: Verifying customized DNS suffix list is configured on pod... 07/27/23 02:14:21.037 + Jul 27 02:14:21.037: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7274 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:14:21.037: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:14:21.038: INFO: ExecWithOptions: Clientset creation + Jul 27 02:14:21.038: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/dns-7274/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + STEP: Verifying customized DNS server is configured on pod... 07/27/23 02:14:21.166 + Jul 27 02:14:21.166: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7274 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:14:21.166: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:14:21.167: INFO: ExecWithOptions: Clientset creation + Jul 27 02:14:21.167: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/dns-7274/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jul 27 02:14:21.300: INFO: Deleting pod test-dns-nameservers... 
+ [AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 - Jun 12 21:38:39.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Secrets + Jul 27 02:14:21.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Secrets + [DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Secrets + [DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-8285" for this suite. 06/12/23 21:38:39.887 + STEP: Destroying namespace "dns-7274" for this suite. 07/27/23 02:14:21.339 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +S ------------------------------ -[sig-node] PodTemplates - should run the lifecycle of PodTemplates [Conformance] - test/e2e/common/node/podtemplates.go:53 -[BeforeEach] [sig-node] PodTemplates +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:38:39.92 -Jun 12 21:38:39.920: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename podtemplate 06/12/23 21:38:39.923 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:38:40.014 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:38:40.032 -[BeforeEach] [sig-node] PodTemplates +STEP: Creating a kubernetes client 07/27/23 02:14:21.365 +Jul 27 02:14:21.365: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:14:21.366 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:21.417 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:21.426 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[It] should run the lifecycle of PodTemplates [Conformance] - test/e2e/common/node/podtemplates.go:53 -[AfterEach] [sig-node] PodTemplates +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 +STEP: Creating a pod to test emptydir 0666 on node default medium 07/27/23 02:14:21.437 +W0727 02:14:21.466799 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:14:21.467: INFO: Waiting up to 5m0s for pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5" in namespace "emptydir-6848" to be "Succeeded or Failed" +Jul 27 02:14:21.520: INFO: Pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 53.475302ms +Jul 27 02:14:23.539: INFO: Pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.07243031s +Jul 27 02:14:25.530: INFO: Pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063738643s +Jul 27 02:14:27.535: INFO: Pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067980784s +STEP: Saw pod success 07/27/23 02:14:27.535 +Jul 27 02:14:27.535: INFO: Pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5" satisfied condition "Succeeded or Failed" +Jul 27 02:14:27.547: INFO: Trying to get logs from node 10.245.128.19 pod pod-5b621cc0-ad45-4967-9269-39e4a052b6a5 container test-container: +STEP: delete the pod 07/27/23 02:14:27.649 +Jul 27 02:14:27.672: INFO: Waiting for pod pod-5b621cc0-ad45-4967-9269-39e4a052b6a5 to disappear +Jul 27 02:14:27.681: INFO: Pod pod-5b621cc0-ad45-4967-9269-39e4a052b6a5 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:38:40.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] PodTemplates +Jul 27 02:14:27.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] PodTemplates +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] PodTemplates +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "podtemplate-490" for this suite. 06/12/23 21:38:40.219 +STEP: Destroying namespace "emptydir-6848" for this suite. 07/27/23 02:14:27.696 ------------------------------ -• [0.322 seconds] -[sig-node] PodTemplates -test/e2e/common/node/framework.go:23 - should run the lifecycle of PodTemplates [Conformance] - test/e2e/common/node/podtemplates.go:53 +• [SLOW TEST] [6.356 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] PodTemplates + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:38:39.92 - Jun 12 21:38:39.920: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename podtemplate 06/12/23 21:38:39.923 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:38:40.014 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:38:40.032 - [BeforeEach] [sig-node] PodTemplates + STEP: Creating a kubernetes client 07/27/23 02:14:21.365 + Jul 27 02:14:21.365: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:14:21.366 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:21.417 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:21.426 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [It] should run the lifecycle of PodTemplates [Conformance] - test/e2e/common/node/podtemplates.go:53 - [AfterEach] [sig-node] PodTemplates + [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 + STEP: Creating a pod to test emptydir 0666 on node default medium 
07/27/23 02:14:21.437 + W0727 02:14:21.466799 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:14:21.467: INFO: Waiting up to 5m0s for pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5" in namespace "emptydir-6848" to be "Succeeded or Failed" + Jul 27 02:14:21.520: INFO: Pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 53.475302ms + Jul 27 02:14:23.539: INFO: Pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07243031s + Jul 27 02:14:25.530: INFO: Pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063738643s + Jul 27 02:14:27.535: INFO: Pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067980784s + STEP: Saw pod success 07/27/23 02:14:27.535 + Jul 27 02:14:27.535: INFO: Pod "pod-5b621cc0-ad45-4967-9269-39e4a052b6a5" satisfied condition "Succeeded or Failed" + Jul 27 02:14:27.547: INFO: Trying to get logs from node 10.245.128.19 pod pod-5b621cc0-ad45-4967-9269-39e4a052b6a5 container test-container: + STEP: delete the pod 07/27/23 02:14:27.649 + Jul 27 02:14:27.672: INFO: Waiting for pod pod-5b621cc0-ad45-4967-9269-39e4a052b6a5 to disappear + Jul 27 02:14:27.681: INFO: Pod pod-5b621cc0-ad45-4967-9269-39e4a052b6a5 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:38:40.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] PodTemplates + Jul 27 02:14:27.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] PodTemplates + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] PodTemplates + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "podtemplate-490" for this suite. 06/12/23 21:38:40.219 + STEP: Destroying namespace "emptydir-6848" for this suite. 
07/27/23 02:14:27.696 << End Captured GinkgoWriter Output ------------------------------ -S ------------------------------- -[sig-scheduling] LimitRange - should list, patch and delete a LimitRange by collection [Conformance] - test/e2e/scheduling/limit_range.go:239 -[BeforeEach] [sig-scheduling] LimitRange +[sig-network] DNS + should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 +[BeforeEach] [sig-network] DNS set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:38:40.243 -Jun 12 21:38:40.243: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename limitrange 06/12/23 21:38:40.245 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:38:40.293 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:38:40.308 -[BeforeEach] [sig-scheduling] LimitRange +STEP: Creating a kubernetes client 07/27/23 02:14:27.721 +Jul 27 02:14:27.721: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename dns 07/27/23 02:14:27.722 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:27.766 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:27.774 +[BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 -[It] should list, patch and delete a LimitRange by collection [Conformance] - test/e2e/scheduling/limit_range.go:239 -STEP: Creating LimitRange "e2e-limitrange-6zdrp" in namespace "limitrange-5010" 06/12/23 21:38:40.321 -STEP: Creating another limitRange in another namespace 06/12/23 21:38:40.337 -Jun 12 21:38:40.403: INFO: Namespace "e2e-limitrange-6zdrp-2390" created -Jun 12 21:38:40.403: INFO: Creating LimitRange "e2e-limitrange-6zdrp" in namespace "e2e-limitrange-6zdrp-2390" -STEP: Listing all LimitRanges with label "e2e-test=e2e-limitrange-6zdrp" 06/12/23 21:38:40.426 -Jun 12 21:38:40.441: INFO: Found 2 limitRanges -STEP: Patching LimitRange "e2e-limitrange-6zdrp" in "limitrange-5010" namespace 06/12/23 21:38:40.441 -Jun 12 21:38:40.505: INFO: LimitRange "e2e-limitrange-6zdrp" has been patched -STEP: Delete LimitRange "e2e-limitrange-6zdrp" by Collection with labelSelector: "e2e-limitrange-6zdrp=patched" 06/12/23 21:38:40.505 -STEP: Confirm that the limitRange "e2e-limitrange-6zdrp" has been deleted 06/12/23 21:38:40.592 -Jun 12 21:38:40.592: INFO: Requesting list of LimitRange to confirm quantity -Jun 12 21:38:40.603: INFO: Found 0 LimitRange with label "e2e-limitrange-6zdrp=patched" -Jun 12 21:38:40.603: INFO: LimitRange "e2e-limitrange-6zdrp" has been deleted. 
-STEP: Confirm that a single LimitRange still exists with label "e2e-test=e2e-limitrange-6zdrp" 06/12/23 21:38:40.603 -Jun 12 21:38:40.616: INFO: Found 1 limitRange -[AfterEach] [sig-scheduling] LimitRange +[It] should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5771.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5771.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + 07/27/23 02:14:27.783 +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5771.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5771.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + 07/27/23 02:14:27.784 +STEP: creating a pod to probe /etc/hosts 07/27/23 02:14:27.784 +STEP: submitting the pod to kubernetes 07/27/23 02:14:27.784 +Jul 27 02:14:27.815: INFO: Waiting up to 15m0s for pod "dns-test-3cc3f71c-09f8-4296-9d73-e570b619e4d6" in namespace "dns-5771" to be "running" +Jul 27 02:14:27.825: INFO: Pod "dns-test-3cc3f71c-09f8-4296-9d73-e570b619e4d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.281689ms +Jul 27 02:14:29.834: INFO: Pod "dns-test-3cc3f71c-09f8-4296-9d73-e570b619e4d6": Phase="Running", Reason="", readiness=true. Elapsed: 2.018575357s +Jul 27 02:14:29.834: INFO: Pod "dns-test-3cc3f71c-09f8-4296-9d73-e570b619e4d6" satisfied condition "running" +STEP: retrieving the pod 07/27/23 02:14:29.834 +STEP: looking for the results for each expected name from probers 07/27/23 02:14:29.842 +Jul 27 02:14:29.925: INFO: DNS probes using dns-5771/dns-test-3cc3f71c-09f8-4296-9d73-e570b619e4d6 succeeded + +STEP: deleting the pod 07/27/23 02:14:29.925 +[AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 -Jun 12 21:38:40.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-scheduling] LimitRange +Jul 27 02:14:29.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-scheduling] LimitRange +[DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-scheduling] LimitRange +[DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 -STEP: Destroying namespace "limitrange-5010" for this suite. 06/12/23 21:38:40.627 -STEP: Destroying namespace "e2e-limitrange-6zdrp-2390" for this suite. 06/12/23 21:38:40.65 +STEP: Destroying namespace "dns-5771" for this suite. 
07/27/23 02:14:29.96 ------------------------------ -• [0.432 seconds] -[sig-scheduling] LimitRange -test/e2e/scheduling/framework.go:40 - should list, patch and delete a LimitRange by collection [Conformance] - test/e2e/scheduling/limit_range.go:239 +• [2.261 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-scheduling] LimitRange + [BeforeEach] [sig-network] DNS set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:38:40.243 - Jun 12 21:38:40.243: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename limitrange 06/12/23 21:38:40.245 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:38:40.293 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:38:40.308 - [BeforeEach] [sig-scheduling] LimitRange + STEP: Creating a kubernetes client 07/27/23 02:14:27.721 + Jul 27 02:14:27.721: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename dns 07/27/23 02:14:27.722 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:27.766 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:27.774 + [BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 - [It] should list, patch and delete a LimitRange by collection [Conformance] - test/e2e/scheduling/limit_range.go:239 - STEP: Creating LimitRange "e2e-limitrange-6zdrp" in namespace "limitrange-5010" 06/12/23 21:38:40.321 - STEP: Creating another limitRange in another namespace 06/12/23 21:38:40.337 - Jun 12 21:38:40.403: INFO: Namespace "e2e-limitrange-6zdrp-2390" created - Jun 12 21:38:40.403: INFO: Creating LimitRange "e2e-limitrange-6zdrp" in namespace "e2e-limitrange-6zdrp-2390" - STEP: Listing all LimitRanges with label "e2e-test=e2e-limitrange-6zdrp" 06/12/23 21:38:40.426 - Jun 12 21:38:40.441: INFO: Found 2 limitRanges - STEP: Patching LimitRange "e2e-limitrange-6zdrp" in "limitrange-5010" namespace 06/12/23 21:38:40.441 - Jun 12 21:38:40.505: INFO: LimitRange "e2e-limitrange-6zdrp" has been patched - STEP: Delete LimitRange "e2e-limitrange-6zdrp" by Collection with labelSelector: "e2e-limitrange-6zdrp=patched" 06/12/23 21:38:40.505 - STEP: Confirm that the limitRange "e2e-limitrange-6zdrp" has been deleted 06/12/23 21:38:40.592 - Jun 12 21:38:40.592: INFO: Requesting list of LimitRange to confirm quantity - Jun 12 21:38:40.603: INFO: Found 0 LimitRange with label "e2e-limitrange-6zdrp=patched" - Jun 12 21:38:40.603: INFO: LimitRange "e2e-limitrange-6zdrp" has been deleted. 
- STEP: Confirm that a single LimitRange still exists with label "e2e-test=e2e-limitrange-6zdrp" 06/12/23 21:38:40.603 - Jun 12 21:38:40.616: INFO: Found 1 limitRange - [AfterEach] [sig-scheduling] LimitRange + [It] should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5771.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5771.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + 07/27/23 02:14:27.783 + STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5771.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5771.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + 07/27/23 02:14:27.784 + STEP: creating a pod to probe /etc/hosts 07/27/23 02:14:27.784 + STEP: submitting the pod to kubernetes 07/27/23 02:14:27.784 + Jul 27 02:14:27.815: INFO: Waiting up to 15m0s for pod "dns-test-3cc3f71c-09f8-4296-9d73-e570b619e4d6" in namespace "dns-5771" to be "running" + Jul 27 02:14:27.825: INFO: Pod "dns-test-3cc3f71c-09f8-4296-9d73-e570b619e4d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.281689ms + Jul 27 02:14:29.834: INFO: Pod "dns-test-3cc3f71c-09f8-4296-9d73-e570b619e4d6": Phase="Running", Reason="", readiness=true. Elapsed: 2.018575357s + Jul 27 02:14:29.834: INFO: Pod "dns-test-3cc3f71c-09f8-4296-9d73-e570b619e4d6" satisfied condition "running" + STEP: retrieving the pod 07/27/23 02:14:29.834 + STEP: looking for the results for each expected name from probers 07/27/23 02:14:29.842 + Jul 27 02:14:29.925: INFO: DNS probes using dns-5771/dns-test-3cc3f71c-09f8-4296-9d73-e570b619e4d6 succeeded + + STEP: deleting the pod 07/27/23 02:14:29.925 + [AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 - Jun 12 21:38:40.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-scheduling] LimitRange + Jul 27 02:14:29.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-scheduling] LimitRange + [DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-scheduling] LimitRange + [DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 - STEP: Destroying namespace "limitrange-5010" for this suite. 06/12/23 21:38:40.627 - STEP: Destroying namespace "e2e-limitrange-6zdrp-2390" for this suite. 06/12/23 21:38:40.65 + STEP: Destroying namespace "dns-5771" for this suite. 
07/27/23 02:14:29.96 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] Networking Granular Checks: Pods - should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:122 -[BeforeEach] [sig-network] Networking +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 +[BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:38:40.677 -Jun 12 21:38:40.678: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pod-network-test 06/12/23 21:38:40.679 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:38:40.772 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:38:40.785 -[BeforeEach] [sig-network] Networking +STEP: Creating a kubernetes client 07/27/23 02:14:29.982 +Jul 27 02:14:29.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:14:29.984 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:30.027 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:30.036 +[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 -[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:122 -STEP: Performing setup for networking test in namespace pod-network-test-7786 06/12/23 21:38:40.8 -STEP: creating a selector 06/12/23 21:38:40.801 -STEP: Creating the service pods in kubernetes 06/12/23 21:38:40.801 -Jun 12 21:38:40.801: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable -Jun 12 21:38:40.867: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-7786" to be "running and ready" -Jun 12 21:38:40.884: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.313646ms -Jun 12 21:38:40.884: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:38:42.900: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032746299s -Jun 12 21:38:42.900: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:38:44.895: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.027341981s -Jun 12 21:38:44.895: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:38:46.895: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.027902849s -Jun 12 21:38:46.901: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:38:48.893: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.025821598s -Jun 12 21:38:48.893: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:38:50.894: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.026633501s -Jun 12 21:38:50.894: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:38:52.894: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.02690558s -Jun 12 21:38:52.894: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:38:54.894: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.02726484s -Jun 12 21:38:54.895: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:38:56.894: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.026518404s -Jun 12 21:38:56.894: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:38:58.895: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.027769429s -Jun 12 21:38:58.895: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:39:00.897: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.029354114s -Jun 12 21:39:00.897: INFO: The phase of Pod netserver-0 is Running (Ready = false) -Jun 12 21:39:02.896: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.028639449s -Jun 12 21:39:02.896: INFO: The phase of Pod netserver-0 is Running (Ready = true) -Jun 12 21:39:02.896: INFO: Pod "netserver-0" satisfied condition "running and ready" -Jun 12 21:39:02.906: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-7786" to be "running and ready" -Jun 12 21:39:02.915: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.998384ms -Jun 12 21:39:02.915: INFO: The phase of Pod netserver-1 is Running (Ready = true) -Jun 12 21:39:02.915: INFO: Pod "netserver-1" satisfied condition "running and ready" -Jun 12 21:39:02.924: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-7786" to be "running and ready" -Jun 12 21:39:02.933: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 9.071935ms -Jun 12 21:39:02.933: INFO: The phase of Pod netserver-2 is Running (Ready = true) -Jun 12 21:39:02.933: INFO: Pod "netserver-2" satisfied condition "running and ready" -STEP: Creating test pods 06/12/23 21:39:02.941 -Jun 12 21:39:02.992: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-7786" to be "running" -Jun 12 21:39:03.007: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 15.285096ms -Jun 12 21:39:05.017: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025565946s -Jun 12 21:39:07.017: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.025386293s -Jun 12 21:39:07.017: INFO: Pod "test-container-pod" satisfied condition "running" -Jun 12 21:39:07.025: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-7786" to be "running" -Jun 12 21:39:07.034: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.650692ms -Jun 12 21:39:07.034: INFO: Pod "host-test-container-pod" satisfied condition "running" -Jun 12 21:39:07.043: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 -Jun 12 21:39:07.043: INFO: Going to poll 172.30.161.104 on port 8081 at least 0 times, with a maximum of 39 tries before failing -Jun 12 21:39:07.051: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.30.161.104 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7786 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:39:07.051: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:39:07.053: INFO: ExecWithOptions: Clientset creation -Jun 12 21:39:07.053: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-7786/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.30.161.104+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) -Jun 12 21:39:08.733: INFO: Found all 1 expected endpoints: [netserver-0] -Jun 12 21:39:08.733: INFO: Going to poll 172.30.185.74 on port 8081 at least 0 times, with a maximum of 39 tries before failing -Jun 12 21:39:08.742: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.30.185.74 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7786 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:39:08.742: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:39:08.744: INFO: ExecWithOptions: Clientset creation -Jun 12 21:39:08.744: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-7786/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.30.185.74+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) -Jun 12 21:39:10.148: INFO: Found all 1 expected endpoints: [netserver-1] -Jun 12 21:39:10.148: INFO: Going to poll 172.30.224.13 on port 8081 at least 0 times, with a maximum of 39 tries before failing -Jun 12 21:39:10.165: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.30.224.13 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7786 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 21:39:10.165: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:39:10.167: INFO: ExecWithOptions: Clientset creation -Jun 12 21:39:10.167: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-7786/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.30.224.13+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) -Jun 12 21:39:12.028: INFO: Found all 1 expected endpoints: [netserver-2] -[AfterEach] [sig-network] Networking +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 +STEP: Creating a pod to test downward API volume plugin 
07/27/23 02:14:30.074 +Jul 27 02:14:30.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4" in namespace "projected-2711" to be "Succeeded or Failed" +Jul 27 02:14:30.120: INFO: Pod "downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.585946ms +Jul 27 02:14:32.132: INFO: Pod "downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031838197s +Jul 27 02:14:34.129: INFO: Pod "downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028007331s +STEP: Saw pod success 07/27/23 02:14:34.129 +Jul 27 02:14:34.129: INFO: Pod "downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4" satisfied condition "Succeeded or Failed" +Jul 27 02:14:34.136: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4 container client-container: +STEP: delete the pod 07/27/23 02:14:34.153 +Jul 27 02:14:34.175: INFO: Waiting for pod downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4 to disappear +Jul 27 02:14:34.183: INFO: Pod downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 21:39:12.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Networking +Jul 27 02:14:34.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Networking +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Networking +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "pod-network-test-7786" for this suite. 06/12/23 21:39:12.246 +STEP: Destroying namespace "projected-2711" for this suite. 
07/27/23 02:14:34.195 ------------------------------ -• [SLOW TEST] [31.600 seconds] -[sig-network] Networking -test/e2e/common/network/framework.go:23 - Granular Checks: Pods - test/e2e/common/network/networking.go:32 - should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:122 +• [4.240 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Networking + [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:38:40.677 - Jun 12 21:38:40.678: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pod-network-test 06/12/23 21:38:40.679 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:38:40.772 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:38:40.785 - [BeforeEach] [sig-network] Networking + STEP: Creating a kubernetes client 07/27/23 02:14:29.982 + Jul 27 02:14:29.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:14:29.984 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:30.027 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:30.036 + [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 - [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/network/networking.go:122 - STEP: Performing setup for networking test in namespace pod-network-test-7786 06/12/23 21:38:40.8 - STEP: creating a selector 06/12/23 21:38:40.801 - STEP: Creating the service pods in kubernetes 06/12/23 21:38:40.801 - Jun 12 21:38:40.801: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable - Jun 12 21:38:40.867: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-7786" to be "running and ready" - Jun 12 21:38:40.884: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.313646ms - Jun 12 21:38:40.884: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:38:42.900: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032746299s - Jun 12 21:38:42.900: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:38:44.895: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.027341981s - Jun 12 21:38:44.895: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:38:46.895: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.027902849s - Jun 12 21:38:46.901: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:38:48.893: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.025821598s - Jun 12 21:38:48.893: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:38:50.894: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.026633501s - Jun 12 21:38:50.894: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:38:52.894: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.02690558s - Jun 12 21:38:52.894: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:38:54.894: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.02726484s - Jun 12 21:38:54.895: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:38:56.894: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.026518404s - Jun 12 21:38:56.894: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:38:58.895: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.027769429s - Jun 12 21:38:58.895: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:39:00.897: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.029354114s - Jun 12 21:39:00.897: INFO: The phase of Pod netserver-0 is Running (Ready = false) - Jun 12 21:39:02.896: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.028639449s - Jun 12 21:39:02.896: INFO: The phase of Pod netserver-0 is Running (Ready = true) - Jun 12 21:39:02.896: INFO: Pod "netserver-0" satisfied condition "running and ready" - Jun 12 21:39:02.906: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-7786" to be "running and ready" - Jun 12 21:39:02.915: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.998384ms - Jun 12 21:39:02.915: INFO: The phase of Pod netserver-1 is Running (Ready = true) - Jun 12 21:39:02.915: INFO: Pod "netserver-1" satisfied condition "running and ready" - Jun 12 21:39:02.924: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-7786" to be "running and ready" - Jun 12 21:39:02.933: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 9.071935ms - Jun 12 21:39:02.933: INFO: The phase of Pod netserver-2 is Running (Ready = true) - Jun 12 21:39:02.933: INFO: Pod "netserver-2" satisfied condition "running and ready" - STEP: Creating test pods 06/12/23 21:39:02.941 - Jun 12 21:39:02.992: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-7786" to be "running" - Jun 12 21:39:03.007: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 15.285096ms - Jun 12 21:39:05.017: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025565946s - Jun 12 21:39:07.017: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.025386293s - Jun 12 21:39:07.017: INFO: Pod "test-container-pod" satisfied condition "running" - Jun 12 21:39:07.025: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-7786" to be "running" - Jun 12 21:39:07.034: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.650692ms - Jun 12 21:39:07.034: INFO: Pod "host-test-container-pod" satisfied condition "running" - Jun 12 21:39:07.043: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 - Jun 12 21:39:07.043: INFO: Going to poll 172.30.161.104 on port 8081 at least 0 times, with a maximum of 39 tries before failing - Jun 12 21:39:07.051: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.30.161.104 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7786 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:39:07.051: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:39:07.053: INFO: ExecWithOptions: Clientset creation - Jun 12 21:39:07.053: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-7786/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.30.161.104+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) - Jun 12 21:39:08.733: INFO: Found all 1 expected endpoints: [netserver-0] - Jun 12 21:39:08.733: INFO: Going to poll 172.30.185.74 on port 8081 at least 0 times, with a maximum of 39 tries before failing - Jun 12 21:39:08.742: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.30.185.74 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7786 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:39:08.742: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:39:08.744: INFO: ExecWithOptions: Clientset creation - Jun 12 21:39:08.744: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-7786/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.30.185.74+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) - Jun 12 21:39:10.148: INFO: Found all 1 expected endpoints: [netserver-1] - Jun 12 21:39:10.148: INFO: Going to poll 172.30.224.13 on port 8081 at least 0 times, with a maximum of 39 tries before failing - Jun 12 21:39:10.165: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.30.224.13 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7786 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 21:39:10.165: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:39:10.167: INFO: ExecWithOptions: Clientset creation - Jun 12 21:39:10.167: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-7786/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.30.224.13+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) - Jun 12 21:39:12.028: INFO: Found all 1 expected endpoints: [netserver-2] - [AfterEach] [sig-network] Networking + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 + STEP: Creating a pod to test downward 
API volume plugin 07/27/23 02:14:30.074 + Jul 27 02:14:30.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4" in namespace "projected-2711" to be "Succeeded or Failed" + Jul 27 02:14:30.120: INFO: Pod "downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.585946ms + Jul 27 02:14:32.132: INFO: Pod "downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031838197s + Jul 27 02:14:34.129: INFO: Pod "downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028007331s + STEP: Saw pod success 07/27/23 02:14:34.129 + Jul 27 02:14:34.129: INFO: Pod "downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4" satisfied condition "Succeeded or Failed" + Jul 27 02:14:34.136: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4 container client-container: + STEP: delete the pod 07/27/23 02:14:34.153 + Jul 27 02:14:34.175: INFO: Waiting for pod downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4 to disappear + Jul 27 02:14:34.183: INFO: Pod downwardapi-volume-3727561f-3e7c-49a0-b1f2-8125bfd162f4 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 21:39:12.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Networking + Jul 27 02:14:34.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Networking + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Networking + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "pod-network-test-7786" for this suite. 06/12/23 21:39:12.246 + STEP: Destroying namespace "projected-2711" for this suite. 
07/27/23 02:14:34.195 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSS ------------------------------ -[sig-apps] Daemon set [Serial] - should run and stop complex daemon [Conformance] - test/e2e/apps/daemon_set.go:194 -[BeforeEach] [sig-apps] Daemon set [Serial] +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 +[BeforeEach] [sig-node] Pods Extended set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:39:12.282 -Jun 12 21:39:12.295: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename daemonsets 06/12/23 21:39:12.301 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:39:12.544 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:39:12.652 -[BeforeEach] [sig-apps] Daemon set [Serial] +STEP: Creating a kubernetes client 07/27/23 02:14:34.223 +Jul 27 02:14:34.223: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 02:14:34.224 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:34.283 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:34.297 +[BeforeEach] [sig-node] Pods Extended test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 -[It] should run and stop complex daemon [Conformance] - test/e2e/apps/daemon_set.go:194 -Jun 12 21:39:13.024: INFO: Creating daemon "daemon-set" with a node selector -STEP: Initially, daemon pods should not be running on any nodes. 06/12/23 21:39:13.049 -Jun 12 21:39:13.069: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:13.069: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set -STEP: Change node label to blue, check that daemon pod is launched. 
06/12/23 21:39:13.069 -Jun 12 21:39:13.229: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:13.229: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:14.242: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:14.243: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:15.241: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:15.241: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:16.270: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:16.270: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:17.239: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:17.239: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:18.238: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 -Jun 12 21:39:18.238: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set -STEP: Update the node label to green, and wait for daemons to be unscheduled 06/12/23 21:39:18.25 -Jun 12 21:39:18.304: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 -Jun 12 21:39:18.304: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set -Jun 12 21:39:19.315: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:19.315: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set -STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 06/12/23 21:39:19.315 -Jun 12 21:39:19.348: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:19.348: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:20.358: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:20.358: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:21.368: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:21.368: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:22.377: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:22.377: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:23.373: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:23.373: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:24.360: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:24.360: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 21:39:25.358: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 -Jun 12 21:39:25.358: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set -[AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 -STEP: Deleting DaemonSet "daemon-set" 06/12/23 21:39:25.381 -STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1721, will wait for the garbage collector to delete the pods 06/12/23 21:39:25.381 -Jun 12 21:39:25.464: INFO: Deleting DaemonSet.extensions daemon-set took: 20.62057ms -Jun 12 21:39:25.565: INFO: Terminating DaemonSet.extensions 
daemon-set pods took: 101.097824ms -Jun 12 21:39:29.175: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 21:39:29.175: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set -Jun 12 21:39:29.186: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"111006"},"items":null} - -Jun 12 21:39:29.194: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"111006"},"items":null} - -[AfterEach] [sig-apps] Daemon set [Serial] +[BeforeEach] Pods Set QOS Class + test/e2e/node/pods.go:152 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 +STEP: creating the pod 07/27/23 02:14:34.312 +STEP: submitting the pod to kubernetes 07/27/23 02:14:34.312 +W0727 02:14:34.344843 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "agnhost" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "agnhost" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: verifying QOS class is set on the pod 07/27/23 02:14:34.344 +[AfterEach] [sig-node] Pods Extended test/e2e/framework/node/init/init.go:32 -Jun 12 21:39:29.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +Jul 27 02:14:34.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods Extended test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +[DeferCleanup (Each)] [sig-node] Pods Extended dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +[DeferCleanup (Each)] [sig-node] Pods Extended tear down framework | framework.go:193 -STEP: Destroying namespace "daemonsets-1721" for this suite. 06/12/23 21:39:29.286 +STEP: Destroying namespace "pods-122" for this suite. 
07/27/23 02:14:34.369 ------------------------------ -• [SLOW TEST] [17.029 seconds] -[sig-apps] Daemon set [Serial] -test/e2e/apps/framework.go:23 - should run and stop complex daemon [Conformance] - test/e2e/apps/daemon_set.go:194 +• [0.179 seconds] +[sig-node] Pods Extended +test/e2e/node/framework.go:23 + Pods Set QOS Class + test/e2e/node/pods.go:150 + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Daemon set [Serial] + [BeforeEach] [sig-node] Pods Extended set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:39:12.282 - Jun 12 21:39:12.295: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename daemonsets 06/12/23 21:39:12.301 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:39:12.544 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:39:12.652 - [BeforeEach] [sig-apps] Daemon set [Serial] + STEP: Creating a kubernetes client 07/27/23 02:14:34.223 + Jul 27 02:14:34.223: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 02:14:34.224 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:34.283 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:34.297 + [BeforeEach] [sig-node] Pods Extended test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 - [It] should run and stop complex daemon [Conformance] - test/e2e/apps/daemon_set.go:194 - Jun 12 21:39:13.024: INFO: Creating daemon "daemon-set" with a node selector - STEP: Initially, daemon pods should not be running on any nodes. 06/12/23 21:39:13.049 - Jun 12 21:39:13.069: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:13.069: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set - STEP: Change node label to blue, check that daemon pod is launched. 
06/12/23 21:39:13.069 - Jun 12 21:39:13.229: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:13.229: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:14.242: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:14.243: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:15.241: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:15.241: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:16.270: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:16.270: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:17.239: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:17.239: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:18.238: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 - Jun 12 21:39:18.238: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set - STEP: Update the node label to green, and wait for daemons to be unscheduled 06/12/23 21:39:18.25 - Jun 12 21:39:18.304: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 - Jun 12 21:39:18.304: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set - Jun 12 21:39:19.315: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:19.315: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set - STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 06/12/23 21:39:19.315 - Jun 12 21:39:19.348: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:19.348: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:20.358: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:20.358: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:21.368: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:21.368: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:22.377: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:22.377: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:23.373: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:23.373: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:24.360: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:24.360: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 21:39:25.358: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 - Jun 12 21:39:25.358: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set - [AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 - STEP: Deleting DaemonSet "daemon-set" 06/12/23 21:39:25.381 - STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1721, will wait for the garbage collector to delete the pods 06/12/23 21:39:25.381 - Jun 12 21:39:25.464: INFO: Deleting DaemonSet.extensions daemon-set took: 20.62057ms - Jun 12 21:39:25.565: 
INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.097824ms - Jun 12 21:39:29.175: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 21:39:29.175: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set - Jun 12 21:39:29.186: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"111006"},"items":null} - - Jun 12 21:39:29.194: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"111006"},"items":null} - - [AfterEach] [sig-apps] Daemon set [Serial] + [BeforeEach] Pods Set QOS Class + test/e2e/node/pods.go:152 + [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 + STEP: creating the pod 07/27/23 02:14:34.312 + STEP: submitting the pod to kubernetes 07/27/23 02:14:34.312 + W0727 02:14:34.344843 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "agnhost" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "agnhost" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: verifying QOS class is set on the pod 07/27/23 02:14:34.344 + [AfterEach] [sig-node] Pods Extended test/e2e/framework/node/init/init.go:32 - Jun 12 21:39:29.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + Jul 27 02:14:34.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods Extended test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-node] Pods Extended dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-node] Pods Extended tear down framework | framework.go:193 - STEP: Destroying namespace "daemonsets-1721" for this suite. 06/12/23 21:39:29.286 + STEP: Destroying namespace "pods-122" for this suite. 
07/27/23 02:14:34.369 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Subpath Atomic writer volumes - should support subpaths with projected pod [Conformance] - test/e2e/storage/subpath.go:106 -[BeforeEach] [sig-storage] Subpath +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 +[BeforeEach] [sig-network] EndpointSliceMirroring set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:39:29.325 -Jun 12 21:39:29.326: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename subpath 06/12/23 21:39:29.332 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:39:29.455 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:39:29.478 -[BeforeEach] [sig-storage] Subpath +STEP: Creating a kubernetes client 07/27/23 02:14:34.403 +Jul 27 02:14:34.403: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename endpointslicemirroring 07/27/23 02:14:34.404 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:34.455 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:34.464 +[BeforeEach] [sig-network] EndpointSliceMirroring test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] Atomic writer volumes - test/e2e/storage/subpath.go:40 -STEP: Setting up data 06/12/23 21:39:29.496 -[It] should support subpaths with projected pod [Conformance] - test/e2e/storage/subpath.go:106 -STEP: Creating pod pod-subpath-test-projected-64rv 06/12/23 21:39:29.531 -STEP: Creating a pod to test atomic-volume-subpath 06/12/23 21:39:29.531 -Jun 12 21:39:29.555: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-64rv" in namespace "subpath-4899" to be "Succeeded or Failed" -Jun 12 21:39:29.564: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.953805ms -Jun 12 21:39:31.573: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017570499s -Jun 12 21:39:33.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 4.019031105s -Jun 12 21:39:35.576: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 6.020600289s -Jun 12 21:39:37.574: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 8.018138616s -Jun 12 21:39:39.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 10.019302309s -Jun 12 21:39:41.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 12.019898363s -Jun 12 21:39:43.578: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 14.022315005s -Jun 12 21:39:45.601: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 16.044995234s -Jun 12 21:39:47.608: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 18.052762085s -Jun 12 21:39:49.573: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.0179278s -Jun 12 21:39:51.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 22.019324937s -Jun 12 21:39:53.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=false. Elapsed: 24.019650335s -Jun 12 21:39:55.574: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=false. Elapsed: 26.018751728s -Jun 12 21:39:57.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.01956549s -STEP: Saw pod success 06/12/23 21:39:57.575 -Jun 12 21:39:57.576: INFO: Pod "pod-subpath-test-projected-64rv" satisfied condition "Succeeded or Failed" -Jun 12 21:39:57.585: INFO: Trying to get logs from node 10.138.75.70 pod pod-subpath-test-projected-64rv container test-container-subpath-projected-64rv: -STEP: delete the pod 06/12/23 21:39:57.608 -Jun 12 21:39:57.627: INFO: Waiting for pod pod-subpath-test-projected-64rv to disappear -Jun 12 21:39:57.635: INFO: Pod pod-subpath-test-projected-64rv no longer exists -STEP: Deleting pod pod-subpath-test-projected-64rv 06/12/23 21:39:57.635 -Jun 12 21:39:57.635: INFO: Deleting pod "pod-subpath-test-projected-64rv" in namespace "subpath-4899" -[AfterEach] [sig-storage] Subpath +[BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/network/endpointslicemirroring.go:41 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 +STEP: mirroring a new custom Endpoint 07/27/23 02:14:34.519 +Jul 27 02:14:34.560: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom Endpoint 07/27/23 02:14:36.605 +STEP: mirroring deletion of a custom Endpoint 07/27/23 02:14:36.653 +Jul 27 02:14:36.682: INFO: Waiting for 0 EndpointSlices to exist, got 1 +[AfterEach] [sig-network] EndpointSliceMirroring test/e2e/framework/node/init/init.go:32 -Jun 12 21:39:57.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Subpath +Jul 27 02:14:38.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Subpath +[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Subpath +[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring tear down framework | framework.go:193 -STEP: Destroying namespace "subpath-4899" for this suite. 06/12/23 21:39:57.658 +STEP: Destroying namespace "endpointslicemirroring-7" for this suite. 
07/27/23 02:14:38.715 ------------------------------ -• [SLOW TEST] [28.354 seconds] -[sig-storage] Subpath -test/e2e/storage/utils/framework.go:23 - Atomic writer volumes - test/e2e/storage/subpath.go:36 - should support subpaths with projected pod [Conformance] - test/e2e/storage/subpath.go:106 +• [4.335 seconds] +[sig-network] EndpointSliceMirroring +test/e2e/network/common/framework.go:23 + should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Subpath + [BeforeEach] [sig-network] EndpointSliceMirroring set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:39:29.325 - Jun 12 21:39:29.326: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename subpath 06/12/23 21:39:29.332 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:39:29.455 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:39:29.478 - [BeforeEach] [sig-storage] Subpath + STEP: Creating a kubernetes client 07/27/23 02:14:34.403 + Jul 27 02:14:34.403: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename endpointslicemirroring 07/27/23 02:14:34.404 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:34.455 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:34.464 + [BeforeEach] [sig-network] EndpointSliceMirroring test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] Atomic writer volumes - test/e2e/storage/subpath.go:40 - STEP: Setting up data 06/12/23 21:39:29.496 - [It] should support subpaths with projected pod [Conformance] - test/e2e/storage/subpath.go:106 - STEP: Creating pod pod-subpath-test-projected-64rv 06/12/23 21:39:29.531 - STEP: Creating a pod to test atomic-volume-subpath 06/12/23 21:39:29.531 - Jun 12 21:39:29.555: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-64rv" in namespace "subpath-4899" to be "Succeeded or Failed" - Jun 12 21:39:29.564: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.953805ms - Jun 12 21:39:31.573: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017570499s - Jun 12 21:39:33.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 4.019031105s - Jun 12 21:39:35.576: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 6.020600289s - Jun 12 21:39:37.574: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 8.018138616s - Jun 12 21:39:39.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 10.019302309s - Jun 12 21:39:41.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 12.019898363s - Jun 12 21:39:43.578: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 14.022315005s - Jun 12 21:39:45.601: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 16.044995234s - Jun 12 21:39:47.608: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.052762085s - Jun 12 21:39:49.573: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 20.0179278s - Jun 12 21:39:51.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=true. Elapsed: 22.019324937s - Jun 12 21:39:53.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=false. Elapsed: 24.019650335s - Jun 12 21:39:55.574: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Running", Reason="", readiness=false. Elapsed: 26.018751728s - Jun 12 21:39:57.575: INFO: Pod "pod-subpath-test-projected-64rv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.01956549s - STEP: Saw pod success 06/12/23 21:39:57.575 - Jun 12 21:39:57.576: INFO: Pod "pod-subpath-test-projected-64rv" satisfied condition "Succeeded or Failed" - Jun 12 21:39:57.585: INFO: Trying to get logs from node 10.138.75.70 pod pod-subpath-test-projected-64rv container test-container-subpath-projected-64rv: - STEP: delete the pod 06/12/23 21:39:57.608 - Jun 12 21:39:57.627: INFO: Waiting for pod pod-subpath-test-projected-64rv to disappear - Jun 12 21:39:57.635: INFO: Pod pod-subpath-test-projected-64rv no longer exists - STEP: Deleting pod pod-subpath-test-projected-64rv 06/12/23 21:39:57.635 - Jun 12 21:39:57.635: INFO: Deleting pod "pod-subpath-test-projected-64rv" in namespace "subpath-4899" - [AfterEach] [sig-storage] Subpath + [BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/network/endpointslicemirroring.go:41 + [It] should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 + STEP: mirroring a new custom Endpoint 07/27/23 02:14:34.519 + Jul 27 02:14:34.560: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 + STEP: mirroring an update to a custom Endpoint 07/27/23 02:14:36.605 + STEP: mirroring deletion of a custom Endpoint 07/27/23 02:14:36.653 + Jul 27 02:14:36.682: INFO: Waiting for 0 EndpointSlices to exist, got 1 + [AfterEach] [sig-network] EndpointSliceMirroring test/e2e/framework/node/init/init.go:32 - Jun 12 21:39:57.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Subpath + Jul 27 02:14:38.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Subpath + [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Subpath + [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring tear down framework | framework.go:193 - STEP: Destroying namespace "subpath-4899" for this suite. 06/12/23 21:39:57.658 + STEP: Destroying namespace "endpointslicemirroring-7" for this suite. 
07/27/23 02:14:38.715 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Namespaces [Serial] - should apply an update to a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:366 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 +[BeforeEach] [sig-network] IngressClass API set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:39:57.683 -Jun 12 21:39:57.683: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename namespaces 06/12/23 21:39:57.686 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:39:57.733 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:39:57.744 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +STEP: Creating a kubernetes client 07/27/23 02:14:38.751 +Jul 27 02:14:38.752: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename ingressclass 07/27/23 02:14:38.753 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:38.795 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:38.808 +[BeforeEach] [sig-network] IngressClass API test/e2e/framework/metrics/init/init.go:31 -[It] should apply an update to a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:366 -STEP: Updating Namespace "namespaces-8866" 06/12/23 21:39:57.755 -Jun 12 21:39:57.783: INFO: Namespace "namespaces-8866" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"252e5f2f-6715-440e-b971-87933460a116", "kubernetes.io/metadata.name":"namespaces-8866", "namespaces-8866":"updated", "pod-security.kubernetes.io/audit":"privileged", "pod-security.kubernetes.io/audit-version":"v1.24", "pod-security.kubernetes.io/enforce":"baseline", "pod-security.kubernetes.io/warn":"privileged", "pod-security.kubernetes.io/warn-version":"v1.24"} -[AfterEach] [sig-api-machinery] Namespaces [Serial] +[BeforeEach] [sig-network] IngressClass API + test/e2e/network/ingressclass.go:211 +[It] should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 +STEP: getting /apis 07/27/23 02:14:38.823 +STEP: getting /apis/networking.k8s.io 07/27/23 02:14:38.875 +STEP: getting /apis/networking.k8s.iov1 07/27/23 02:14:38.921 +STEP: creating 07/27/23 02:14:38.973 +STEP: getting 07/27/23 02:14:39.099 +STEP: listing 07/27/23 02:14:39.141 +STEP: watching 07/27/23 02:14:39.18 +Jul 27 02:14:39.181: INFO: starting watch +STEP: patching 07/27/23 02:14:39.225 +STEP: updating 07/27/23 02:14:39.263 +Jul 27 02:14:39.289: INFO: waiting for watch events with expected annotations +Jul 27 02:14:39.290: INFO: saw patched and updated annotations +STEP: deleting 07/27/23 02:14:39.291 +STEP: deleting a collection 07/27/23 02:14:39.434 +[AfterEach] [sig-network] IngressClass API test/e2e/framework/node/init/init.go:32 -Jun 12 21:39:57.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +Jul 27 02:14:39.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] IngressClass API test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup 
(Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-network] IngressClass API dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-network] IngressClass API tear down framework | framework.go:193 -STEP: Destroying namespace "namespaces-8866" for this suite. 06/12/23 21:39:57.795 +STEP: Destroying namespace "ingressclass-9673" for this suite. 07/27/23 02:14:39.563 ------------------------------ -• [0.140 seconds] -[sig-api-machinery] Namespaces [Serial] -test/e2e/apimachinery/framework.go:23 - should apply an update to a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:366 +• [0.837 seconds] +[sig-network] IngressClass API +test/e2e/network/common/framework.go:23 + should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + [BeforeEach] [sig-network] IngressClass API set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:39:57.683 - Jun 12 21:39:57.683: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename namespaces 06/12/23 21:39:57.686 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:39:57.733 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:39:57.744 - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + STEP: Creating a kubernetes client 07/27/23 02:14:38.751 + Jul 27 02:14:38.752: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename ingressclass 07/27/23 02:14:38.753 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:38.795 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:38.808 + [BeforeEach] [sig-network] IngressClass API test/e2e/framework/metrics/init/init.go:31 - [It] should apply an update to a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:366 - STEP: Updating Namespace "namespaces-8866" 06/12/23 21:39:57.755 - Jun 12 21:39:57.783: INFO: Namespace "namespaces-8866" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"252e5f2f-6715-440e-b971-87933460a116", "kubernetes.io/metadata.name":"namespaces-8866", "namespaces-8866":"updated", "pod-security.kubernetes.io/audit":"privileged", "pod-security.kubernetes.io/audit-version":"v1.24", "pod-security.kubernetes.io/enforce":"baseline", "pod-security.kubernetes.io/warn":"privileged", "pod-security.kubernetes.io/warn-version":"v1.24"} - [AfterEach] [sig-api-machinery] Namespaces [Serial] + [BeforeEach] [sig-network] IngressClass API + test/e2e/network/ingressclass.go:211 + [It] should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 + STEP: getting /apis 07/27/23 02:14:38.823 + STEP: getting /apis/networking.k8s.io 07/27/23 02:14:38.875 + STEP: getting /apis/networking.k8s.iov1 07/27/23 02:14:38.921 + STEP: creating 07/27/23 02:14:38.973 + STEP: getting 07/27/23 02:14:39.099 + STEP: listing 07/27/23 02:14:39.141 + STEP: watching 07/27/23 02:14:39.18 + Jul 27 02:14:39.181: INFO: starting watch + STEP: patching 07/27/23 02:14:39.225 + STEP: updating 07/27/23 02:14:39.263 + Jul 27 02:14:39.289: INFO: waiting for watch events with expected annotations + Jul 27 02:14:39.290: INFO: saw patched and updated annotations + STEP: deleting 07/27/23 
02:14:39.291 + STEP: deleting a collection 07/27/23 02:14:39.434 + [AfterEach] [sig-network] IngressClass API test/e2e/framework/node/init/init.go:32 - Jun 12 21:39:57.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + Jul 27 02:14:39.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] IngressClass API test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-network] IngressClass API dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-network] IngressClass API tear down framework | framework.go:193 - STEP: Destroying namespace "namespaces-8866" for this suite. 06/12/23 21:39:57.795 + STEP: Destroying namespace "ingressclass-9673" for this suite. 07/27/23 02:14:39.563 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints - verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] - test/e2e/scheduling/preemption.go:814 +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:624 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:39:57.826 -Jun 12 21:39:57.826: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sched-preemption 06/12/23 21:39:57.827 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:39:57.881 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:39:57.895 +STEP: Creating a kubernetes client 07/27/23 02:14:39.595 +Jul 27 02:14:39.595: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sched-preemption 07/27/23 02:14:39.596 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:39.648 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:39.682 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97 -Jun 12 21:39:57.950: INFO: Waiting up to 1m0s for all nodes to be ready -Jun 12 21:40:58.132: INFO: Waiting for terminating namespaces to be deleted... -[BeforeEach] PriorityClass endpoints +Jul 27 02:14:39.813: INFO: Waiting up to 1m0s for all nodes to be ready +Jul 27 02:15:40.054: INFO: Waiting for terminating namespaces to be deleted... 
+[BeforeEach] PreemptionExecutionPath set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:40:58.156 -Jun 12 21:40:58.156: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sched-preemption-path 06/12/23 21:40:58.158 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:40:58.22 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:40:58.235 -[BeforeEach] PriorityClass endpoints +STEP: Creating a kubernetes client 07/27/23 02:15:40.085 +Jul 27 02:15:40.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sched-preemption-path 07/27/23 02:15:40.087 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:15:40.128 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:15:40.139 +[BeforeEach] PreemptionExecutionPath test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] PriorityClass endpoints - test/e2e/scheduling/preemption.go:771 -[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] - test/e2e/scheduling/preemption.go:814 -Jun 12 21:40:58.348: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. -Jun 12 21:40:58.360: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. -[AfterEach] PriorityClass endpoints +[BeforeEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:576 +STEP: Finding an available node 07/27/23 02:15:40.147 +STEP: Trying to launch a pod without a label to get a node which can launch it. 07/27/23 02:15:40.147 +Jul 27 02:15:41.173: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-8870" to be "running" +Jul 27 02:15:41.181: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 8.501263ms +Jul 27 02:15:43.192: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.019436179s +Jul 27 02:15:43.192: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 
07/27/23 02:15:43.202 +Jul 27 02:15:43.226: INFO: found a healthy node: 10.245.128.19 +[It] runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:624 +Jul 27 02:15:49.481: INFO: pods created so far: [1 1 1] +Jul 27 02:15:49.481: INFO: length of pods created so far: 3 +Jul 27 02:15:53.532: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath test/e2e/framework/node/init/init.go:32 -Jun 12 21:40:58.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] PriorityClass endpoints - test/e2e/scheduling/preemption.go:787 +Jul 27 02:16:00.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:549 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 21:40:58.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:16:00.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84 -[DeferCleanup (Each)] PriorityClass endpoints +[DeferCleanup (Each)] PreemptionExecutionPath test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] PriorityClass endpoints +[DeferCleanup (Each)] PreemptionExecutionPath dump namespaces | framework.go:196 -[DeferCleanup (Each)] PriorityClass endpoints +[DeferCleanup (Each)] PreemptionExecutionPath tear down framework | framework.go:193 -STEP: Destroying namespace "sched-preemption-path-275" for this suite. 06/12/23 21:40:58.63 +STEP: Destroying namespace "sched-preemption-path-8870" for this suite. 07/27/23 02:16:00.818 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "sched-preemption-6549" for this suite. 06/12/23 21:40:58.651 +STEP: Destroying namespace "sched-preemption-4646" for this suite. 
07/27/23 02:16:00.843 ------------------------------ -• [SLOW TEST] [60.859 seconds] +• [SLOW TEST] [81.271 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 - PriorityClass endpoints - test/e2e/scheduling/preemption.go:764 - verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] - test/e2e/scheduling/preemption.go:814 - + PreemptionExecutionPath + test/e2e/scheduling/preemption.go:537 + runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:624 + Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:39:57.826 - Jun 12 21:39:57.826: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sched-preemption 06/12/23 21:39:57.827 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:39:57.881 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:39:57.895 + STEP: Creating a kubernetes client 07/27/23 02:14:39.595 + Jul 27 02:14:39.595: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sched-preemption 07/27/23 02:14:39.596 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:14:39.648 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:14:39.682 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97 - Jun 12 21:39:57.950: INFO: Waiting up to 1m0s for all nodes to be ready - Jun 12 21:40:58.132: INFO: Waiting for terminating namespaces to be deleted... - [BeforeEach] PriorityClass endpoints + Jul 27 02:14:39.813: INFO: Waiting up to 1m0s for all nodes to be ready + Jul 27 02:15:40.054: INFO: Waiting for terminating namespaces to be deleted... + [BeforeEach] PreemptionExecutionPath set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:40:58.156 - Jun 12 21:40:58.156: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sched-preemption-path 06/12/23 21:40:58.158 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:40:58.22 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:40:58.235 - [BeforeEach] PriorityClass endpoints + STEP: Creating a kubernetes client 07/27/23 02:15:40.085 + Jul 27 02:15:40.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sched-preemption-path 07/27/23 02:15:40.087 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:15:40.128 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:15:40.139 + [BeforeEach] PreemptionExecutionPath test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] PriorityClass endpoints - test/e2e/scheduling/preemption.go:771 - [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] - test/e2e/scheduling/preemption.go:814 - Jun 12 21:40:58.348: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. 
- Jun 12 21:40:58.360: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. - [AfterEach] PriorityClass endpoints + [BeforeEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:576 + STEP: Finding an available node 07/27/23 02:15:40.147 + STEP: Trying to launch a pod without a label to get a node which can launch it. 07/27/23 02:15:40.147 + Jul 27 02:15:41.173: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-8870" to be "running" + Jul 27 02:15:41.181: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 8.501263ms + Jul 27 02:15:43.192: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.019436179s + Jul 27 02:15:43.192: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 07/27/23 02:15:43.202 + Jul 27 02:15:43.226: INFO: found a healthy node: 10.245.128.19 + [It] runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:624 + Jul 27 02:15:49.481: INFO: pods created so far: [1 1 1] + Jul 27 02:15:49.481: INFO: length of pods created so far: 3 + Jul 27 02:15:53.532: INFO: pods created so far: [2 2 1] + [AfterEach] PreemptionExecutionPath test/e2e/framework/node/init/init.go:32 - Jun 12 21:40:58.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] PriorityClass endpoints - test/e2e/scheduling/preemption.go:787 + Jul 27 02:16:00.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:549 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 21:40:58.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:16:00.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84 - [DeferCleanup (Each)] PriorityClass endpoints + [DeferCleanup (Each)] PreemptionExecutionPath test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] PriorityClass endpoints + [DeferCleanup (Each)] PreemptionExecutionPath dump namespaces | framework.go:196 - [DeferCleanup (Each)] PriorityClass endpoints + [DeferCleanup (Each)] PreemptionExecutionPath tear down framework | framework.go:193 - STEP: Destroying namespace "sched-preemption-path-275" for this suite. 06/12/23 21:40:58.63 + STEP: Destroying namespace "sched-preemption-path-8870" for this suite. 07/27/23 02:16:00.818 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "sched-preemption-6549" for this suite. 06/12/23 21:40:58.651 + STEP: Destroying namespace "sched-preemption-4646" for this suite. 
07/27/23 02:16:00.843 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +SSSSS ------------------------------ -[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition - getting/updating/patching custom resource definition status sub-resource works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:145 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 +[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:40:58.692 -Jun 12 21:40:58.692: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename custom-resource-definition 06/12/23 21:40:58.693 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:40:58.751 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:40:58.766 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:16:00.867 +Jul 27 02:16:00.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename var-expansion 07/27/23 02:16:00.868 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:16:00.916 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:16:00.924 +[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 -[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:145 -Jun 12 21:40:58.780: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 +STEP: creating the pod with failed condition 07/27/23 02:16:00.933 +W0727 02:16:00.991624 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "dapi-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "dapi-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "dapi-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "dapi-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:16:00.991: INFO: Waiting up to 2m0s for pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" in namespace "var-expansion-5798" to be "running" +Jul 27 02:16:00.999: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 7.977084ms +Jul 27 02:16:03.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019098797s +Jul 27 02:16:05.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.017790217s +Jul 27 02:16:07.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017354614s +Jul 27 02:16:09.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017693343s +Jul 27 02:16:11.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018263996s +Jul 27 02:16:13.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 12.019721436s +Jul 27 02:16:15.012: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 14.020871066s +Jul 27 02:16:17.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 16.017871824s +Jul 27 02:16:19.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 18.017979846s +Jul 27 02:16:21.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 20.017224096s +Jul 27 02:16:23.017: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 22.025885611s +Jul 27 02:16:25.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 24.016930577s +Jul 27 02:16:27.036: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 26.044636993s +Jul 27 02:16:29.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 28.01757553s +Jul 27 02:16:31.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 30.019605455s +Jul 27 02:16:33.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 32.01870591s +Jul 27 02:16:35.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 34.017090172s +Jul 27 02:16:37.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 36.017633472s +Jul 27 02:16:39.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 38.01672059s +Jul 27 02:16:41.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 40.017814648s +Jul 27 02:16:43.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 42.01799772s +Jul 27 02:16:45.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 44.019726386s +Jul 27 02:16:47.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 46.017301564s +Jul 27 02:16:49.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 48.01878931s +Jul 27 02:16:51.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 50.017624132s +Jul 27 02:16:53.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 52.018798156s +Jul 27 02:16:55.015: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 54.024234873s +Jul 27 02:16:57.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 56.016929179s +Jul 27 02:16:59.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 58.0181483s +Jul 27 02:17:01.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.017043883s +Jul 27 02:17:03.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.018110664s +Jul 27 02:17:05.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.018372296s +Jul 27 02:17:07.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.018158977s +Jul 27 02:17:09.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.017979084s +Jul 27 02:17:11.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.017767981s +Jul 27 02:17:13.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.019021978s +Jul 27 02:17:15.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.018908803s +Jul 27 02:17:17.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.019928539s +Jul 27 02:17:19.027: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.035981261s +Jul 27 02:17:21.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.017786296s +Jul 27 02:17:23.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.018011553s +Jul 27 02:17:25.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.018452756s +Jul 27 02:17:27.007: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.016170423s +Jul 27 02:17:29.031: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.03967215s +Jul 27 02:17:31.029: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.037827585s +Jul 27 02:17:33.022: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.03037316s +Jul 27 02:17:35.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m34.01724546s +Jul 27 02:17:37.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.019358585s +Jul 27 02:17:39.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.017918128s +Jul 27 02:17:41.015: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.024106766s +Jul 27 02:17:43.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.019359207s +Jul 27 02:17:45.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.016447359s +Jul 27 02:17:47.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.018083025s +Jul 27 02:17:49.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.018349047s +Jul 27 02:17:51.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.017712366s +Jul 27 02:17:53.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.016666013s +Jul 27 02:17:55.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.01760139s +Jul 27 02:17:57.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.017878976s +Jul 27 02:17:59.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.018333683s +Jul 27 02:18:01.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.017795881s +Jul 27 02:18:01.020: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.02906626s +STEP: updating the pod 07/27/23 02:18:01.02 +Jul 27 02:18:01.553: INFO: Successfully updated pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" +STEP: waiting for pod running 07/27/23 02:18:01.553 +Jul 27 02:18:01.553: INFO: Waiting up to 2m0s for pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" in namespace "var-expansion-5798" to be "running" +Jul 27 02:18:01.561: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 7.996105ms +Jul 27 02:18:03.573: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.019829163s +Jul 27 02:18:03.573: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" satisfied condition "running" +STEP: deleting the pod gracefully 07/27/23 02:18:03.573 +Jul 27 02:18:03.573: INFO: Deleting pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" in namespace "var-expansion-5798" +Jul 27 02:18:03.590: INFO: Wait up to 5m0s for pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" to be fully deleted +[AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 -Jun 12 21:40:59.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +Jul 27 02:18:35.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 -STEP: Destroying namespace "custom-resource-definition-9680" for this suite. 06/12/23 21:40:59.422 +STEP: Destroying namespace "var-expansion-5798" for this suite. 07/27/23 02:18:35.638 ------------------------------ -• [0.751 seconds] -[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - Simple CustomResourceDefinition - test/e2e/apimachinery/custom_resource_definition.go:50 - getting/updating/patching custom resource definition status sub-resource works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:145 +• [SLOW TEST] [154.794 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:40:58.692 - Jun 12 21:40:58.692: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename custom-resource-definition 06/12/23 21:40:58.693 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:40:58.751 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:40:58.766 - [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:16:00.867 + Jul 27 02:16:00.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename var-expansion 07/27/23 02:16:00.868 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:16:00.916 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:16:00.924 + [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 - [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:145 - Jun 12 21:40:58.780: 
INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 + STEP: creating the pod with failed condition 07/27/23 02:16:00.933 + W0727 02:16:00.991624 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "dapi-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "dapi-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "dapi-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "dapi-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:16:00.991: INFO: Waiting up to 2m0s for pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" in namespace "var-expansion-5798" to be "running" + Jul 27 02:16:00.999: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 7.977084ms + Jul 27 02:16:03.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019098797s + Jul 27 02:16:05.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017790217s + Jul 27 02:16:07.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017354614s + Jul 27 02:16:09.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017693343s + Jul 27 02:16:11.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018263996s + Jul 27 02:16:13.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 12.019721436s + Jul 27 02:16:15.012: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 14.020871066s + Jul 27 02:16:17.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 16.017871824s + Jul 27 02:16:19.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 18.017979846s + Jul 27 02:16:21.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 20.017224096s + Jul 27 02:16:23.017: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 22.025885611s + Jul 27 02:16:25.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 24.016930577s + Jul 27 02:16:27.036: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 26.044636993s + Jul 27 02:16:29.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 28.01757553s + Jul 27 02:16:31.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.019605455s + Jul 27 02:16:33.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 32.01870591s + Jul 27 02:16:35.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 34.017090172s + Jul 27 02:16:37.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 36.017633472s + Jul 27 02:16:39.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 38.01672059s + Jul 27 02:16:41.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 40.017814648s + Jul 27 02:16:43.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 42.01799772s + Jul 27 02:16:45.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 44.019726386s + Jul 27 02:16:47.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 46.017301564s + Jul 27 02:16:49.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 48.01878931s + Jul 27 02:16:51.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 50.017624132s + Jul 27 02:16:53.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 52.018798156s + Jul 27 02:16:55.015: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 54.024234873s + Jul 27 02:16:57.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 56.016929179s + Jul 27 02:16:59.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 58.0181483s + Jul 27 02:17:01.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.017043883s + Jul 27 02:17:03.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.018110664s + Jul 27 02:17:05.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.018372296s + Jul 27 02:17:07.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.018158977s + Jul 27 02:17:09.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.017979084s + Jul 27 02:17:11.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.017767981s + Jul 27 02:17:13.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.019021978s + Jul 27 02:17:15.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m14.018908803s + Jul 27 02:17:17.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.019928539s + Jul 27 02:17:19.027: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.035981261s + Jul 27 02:17:21.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.017786296s + Jul 27 02:17:23.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.018011553s + Jul 27 02:17:25.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.018452756s + Jul 27 02:17:27.007: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.016170423s + Jul 27 02:17:29.031: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.03967215s + Jul 27 02:17:31.029: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.037827585s + Jul 27 02:17:33.022: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.03037316s + Jul 27 02:17:35.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.01724546s + Jul 27 02:17:37.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.019358585s + Jul 27 02:17:39.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.017918128s + Jul 27 02:17:41.015: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.024106766s + Jul 27 02:17:43.011: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.019359207s + Jul 27 02:17:45.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.016447359s + Jul 27 02:17:47.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.018083025s + Jul 27 02:17:49.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.018349047s + Jul 27 02:17:51.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.017712366s + Jul 27 02:17:53.008: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.016666013s + Jul 27 02:17:55.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.01760139s + Jul 27 02:17:57.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.017878976s + Jul 27 02:17:59.010: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m58.018333683s + Jul 27 02:18:01.009: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.017795881s + Jul 27 02:18:01.020: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.02906626s + STEP: updating the pod 07/27/23 02:18:01.02 + Jul 27 02:18:01.553: INFO: Successfully updated pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" + STEP: waiting for pod running 07/27/23 02:18:01.553 + Jul 27 02:18:01.553: INFO: Waiting up to 2m0s for pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" in namespace "var-expansion-5798" to be "running" + Jul 27 02:18:01.561: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Pending", Reason="", readiness=false. Elapsed: 7.996105ms + Jul 27 02:18:03.573: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98": Phase="Running", Reason="", readiness=true. Elapsed: 2.019829163s + Jul 27 02:18:03.573: INFO: Pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" satisfied condition "running" + STEP: deleting the pod gracefully 07/27/23 02:18:03.573 + Jul 27 02:18:03.573: INFO: Deleting pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" in namespace "var-expansion-5798" + Jul 27 02:18:03.590: INFO: Wait up to 5m0s for pod "var-expansion-ed6acc77-0bfc-4301-9773-7d358f731b98" to be fully deleted + [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 - Jun 12 21:40:59.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + Jul 27 02:18:35.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 - STEP: Destroying namespace "custom-resource-definition-9680" for this suite. 06/12/23 21:40:59.422 + STEP: Destroying namespace "var-expansion-5798" for this suite. 
07/27/23 02:18:35.638 << End Captured GinkgoWriter Output ------------------------------ -[sig-network] Services - should test the lifecycle of an Endpoint [Conformance] - test/e2e/network/service.go:3244 -[BeforeEach] [sig-network] Services +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 +[BeforeEach] version v1 set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:40:59.445 -Jun 12 21:40:59.445: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:40:59.448 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:40:59.538 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:40:59.55 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 02:18:35.662 +Jul 27 02:18:35.662: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename proxy 07/27/23 02:18:35.663 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:18:35.704 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:18:35.712 +[BeforeEach] version v1 test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should test the lifecycle of an Endpoint [Conformance] - test/e2e/network/service.go:3244 -STEP: creating an Endpoint 06/12/23 21:40:59.61 -STEP: waiting for available Endpoint 06/12/23 21:40:59.647 -STEP: listing all Endpoints 06/12/23 21:40:59.659 -STEP: updating the Endpoint 06/12/23 21:40:59.767 -STEP: fetching the Endpoint 06/12/23 21:40:59.801 -STEP: patching the Endpoint 06/12/23 21:40:59.813 -STEP: fetching the Endpoint 06/12/23 21:40:59.842 -STEP: deleting the Endpoint by Collection 06/12/23 21:40:59.854 -STEP: waiting for Endpoint deletion 06/12/23 21:40:59.883 -STEP: fetching the Endpoint 06/12/23 21:40:59.89 -[AfterEach] [sig-network] Services +[It] A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 +Jul 27 02:18:35.721: INFO: Creating pod... +Jul 27 02:18:35.752: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-9191" to be "running" +Jul 27 02:18:35.763: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 10.828312ms +Jul 27 02:18:37.780: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.028132496s +Jul 27 02:18:37.780: INFO: Pod "agnhost" satisfied condition "running" +Jul 27 02:18:37.780: INFO: Creating service... 
+Jul 27 02:18:37.853: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=DELETE +Jul 27 02:18:37.872: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Jul 27 02:18:37.872: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=OPTIONS +Jul 27 02:18:37.885: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Jul 27 02:18:37.885: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=PATCH +Jul 27 02:18:37.900: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Jul 27 02:18:37.900: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=POST +Jul 27 02:18:37.914: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Jul 27 02:18:37.914: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=PUT +Jul 27 02:18:37.927: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Jul 27 02:18:37.927: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=DELETE +Jul 27 02:18:37.945: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Jul 27 02:18:37.945: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=OPTIONS +Jul 27 02:18:37.995: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Jul 27 02:18:37.995: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=PATCH +Jul 27 02:18:38.019: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Jul 27 02:18:38.019: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=POST +Jul 27 02:18:38.046: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Jul 27 02:18:38.046: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=PUT +Jul 27 02:18:38.065: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Jul 27 02:18:38.065: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=GET +Jul 27 02:18:38.072: INFO: http.Client request:GET StatusCode:301 +Jul 27 02:18:38.072: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=GET +Jul 27 02:18:38.085: INFO: http.Client request:GET StatusCode:301 +Jul 27 02:18:38.085: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=HEAD +Jul 27 02:18:38.092: INFO: http.Client request:HEAD StatusCode:301 +Jul 27 02:18:38.092: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=HEAD +Jul 27 02:18:38.104: INFO: http.Client request:HEAD StatusCode:301 +[AfterEach] version v1 test/e2e/framework/node/init/init.go:32 -Jun 12 21:40:59.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 02:18:38.104: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] version v1 test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] version v1 dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] version v1 tear down framework | framework.go:193 -STEP: Destroying namespace "services-2054" for this suite. 06/12/23 21:40:59.914 +STEP: Destroying namespace "proxy-9191" for this suite. 07/27/23 02:18:38.117 ------------------------------ -• [0.490 seconds] -[sig-network] Services +• [2.507 seconds] +[sig-network] Proxy test/e2e/network/common/framework.go:23 - should test the lifecycle of an Endpoint [Conformance] - test/e2e/network/service.go:3244 + version v1 + test/e2e/network/proxy.go:74 + A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] version v1 set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:40:59.445 - Jun 12 21:40:59.445: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:40:59.448 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:40:59.538 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:40:59.55 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 02:18:35.662 + Jul 27 02:18:35.662: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename proxy 07/27/23 02:18:35.663 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:18:35.704 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:18:35.712 + [BeforeEach] version v1 test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should test the lifecycle of an Endpoint [Conformance] - test/e2e/network/service.go:3244 - STEP: creating an Endpoint 06/12/23 21:40:59.61 - STEP: waiting for available Endpoint 06/12/23 21:40:59.647 - STEP: listing all Endpoints 06/12/23 21:40:59.659 - STEP: updating the Endpoint 06/12/23 21:40:59.767 - STEP: fetching the Endpoint 06/12/23 21:40:59.801 - STEP: patching the Endpoint 06/12/23 21:40:59.813 - STEP: fetching the Endpoint 06/12/23 21:40:59.842 - STEP: deleting the Endpoint by Collection 06/12/23 21:40:59.854 - STEP: waiting for Endpoint deletion 06/12/23 21:40:59.883 - STEP: fetching the Endpoint 06/12/23 21:40:59.89 - [AfterEach] [sig-network] Services + [It] A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 + Jul 27 02:18:35.721: INFO: Creating pod... + Jul 27 02:18:35.752: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-9191" to be "running" + Jul 27 02:18:35.763: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 10.828312ms + Jul 27 02:18:37.780: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.028132496s + Jul 27 02:18:37.780: INFO: Pod "agnhost" satisfied condition "running" + Jul 27 02:18:37.780: INFO: Creating service... 
+ Jul 27 02:18:37.853: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=DELETE + Jul 27 02:18:37.872: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Jul 27 02:18:37.872: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=OPTIONS + Jul 27 02:18:37.885: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Jul 27 02:18:37.885: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=PATCH + Jul 27 02:18:37.900: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Jul 27 02:18:37.900: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=POST + Jul 27 02:18:37.914: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Jul 27 02:18:37.914: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=PUT + Jul 27 02:18:37.927: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + Jul 27 02:18:37.927: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=DELETE + Jul 27 02:18:37.945: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Jul 27 02:18:37.945: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=OPTIONS + Jul 27 02:18:37.995: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Jul 27 02:18:37.995: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=PATCH + Jul 27 02:18:38.019: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Jul 27 02:18:38.019: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=POST + Jul 27 02:18:38.046: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Jul 27 02:18:38.046: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=PUT + Jul 27 02:18:38.065: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + Jul 27 02:18:38.065: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=GET + Jul 27 02:18:38.072: INFO: http.Client request:GET StatusCode:301 + Jul 27 02:18:38.072: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=GET + Jul 27 02:18:38.085: INFO: http.Client request:GET StatusCode:301 + Jul 27 02:18:38.085: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/pods/agnhost/proxy?method=HEAD + Jul 27 02:18:38.092: INFO: http.Client request:HEAD StatusCode:301 + Jul 27 02:18:38.092: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-9191/services/e2e-proxy-test-service/proxy?method=HEAD + Jul 27 02:18:38.104: INFO: http.Client request:HEAD StatusCode:301 + [AfterEach] version v1 test/e2e/framework/node/init/init.go:32 - Jun 12 21:40:59.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 
27 02:18:38.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] version v1 test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] version v1 dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] version v1 tear down framework | framework.go:193 - STEP: Destroying namespace "services-2054" for this suite. 06/12/23 21:40:59.914 + STEP: Destroying namespace "proxy-9191" for this suite. 07/27/23 02:18:38.117 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Garbage collector - should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] - test/e2e/apimachinery/garbage_collector.go:650 -[BeforeEach] [sig-api-machinery] Garbage collector +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/apimachinery/resource_quota.go:392 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:40:59.936 -Jun 12 21:40:59.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename gc 06/12/23 21:40:59.939 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:40:59.993 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:00.006 -[BeforeEach] [sig-api-machinery] Garbage collector +STEP: Creating a kubernetes client 07/27/23 02:18:38.171 +Jul 27 02:18:38.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 02:18:38.172 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:18:38.213 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:18:38.222 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] - test/e2e/apimachinery/garbage_collector.go:650 -STEP: create the rc 06/12/23 21:41:00.032 -STEP: delete the rc 06/12/23 21:41:05.066 -STEP: wait for the rc to be deleted 06/12/23 21:41:05.099 -STEP: Gathering metrics 06/12/23 21:41:06.131 -W0612 21:41:06.159758 23 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
-Jun 12 21:41:06.159: INFO: For apiserver_request_total: -For apiserver_request_latency_seconds: -For apiserver_init_events_total: -For garbage_collector_attempt_to_delete_queue_latency: -For garbage_collector_attempt_to_delete_work_duration: -For garbage_collector_attempt_to_orphan_queue_latency: -For garbage_collector_attempt_to_orphan_work_duration: -For garbage_collector_dirty_processing_latency_microseconds: -For garbage_collector_event_processing_latency_microseconds: -For garbage_collector_graph_changes_queue_latency: -For garbage_collector_graph_changes_work_duration: -For garbage_collector_orphan_processing_latency_microseconds: -For namespace_queue_latency: -For namespace_queue_latency_sum: -For namespace_queue_latency_count: -For namespace_retries: -For namespace_work_duration: -For namespace_work_duration_sum: -For namespace_work_duration_count: -For function_duration_seconds: -For errors_total: -For evicted_pods_total: - -[AfterEach] [sig-api-machinery] Garbage collector +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/apimachinery/resource_quota.go:392 +STEP: Counting existing ResourceQuota 07/27/23 02:18:38.258 +STEP: Creating a ResourceQuota 07/27/23 02:18:43.307 +STEP: Ensuring resource quota status is calculated 07/27/23 02:18:43.324 +STEP: Creating a ReplicationController 07/27/23 02:18:45.351 +STEP: Ensuring resource quota status captures replication controller creation 07/27/23 02:18:45.48 +STEP: Deleting a ReplicationController 07/27/23 02:18:47.489 +STEP: Ensuring resource quota status released usage 07/27/23 02:18:47.511 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 21:41:06.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +Jul 27 02:18:49.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "gc-577" for this suite. 06/12/23 21:41:06.178 +STEP: Destroying namespace "resourcequota-7169" for this suite. 07/27/23 02:18:49.534 ------------------------------ -• [SLOW TEST] [6.271 seconds] -[sig-api-machinery] Garbage collector +• [SLOW TEST] [11.386 seconds] +[sig-api-machinery] ResourceQuota test/e2e/apimachinery/framework.go:23 - should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] - test/e2e/apimachinery/garbage_collector.go:650 + should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:392 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Garbage collector + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:40:59.936 - Jun 12 21:40:59.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename gc 06/12/23 21:40:59.939 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:40:59.993 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:00.006 - [BeforeEach] [sig-api-machinery] Garbage collector + STEP: Creating a kubernetes client 07/27/23 02:18:38.171 + Jul 27 02:18:38.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 02:18:38.172 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:18:38.213 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:18:38.222 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] - test/e2e/apimachinery/garbage_collector.go:650 - STEP: create the rc 06/12/23 21:41:00.032 - STEP: delete the rc 06/12/23 21:41:05.066 - STEP: wait for the rc to be deleted 06/12/23 21:41:05.099 - STEP: Gathering metrics 06/12/23 21:41:06.131 - W0612 21:41:06.159758 23 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. - Jun 12 21:41:06.159: INFO: For apiserver_request_total: - For apiserver_request_latency_seconds: - For apiserver_init_events_total: - For garbage_collector_attempt_to_delete_queue_latency: - For garbage_collector_attempt_to_delete_work_duration: - For garbage_collector_attempt_to_orphan_queue_latency: - For garbage_collector_attempt_to_orphan_work_duration: - For garbage_collector_dirty_processing_latency_microseconds: - For garbage_collector_event_processing_latency_microseconds: - For garbage_collector_graph_changes_queue_latency: - For garbage_collector_graph_changes_work_duration: - For garbage_collector_orphan_processing_latency_microseconds: - For namespace_queue_latency: - For namespace_queue_latency_sum: - For namespace_queue_latency_count: - For namespace_retries: - For namespace_work_duration: - For namespace_work_duration_sum: - For namespace_work_duration_count: - For function_duration_seconds: - For errors_total: - For evicted_pods_total: - - [AfterEach] [sig-api-machinery] Garbage collector + [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:392 + STEP: Counting existing ResourceQuota 07/27/23 02:18:38.258 + STEP: Creating a ResourceQuota 07/27/23 02:18:43.307 + STEP: Ensuring resource quota status is calculated 07/27/23 02:18:43.324 + STEP: Creating a ReplicationController 07/27/23 02:18:45.351 + STEP: Ensuring resource quota status captures replication controller creation 07/27/23 02:18:45.48 + STEP: Deleting a ReplicationController 07/27/23 02:18:47.489 + STEP: Ensuring resource quota status released usage 07/27/23 02:18:47.511 + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 21:41:06.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + Jul 27 02:18:49.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "gc-577" for this suite. 06/12/23 21:41:06.178 + STEP: Destroying namespace "resourcequota-7169" for this suite. 07/27/23 02:18:49.534 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSS ------------------------------ -[sig-node] Pods - should support retrieving logs from the container over websockets [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:618 -[BeforeEach] [sig-node] Pods +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:41:06.223 -Jun 12 21:41:06.224: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 21:41:06.228 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:41:06.285 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:06.305 -[BeforeEach] [sig-node] Pods +STEP: Creating a kubernetes client 07/27/23 02:18:49.557 +Jul 27 02:18:49.557: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 02:18:49.558 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:18:49.602 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:18:49.61 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 -[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:618 -Jun 12 21:41:06.340: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: creating the pod 06/12/23 21:41:06.345 -STEP: submitting the pod to kubernetes 06/12/23 21:41:06.346 -Jun 12 21:41:06.378: INFO: Waiting up to 5m0s for pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0" in namespace "pods-7253" to be "running and ready" -Jun 12 21:41:06.394: INFO: Pod 
"pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.609593ms -Jun 12 21:41:06.394: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:41:08.407: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029088116s -Jun 12 21:41:08.408: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:41:10.406: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028093361s -Jun 12 21:41:10.407: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:41:12.432: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053377386s -Jun 12 21:41:12.432: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:41:14.405: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026393172s -Jun 12 21:41:14.405: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:41:16.410: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031516695s -Jun 12 21:41:16.410: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:41:18.404: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025602259s -Jun 12 21:41:18.404: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:41:20.404: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.026285871s -Jun 12 21:41:20.405: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:41:22.404: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.026322185s -Jun 12 21:41:22.405: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Running (Ready = true) -Jun 12 21:41:22.405: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0" satisfied condition "running and ready" -[AfterEach] [sig-node] Pods +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 02:18:49.681 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:18:49.826 +STEP: Deploying the webhook pod 07/27/23 02:18:49.866 +STEP: Wait for the deployment to be ready 07/27/23 02:18:49.892 +Jul 27 02:18:49.912: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 02:18:51.954 +STEP: Verifying the service has paired with the endpoint 07/27/23 02:18:52.005 +Jul 27 02:18:53.006: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API 07/27/23 02:18:53.018 +STEP: create a namespace for the webhook 07/27/23 02:18:53.075 +STEP: create a configmap should be unconditionally rejected by the webhook 07/27/23 02:18:53.097 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:41:22.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Pods +Jul 27 02:18:53.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Pods +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Pods +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "pods-7253" for this suite. 06/12/23 21:41:22.553 +STEP: Destroying namespace "webhook-6171" for this suite. 07/27/23 02:18:53.332 +STEP: Destroying namespace "webhook-6171-markers" for this suite. 
07/27/23 02:18:53.36 ------------------------------ -• [SLOW TEST] [16.359 seconds] -[sig-node] Pods -test/e2e/common/node/framework.go:23 - should support retrieving logs from the container over websockets [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:618 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Pods +• [3.827 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:41:06.223 - Jun 12 21:41:06.224: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 21:41:06.228 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:41:06.285 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:06.305 - [BeforeEach] [sig-node] Pods + STEP: Creating a kubernetes client 07/27/23 02:18:49.557 + Jul 27 02:18:49.557: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 02:18:49.558 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:18:49.602 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:18:49.61 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 - [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:618 - Jun 12 21:41:06.340: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: creating the pod 06/12/23 21:41:06.345 - STEP: submitting the pod to kubernetes 06/12/23 21:41:06.346 - Jun 12 21:41:06.378: INFO: Waiting up to 5m0s for pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0" in namespace "pods-7253" to be "running and ready" - Jun 12 21:41:06.394: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.609593ms - Jun 12 21:41:06.394: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:41:08.407: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029088116s - Jun 12 21:41:08.408: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:41:10.406: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028093361s - Jun 12 21:41:10.407: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:41:12.432: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.053377386s - Jun 12 21:41:12.432: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:41:14.405: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026393172s - Jun 12 21:41:14.405: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:41:16.410: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031516695s - Jun 12 21:41:16.410: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:41:18.404: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025602259s - Jun 12 21:41:18.404: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:41:20.404: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.026285871s - Jun 12 21:41:20.405: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:41:22.404: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0": Phase="Running", Reason="", readiness=true. Elapsed: 16.026322185s - Jun 12 21:41:22.405: INFO: The phase of Pod pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0 is Running (Ready = true) - Jun 12 21:41:22.405: INFO: Pod "pod-logs-websocket-950b61e4-5fd3-4596-a374-4c90fa017fc0" satisfied condition "running and ready" - [AfterEach] [sig-node] Pods + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 02:18:49.681 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:18:49.826 + STEP: Deploying the webhook pod 07/27/23 02:18:49.866 + STEP: Wait for the deployment to be ready 07/27/23 02:18:49.892 + Jul 27 02:18:49.912: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 02:18:51.954 + STEP: Verifying the service has paired with the endpoint 07/27/23 02:18:52.005 + Jul 27 02:18:53.006: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 + STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API 07/27/23 02:18:53.018 + STEP: create a namespace for the webhook 07/27/23 02:18:53.075 + STEP: create a configmap should be unconditionally rejected by the webhook 07/27/23 02:18:53.097 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:41:22.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Pods + Jul 27 02:18:53.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "pods-7253" for this suite. 06/12/23 21:41:22.553 + STEP: Destroying namespace "webhook-6171" for this suite. 07/27/23 02:18:53.332 + STEP: Destroying namespace "webhook-6171-markers" for this suite. 07/27/23 02:18:53.36 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSS ------------------------------ -[sig-storage] Secrets - should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:68 -[BeforeEach] [sig-storage] Secrets +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:41:22.586 -Jun 12 21:41:22.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 21:41:22.598 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:41:22.68 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:22.738 -[BeforeEach] [sig-storage] Secrets +STEP: Creating a kubernetes client 07/27/23 02:18:53.384 +Jul 27 02:18:53.385: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 02:18:53.385 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:18:53.424 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:18:53.434 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:68 -STEP: Creating secret with name secret-test-a4318d99-f8cc-4150-82e4-bd72925ea9e4 06/12/23 21:41:22.756 -STEP: Creating a pod to test consume secrets 06/12/23 21:41:22.788 -Jun 12 21:41:22.820: INFO: Waiting up to 5m0s for pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7" in namespace "secrets-5883" to be "Succeeded or Failed" -Jun 12 21:41:22.839: INFO: Pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.15951ms -Jun 12 21:41:24.876: INFO: Pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046457937s -Jun 12 21:41:26.850: INFO: Pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020618977s -Jun 12 21:41:28.850: INFO: Pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.020046544s -STEP: Saw pod success 06/12/23 21:41:28.85 -Jun 12 21:41:28.850: INFO: Pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7" satisfied condition "Succeeded or Failed" -Jun 12 21:41:28.858: INFO: Trying to get logs from node 10.138.75.112 pod pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7 container secret-volume-test: -STEP: delete the pod 06/12/23 21:41:28.899 -Jun 12 21:41:28.921: INFO: Waiting for pod pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7 to disappear -Jun 12 21:41:28.930: INFO: Pod pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7 no longer exists -[AfterEach] [sig-storage] Secrets +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-3948 07/27/23 02:18:53.443 +STEP: changing the ExternalName service to type=ClusterIP 07/27/23 02:18:53.466 +STEP: creating replication controller externalname-service in namespace services-3948 07/27/23 02:18:53.588 +I0727 02:18:53.607272 20 runners.go:193] Created replication controller with name: externalname-service, namespace: services-3948, replica count: 2 +I0727 02:18:56.658501 20 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jul 27 02:18:56.658: INFO: Creating new exec pod +Jul 27 02:18:56.681: INFO: Waiting up to 5m0s for pod "execpodv8fqn" in namespace "services-3948" to be "running" +Jul 27 02:18:56.690: INFO: Pod "execpodv8fqn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.986911ms +Jul 27 02:18:58.702: INFO: Pod "execpodv8fqn": Phase="Running", Reason="", readiness=true. Elapsed: 2.020676299s +Jul 27 02:18:58.702: INFO: Pod "execpodv8fqn" satisfied condition "running" +Jul 27 02:18:59.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3948 exec execpodv8fqn -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' +Jul 27 02:18:59.930: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Jul 27 02:18:59.930: INFO: stdout: "" +Jul 27 02:18:59.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3948 exec execpodv8fqn -- /bin/sh -x -c nc -v -z -w 2 172.21.119.178 80' +Jul 27 02:19:00.174: INFO: stderr: "+ nc -v -z -w 2 172.21.119.178 80\nConnection to 172.21.119.178 80 port [tcp/http] succeeded!\n" +Jul 27 02:19:00.174: INFO: stdout: "" +Jul 27 02:19:00.174: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:41:28.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Secrets +Jul 27 02:19:00.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-5883" for this suite. 06/12/23 21:41:28.947 +STEP: Destroying namespace "services-3948" for this suite. 
07/27/23 02:19:00.269 ------------------------------ -• [SLOW TEST] [6.385 seconds] -[sig-storage] Secrets -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:68 +• [SLOW TEST] [6.909 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Secrets + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:41:22.586 - Jun 12 21:41:22.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 21:41:22.598 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:41:22.68 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:22.738 - [BeforeEach] [sig-storage] Secrets + STEP: Creating a kubernetes client 07/27/23 02:18:53.384 + Jul 27 02:18:53.385: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 02:18:53.385 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:18:53.424 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:18:53.434 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:68 - STEP: Creating secret with name secret-test-a4318d99-f8cc-4150-82e4-bd72925ea9e4 06/12/23 21:41:22.756 - STEP: Creating a pod to test consume secrets 06/12/23 21:41:22.788 - Jun 12 21:41:22.820: INFO: Waiting up to 5m0s for pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7" in namespace "secrets-5883" to be "Succeeded or Failed" - Jun 12 21:41:22.839: INFO: Pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.15951ms - Jun 12 21:41:24.876: INFO: Pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046457937s - Jun 12 21:41:26.850: INFO: Pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020618977s - Jun 12 21:41:28.850: INFO: Pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.020046544s - STEP: Saw pod success 06/12/23 21:41:28.85 - Jun 12 21:41:28.850: INFO: Pod "pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7" satisfied condition "Succeeded or Failed" - Jun 12 21:41:28.858: INFO: Trying to get logs from node 10.138.75.112 pod pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7 container secret-volume-test: - STEP: delete the pod 06/12/23 21:41:28.899 - Jun 12 21:41:28.921: INFO: Waiting for pod pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7 to disappear - Jun 12 21:41:28.930: INFO: Pod pod-secrets-faad6bc5-7977-4e78-8998-4487752f29d7 no longer exists - [AfterEach] [sig-storage] Secrets + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 + STEP: creating a service externalname-service with the type=ExternalName in namespace services-3948 07/27/23 02:18:53.443 + STEP: changing the ExternalName service to type=ClusterIP 07/27/23 02:18:53.466 + STEP: creating replication controller externalname-service in namespace services-3948 07/27/23 02:18:53.588 + I0727 02:18:53.607272 20 runners.go:193] Created replication controller with name: externalname-service, namespace: services-3948, replica count: 2 + I0727 02:18:56.658501 20 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jul 27 02:18:56.658: INFO: Creating new exec pod + Jul 27 02:18:56.681: INFO: Waiting up to 5m0s for pod "execpodv8fqn" in namespace "services-3948" to be "running" + Jul 27 02:18:56.690: INFO: Pod "execpodv8fqn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.986911ms + Jul 27 02:18:58.702: INFO: Pod "execpodv8fqn": Phase="Running", Reason="", readiness=true. Elapsed: 2.020676299s + Jul 27 02:18:58.702: INFO: Pod "execpodv8fqn" satisfied condition "running" + Jul 27 02:18:59.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3948 exec execpodv8fqn -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' + Jul 27 02:18:59.930: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Jul 27 02:18:59.930: INFO: stdout: "" + Jul 27 02:18:59.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3948 exec execpodv8fqn -- /bin/sh -x -c nc -v -z -w 2 172.21.119.178 80' + Jul 27 02:19:00.174: INFO: stderr: "+ nc -v -z -w 2 172.21.119.178 80\nConnection to 172.21.119.178 80 port [tcp/http] succeeded!\n" + Jul 27 02:19:00.174: INFO: stdout: "" + Jul 27 02:19:00.174: INFO: Cleaning up the ExternalName to ClusterIP test service + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:41:28.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Secrets + Jul 27 02:19:00.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-5883" for this suite. 
06/12/23 21:41:28.947 + STEP: Destroying namespace "services-3948" for this suite. 07/27/23 02:19:00.269 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSS ------------------------------ -[sig-node] Pods - should get a host IP [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:204 -[BeforeEach] [sig-node] Pods +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 +[BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:41:28.98 -Jun 12 21:41:28.980: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 21:41:28.982 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:41:29.048 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:29.064 -[BeforeEach] [sig-node] Pods +STEP: Creating a kubernetes client 07/27/23 02:19:00.294 +Jul 27 02:19:00.294: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename gc 07/27/23 02:19:00.295 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:00.335 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:00.343 +[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 -[It] should get a host IP [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:204 -STEP: creating pod 06/12/23 21:41:29.079 -Jun 12 21:41:29.105: INFO: Waiting up to 5m0s for pod "pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f" in namespace "pods-711" to be "running and ready" -Jun 12 21:41:29.113: INFO: Pod "pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723787ms -Jun 12 21:41:29.114: INFO: The phase of Pod pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:41:31.122: INFO: Pod "pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017658246s -Jun 12 21:41:31.122: INFO: The phase of Pod pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:41:33.130: INFO: Pod "pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.025531598s -Jun 12 21:41:33.130: INFO: The phase of Pod pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f is Running (Ready = true) -Jun 12 21:41:33.130: INFO: Pod "pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f" satisfied condition "running and ready" -Jun 12 21:41:33.153: INFO: Pod pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f has hostIP: 10.138.75.112 -[AfterEach] [sig-node] Pods +[It] should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 +Jul 27 02:19:00.451: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7673b809-292e-48e0-927b-03596369db06", Controller:(*bool)(0xc002af6ed2), BlockOwnerDeletion:(*bool)(0xc002af6ed3)}} +Jul 27 02:19:00.476: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a3e95f3f-1082-45b4-ac8c-3a405038d7ac", Controller:(*bool)(0xc002af71b2), BlockOwnerDeletion:(*bool)(0xc002af71b3)}} +Jul 27 02:19:00.502: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0f217743-ad66-4d47-95eb-afe19db159ab", Controller:(*bool)(0xc004a156a6), BlockOwnerDeletion:(*bool)(0xc004a156a7)}} +[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 -Jun 12 21:41:33.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Pods +Jul 27 02:19:05.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Pods +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Pods +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 -STEP: Destroying namespace "pods-711" for this suite. 06/12/23 21:41:33.184 +STEP: Destroying namespace "gc-5108" for this suite. 
07/27/23 02:19:05.557 ------------------------------ -• [4.225 seconds] -[sig-node] Pods -test/e2e/common/node/framework.go:23 - should get a host IP [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:204 +• [SLOW TEST] [5.288 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Pods + [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:41:28.98 - Jun 12 21:41:28.980: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 21:41:28.982 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:41:29.048 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:29.064 - [BeforeEach] [sig-node] Pods + STEP: Creating a kubernetes client 07/27/23 02:19:00.294 + Jul 27 02:19:00.294: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename gc 07/27/23 02:19:00.295 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:00.335 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:00.343 + [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 - [It] should get a host IP [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:204 - STEP: creating pod 06/12/23 21:41:29.079 - Jun 12 21:41:29.105: INFO: Waiting up to 5m0s for pod "pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f" in namespace "pods-711" to be "running and ready" - Jun 12 21:41:29.113: INFO: Pod "pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723787ms - Jun 12 21:41:29.114: INFO: The phase of Pod pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:41:31.122: INFO: Pod "pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017658246s - Jun 12 21:41:31.122: INFO: The phase of Pod pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:41:33.130: INFO: Pod "pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.025531598s - Jun 12 21:41:33.130: INFO: The phase of Pod pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f is Running (Ready = true) - Jun 12 21:41:33.130: INFO: Pod "pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f" satisfied condition "running and ready" - Jun 12 21:41:33.153: INFO: Pod pod-hostip-332a94a2-b4ee-49d8-b781-ba86ffb7d35f has hostIP: 10.138.75.112 - [AfterEach] [sig-node] Pods + [It] should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 + Jul 27 02:19:00.451: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7673b809-292e-48e0-927b-03596369db06", Controller:(*bool)(0xc002af6ed2), BlockOwnerDeletion:(*bool)(0xc002af6ed3)}} + Jul 27 02:19:00.476: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a3e95f3f-1082-45b4-ac8c-3a405038d7ac", Controller:(*bool)(0xc002af71b2), BlockOwnerDeletion:(*bool)(0xc002af71b3)}} + Jul 27 02:19:00.502: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0f217743-ad66-4d47-95eb-afe19db159ab", Controller:(*bool)(0xc004a156a6), BlockOwnerDeletion:(*bool)(0xc004a156a7)}} + [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 - Jun 12 21:41:33.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Pods + Jul 27 02:19:05.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 - STEP: Destroying namespace "pods-711" for this suite. 06/12/23 21:41:33.184 + STEP: Destroying namespace "gc-5108" for this suite. 
07/27/23 02:19:05.557 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSS ------------------------------ -[sig-apps] ReplicationController - should get and update a ReplicationController scale [Conformance] - test/e2e/apps/rc.go:402 -[BeforeEach] [sig-apps] ReplicationController +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 +[BeforeEach] [sig-node] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:41:33.213 -Jun 12 21:41:33.213: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replication-controller 06/12/23 21:41:33.215 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:41:33.269 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:33.286 -[BeforeEach] [sig-apps] ReplicationController +STEP: Creating a kubernetes client 07/27/23 02:19:05.582 +Jul 27 02:19:05.583: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 02:19:05.583 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:05.647 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:05.658 +[BeforeEach] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 -[It] should get and update a ReplicationController scale [Conformance] - test/e2e/apps/rc.go:402 -STEP: Creating ReplicationController "e2e-rc-lxbbm" 06/12/23 21:41:33.3 -Jun 12 21:41:33.316: INFO: Get Replication Controller "e2e-rc-lxbbm" to confirm replicas -Jun 12 21:41:34.332: INFO: Get Replication Controller "e2e-rc-lxbbm" to confirm replicas -Jun 12 21:41:34.353: INFO: Found 1 replicas for "e2e-rc-lxbbm" replication controller -STEP: Getting scale subresource for ReplicationController "e2e-rc-lxbbm" 06/12/23 21:41:34.353 -STEP: Updating a scale subresource 06/12/23 21:41:34.366 -STEP: Verifying replicas where modified for replication controller "e2e-rc-lxbbm" 06/12/23 21:41:34.42 -Jun 12 21:41:34.420: INFO: Get Replication Controller "e2e-rc-lxbbm" to confirm replicas -Jun 12 21:41:34.439: INFO: Found 2 replicas for "e2e-rc-lxbbm" replication controller -[AfterEach] [sig-apps] ReplicationController +[It] should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 +STEP: Creating configMap that has name configmap-test-emptyKey-fb656e76-c933-4e9c-bbd0-217137675bdd 07/27/23 02:19:05.667 +[AfterEach] [sig-node] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 21:41:34.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicationController +Jul 27 02:19:05.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-node] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-node] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "replication-controller-2801" for this suite. 06/12/23 21:41:34.459 +STEP: Destroying namespace "configmap-1607" for this suite. 
07/27/23 02:19:05.685 ------------------------------ -• [1.269 seconds] -[sig-apps] ReplicationController -test/e2e/apps/framework.go:23 - should get and update a ReplicationController scale [Conformance] - test/e2e/apps/rc.go:402 +• [0.128 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicationController + [BeforeEach] [sig-node] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:41:33.213 - Jun 12 21:41:33.213: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replication-controller 06/12/23 21:41:33.215 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:41:33.269 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:33.286 - [BeforeEach] [sig-apps] ReplicationController + STEP: Creating a kubernetes client 07/27/23 02:19:05.582 + Jul 27 02:19:05.583: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 02:19:05.583 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:05.647 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:05.658 + [BeforeEach] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 - [It] should get and update a ReplicationController scale [Conformance] - test/e2e/apps/rc.go:402 - STEP: Creating ReplicationController "e2e-rc-lxbbm" 06/12/23 21:41:33.3 - Jun 12 21:41:33.316: INFO: Get Replication Controller "e2e-rc-lxbbm" to confirm replicas - Jun 12 21:41:34.332: INFO: Get Replication Controller "e2e-rc-lxbbm" to confirm replicas - Jun 12 21:41:34.353: INFO: Found 1 replicas for "e2e-rc-lxbbm" replication controller - STEP: Getting scale subresource for ReplicationController "e2e-rc-lxbbm" 06/12/23 21:41:34.353 - STEP: Updating a scale subresource 06/12/23 21:41:34.366 - STEP: Verifying replicas where modified for replication controller "e2e-rc-lxbbm" 06/12/23 21:41:34.42 - Jun 12 21:41:34.420: INFO: Get Replication Controller "e2e-rc-lxbbm" to confirm replicas - Jun 12 21:41:34.439: INFO: Found 2 replicas for "e2e-rc-lxbbm" replication controller - [AfterEach] [sig-apps] ReplicationController + [It] should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 + STEP: Creating configMap that has name configmap-test-emptyKey-fb656e76-c933-4e9c-bbd0-217137675bdd 07/27/23 02:19:05.667 + [AfterEach] [sig-node] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 21:41:34.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicationController + Jul 27 02:19:05.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-node] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-node] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "replication-controller-2801" for this suite. 
06/12/23 21:41:34.459 + STEP: Destroying namespace "configmap-1607" for this suite. 07/27/23 02:19:05.685 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSS ------------------------------ -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] - works for multiple CRDs of same group but different versions [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:309 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 +[BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:41:34.485 -Jun 12 21:41:34.486: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 21:41:34.489 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:41:34.546 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:34.56 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:19:05.711 +Jul 27 02:19:05.711: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 02:19:05.712 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:05.755 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:05.765 +[BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 -[It] works for multiple CRDs of same group but different versions [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:309 -STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation 06/12/23 21:41:34.577 -Jun 12 21:41:34.581: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation 06/12/23 21:42:01.231 -Jun 12 21:42:01.233: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 21:42:08.362: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 +STEP: Creating secret with name secret-test-23883e30-e65e-46de-89f2-6b661cf32418 07/27/23 02:19:05.775 +STEP: Creating a pod to test consume secrets 07/27/23 02:19:05.789 +Jul 27 02:19:05.816: INFO: Waiting up to 5m0s for pod "pod-secrets-8769960b-cf25-4844-82e2-4584c9185162" in namespace "secrets-2483" to be "Succeeded or Failed" +Jul 27 02:19:05.826: INFO: Pod "pod-secrets-8769960b-cf25-4844-82e2-4584c9185162": Phase="Pending", Reason="", readiness=false. Elapsed: 10.411531ms +Jul 27 02:19:07.837: INFO: Pod "pod-secrets-8769960b-cf25-4844-82e2-4584c9185162": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021098098s +Jul 27 02:19:09.837: INFO: Pod "pod-secrets-8769960b-cf25-4844-82e2-4584c9185162": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021175224s +STEP: Saw pod success 07/27/23 02:19:09.837 +Jul 27 02:19:09.837: INFO: Pod "pod-secrets-8769960b-cf25-4844-82e2-4584c9185162" satisfied condition "Succeeded or Failed" +Jul 27 02:19:09.850: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-8769960b-cf25-4844-82e2-4584c9185162 container secret-volume-test: +STEP: delete the pod 07/27/23 02:19:09.891 +Jul 27 02:19:09.911: INFO: Waiting for pod pod-secrets-8769960b-cf25-4844-82e2-4584c9185162 to disappear +Jul 27 02:19:09.919: INFO: Pod pod-secrets-8769960b-cf25-4844-82e2-4584c9185162 no longer exists +[AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 21:42:38.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +Jul 27 02:19:09.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "crd-publish-openapi-5590" for this suite. 06/12/23 21:42:38.106 +STEP: Destroying namespace "secrets-2483" for this suite. 07/27/23 02:19:09.932 ------------------------------ -• [SLOW TEST] [63.642 seconds] -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - works for multiple CRDs of same group but different versions [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:309 +• [4.246 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:41:34.485 - Jun 12 21:41:34.486: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 21:41:34.489 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:41:34.546 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:41:34.56 - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:19:05.711 + Jul 27 02:19:05.711: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 02:19:05.712 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:05.755 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:05.765 + [BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 - [It] works for multiple CRDs of same group but different versions [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:309 - STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation 06/12/23 21:41:34.577 - Jun 12 21:41:34.581: INFO: >>> 
kubeConfig: /tmp/kubeconfig-1249129573 - STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation 06/12/23 21:42:01.231 - Jun 12 21:42:01.233: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 21:42:08.362: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 + STEP: Creating secret with name secret-test-23883e30-e65e-46de-89f2-6b661cf32418 07/27/23 02:19:05.775 + STEP: Creating a pod to test consume secrets 07/27/23 02:19:05.789 + Jul 27 02:19:05.816: INFO: Waiting up to 5m0s for pod "pod-secrets-8769960b-cf25-4844-82e2-4584c9185162" in namespace "secrets-2483" to be "Succeeded or Failed" + Jul 27 02:19:05.826: INFO: Pod "pod-secrets-8769960b-cf25-4844-82e2-4584c9185162": Phase="Pending", Reason="", readiness=false. Elapsed: 10.411531ms + Jul 27 02:19:07.837: INFO: Pod "pod-secrets-8769960b-cf25-4844-82e2-4584c9185162": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021098098s + Jul 27 02:19:09.837: INFO: Pod "pod-secrets-8769960b-cf25-4844-82e2-4584c9185162": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021175224s + STEP: Saw pod success 07/27/23 02:19:09.837 + Jul 27 02:19:09.837: INFO: Pod "pod-secrets-8769960b-cf25-4844-82e2-4584c9185162" satisfied condition "Succeeded or Failed" + Jul 27 02:19:09.850: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-8769960b-cf25-4844-82e2-4584c9185162 container secret-volume-test: + STEP: delete the pod 07/27/23 02:19:09.891 + Jul 27 02:19:09.911: INFO: Waiting for pod pod-secrets-8769960b-cf25-4844-82e2-4584c9185162 to disappear + Jul 27 02:19:09.919: INFO: Pod pod-secrets-8769960b-cf25-4844-82e2-4584c9185162 no longer exists + [AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 21:42:38.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + Jul 27 02:19:09.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "crd-publish-openapi-5590" for this suite. 06/12/23 21:42:38.106 + STEP: Destroying namespace "secrets-2483" for this suite. 
07/27/23 02:19:09.932 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] Deployment - deployment should support rollover [Conformance] - test/e2e/apps/deployment.go:132 -[BeforeEach] [sig-apps] Deployment +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:339 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:42:38.129 -Jun 12 21:42:38.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename deployment 06/12/23 21:42:38.137 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:42:38.197 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:42:38.222 -[BeforeEach] [sig-apps] Deployment +STEP: Creating a kubernetes client 07/27/23 02:19:09.958 +Jul 27 02:19:09.958: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:19:09.959 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:09.999 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:10.008 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 -[It] deployment should support rollover [Conformance] - test/e2e/apps/deployment.go:132 -Jun 12 21:42:38.281: INFO: Pod name rollover-pod: Found 0 pods out of 1 -Jun 12 21:42:43.298: INFO: Pod name rollover-pod: Found 1 pods out of 1 -STEP: ensuring each pod is running 06/12/23 21:42:43.298 -Jun 12 21:42:43.299: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready -Jun 12 21:42:45.308: INFO: Creating deployment "test-rollover-deployment" -Jun 12 21:42:45.338: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations -Jun 12 21:42:47.366: INFO: Check revision of new replica set for deployment "test-rollover-deployment" -Jun 12 21:42:47.391: INFO: Ensure that both replica sets have 1 created replica -Jun 12 21:42:47.411: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update -Jun 12 21:42:47.440: INFO: Updating deployment test-rollover-deployment -Jun 12 21:42:47.440: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller -Jun 12 21:42:49.469: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 -Jun 12 21:42:49.490: INFO: Make sure deployment "test-rollover-deployment" is complete -Jun 12 21:42:49.516: INFO: all replica sets need to contain the pod-template-hash label -Jun 12 21:42:49.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 47, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, 
time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 21:42:51.539: INFO: all replica sets need to contain the pod-template-hash label -Jun 12 21:42:51.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 21:42:53.559: INFO: all replica sets need to contain the pod-template-hash label -Jun 12 21:42:53.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 21:42:55.540: INFO: all replica sets need to contain the pod-template-hash label -Jun 12 21:42:55.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 21:42:57.541: INFO: all replica sets need to contain the pod-template-hash label -Jun 12 21:42:57.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 21:42:59.543: INFO: all replica sets need to contain the pod-template-hash label -Jun 12 21:42:59.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 21:43:01.545: INFO: -Jun 12 21:43:01.545: INFO: Ensure that both old replica sets have no replicas -[AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 -Jun 12 21:43:01.579: INFO: Deployment "test-rollover-deployment": -&Deployment{ObjectMeta:{test-rollover-deployment deployment-3079 a79bbfe9-681f-41cc-8d78-15fb519d8502 113888 2 2023-06-12 21:42:45 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-06-12 21:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:43:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00a135ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-06-12 21:42:45 +0000 UTC,LastTransitionTime:2023-06-12 21:42:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6c6df9974f" has successfully progressed.,LastUpdateTime:2023-06-12 21:43:00 +0000 UTC,LastTransitionTime:2023-06-12 21:42:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} - -Jun 12 21:43:01.590: INFO: New ReplicaSet "test-rollover-deployment-6c6df9974f" of Deployment "test-rollover-deployment": -&ReplicaSet{ObjectMeta:{test-rollover-deployment-6c6df9974f deployment-3079 5f6592b7-6ce0-4b8f-b17d-cd3f8c628f37 113877 2 2023-06-12 21:42:47 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a79bbfe9-681f-41cc-8d78-15fb519d8502 0xc0032483b7 0xc0032483b8}] [] [{kube-controller-manager Update apps/v1 2023-06-12 21:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a79bbfe9-681f-41cc-8d78-15fb519d8502\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:43:00 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6c6df9974f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003248468 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} -Jun 12 21:43:01.590: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": -Jun 12 21:43:01.590: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3079 5d441907-3402-4666-83dc-2d92556eb690 113887 2 2023-06-12 21:42:38 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a79bbfe9-681f-41cc-8d78-15fb519d8502 0xc003248287 0xc003248288}] [] [{e2e.test Update apps/v1 2023-06-12 21:42:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:43:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a79bbfe9-681f-41cc-8d78-15fb519d8502\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:43:00 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003248348 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} -Jun 12 21:43:01.591: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-768dcbc65b deployment-3079 6836a191-6ea8-4233-8860-308774188d2d 113791 2 2023-06-12 21:42:45 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a79bbfe9-681f-41cc-8d78-15fb519d8502 0xc0032484d7 0xc0032484d8}] [] [{kube-controller-manager Update apps/v1 2023-06-12 21:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a79bbfe9-681f-41cc-8d78-15fb519d8502\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:42:47 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 768dcbc65b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003248588 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} -Jun 12 21:43:01.606: INFO: Pod "test-rollover-deployment-6c6df9974f-hlv5s" is available: -&Pod{ObjectMeta:{test-rollover-deployment-6c6df9974f-hlv5s test-rollover-deployment-6c6df9974f- deployment-3079 d9463553-35cc-49e1-bc0b-ef8076cae57b 113827 0 2023-06-12 21:42:47 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[cni.projectcalico.org/containerID:a83ad1d1264885f83695c6b04e2e8ec27e9688ec5d13ae5cc8db42ed9acd7c77 cni.projectcalico.org/podIP:172.30.161.112/32 cni.projectcalico.org/podIPs:172.30.161.112/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.161.112" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-rollover-deployment-6c6df9974f 5f6592b7-6ce0-4b8f-b17d-cd3f8c628f37 0xc002ae6fc7 0xc002ae6fc8}] [] [{kube-controller-manager Update v1 2023-06-12 21:42:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f6592b7-6ce0-4b8f-b17d-cd3f8c628f37\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 21:42:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 21:42:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 21:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.161.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sps72,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sps72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termina
tion-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c55,c10,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-l4fkf,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:42:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:42:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:42:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:42:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:172.30.161.112,StartTime:2023-06-12 21:42:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 21:42:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://13acf4de9ee6858926521de1e212bd347332fd1d9c038a7976e6c6a8449f1837,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.161.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -[AfterEach] [sig-apps] Deployment +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] 
Update Demo + test/e2e/kubectl/kubectl.go:326 +[It] should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:339 +STEP: creating a replication controller 07/27/23 02:19:10.018 +Jul 27 02:19:10.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 create -f -' +Jul 27 02:19:10.621: INFO: stderr: "" +Jul 27 02:19:10.621: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 07/27/23 02:19:10.621 +Jul 27 02:19:10.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jul 27 02:19:10.719: INFO: stderr: "" +Jul 27 02:19:10.719: INFO: stdout: "update-demo-nautilus-6htgr update-demo-nautilus-fh4pb " +Jul 27 02:19:10.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-6htgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:19:10.802: INFO: stderr: "" +Jul 27 02:19:10.802: INFO: stdout: "" +Jul 27 02:19:10.802: INFO: update-demo-nautilus-6htgr is created but not running +Jul 27 02:19:15.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jul 27 02:19:15.884: INFO: stderr: "" +Jul 27 02:19:15.884: INFO: stdout: "update-demo-nautilus-6htgr update-demo-nautilus-fh4pb " +Jul 27 02:19:15.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-6htgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:19:15.972: INFO: stderr: "" +Jul 27 02:19:15.972: INFO: stdout: "" +Jul 27 02:19:15.972: INFO: update-demo-nautilus-6htgr is created but not running +Jul 27 02:19:20.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jul 27 02:19:21.207: INFO: stderr: "" +Jul 27 02:19:21.207: INFO: stdout: "update-demo-nautilus-6htgr update-demo-nautilus-fh4pb " +Jul 27 02:19:21.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-6htgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:19:21.337: INFO: stderr: "" +Jul 27 02:19:21.337: INFO: stdout: "" +Jul 27 02:19:21.337: INFO: update-demo-nautilus-6htgr is created but not running +Jul 27 02:19:26.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jul 27 02:19:26.425: INFO: stderr: "" +Jul 27 02:19:26.425: INFO: stdout: "update-demo-nautilus-6htgr update-demo-nautilus-fh4pb " +Jul 27 02:19:26.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-6htgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:19:26.513: INFO: stderr: "" +Jul 27 02:19:26.513: INFO: stdout: "true" +Jul 27 02:19:26.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-6htgr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jul 27 02:19:26.588: INFO: stderr: "" +Jul 27 02:19:26.588: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jul 27 02:19:26.588: INFO: validating pod update-demo-nautilus-6htgr +Jul 27 02:19:26.607: INFO: got data: { + "image": "nautilus.jpg" +} + +Jul 27 02:19:26.607: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jul 27 02:19:26.607: INFO: update-demo-nautilus-6htgr is verified up and running +Jul 27 02:19:26.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-fh4pb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:19:26.688: INFO: stderr: "" +Jul 27 02:19:26.688: INFO: stdout: "true" +Jul 27 02:19:26.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-fh4pb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jul 27 02:19:26.763: INFO: stderr: "" +Jul 27 02:19:26.763: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jul 27 02:19:26.763: INFO: validating pod update-demo-nautilus-fh4pb +Jul 27 02:19:26.780: INFO: got data: { + "image": "nautilus.jpg" +} + +Jul 27 02:19:26.780: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jul 27 02:19:26.780: INFO: update-demo-nautilus-fh4pb is verified up and running +STEP: using delete to clean up resources 07/27/23 02:19:26.78 +Jul 27 02:19:26.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 delete --grace-period=0 --force -f -' +Jul 27 02:19:26.864: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jul 27 02:19:26.864: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jul 27 02:19:26.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get rc,svc -l name=update-demo --no-headers' +Jul 27 02:19:26.974: INFO: stderr: "No resources found in kubectl-2510 namespace.\n" +Jul 27 02:19:26.974: INFO: stdout: "" +Jul 27 02:19:26.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jul 27 02:19:27.065: INFO: stderr: "" +Jul 27 02:19:27.065: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jul 27 02:19:27.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-2510" for this suite. 07/27/23 02:19:27.08 +------------------------------ +• [SLOW TEST] [17.145 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Update Demo + test/e2e/kubectl/kubectl.go:324 + should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:339 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:19:09.958 + Jul 27 02:19:09.958: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:19:09.959 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:09.999 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:10.008 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:326 + [It] should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:339 + STEP: creating a replication controller 07/27/23 02:19:10.018 + Jul 27 02:19:10.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 create -f -' + Jul 27 02:19:10.621: INFO: stderr: "" + Jul 27 02:19:10.621: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" + STEP: waiting for all containers in name=update-demo pods to come up. 07/27/23 02:19:10.621 + Jul 27 02:19:10.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jul 27 02:19:10.719: INFO: stderr: "" + Jul 27 02:19:10.719: INFO: stdout: "update-demo-nautilus-6htgr update-demo-nautilus-fh4pb " + Jul 27 02:19:10.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-6htgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:19:10.802: INFO: stderr: "" + Jul 27 02:19:10.802: INFO: stdout: "" + Jul 27 02:19:10.802: INFO: update-demo-nautilus-6htgr is created but not running + Jul 27 02:19:15.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jul 27 02:19:15.884: INFO: stderr: "" + Jul 27 02:19:15.884: INFO: stdout: "update-demo-nautilus-6htgr update-demo-nautilus-fh4pb " + Jul 27 02:19:15.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-6htgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:19:15.972: INFO: stderr: "" + Jul 27 02:19:15.972: INFO: stdout: "" + Jul 27 02:19:15.972: INFO: update-demo-nautilus-6htgr is created but not running + Jul 27 02:19:20.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jul 27 02:19:21.207: INFO: stderr: "" + Jul 27 02:19:21.207: INFO: stdout: "update-demo-nautilus-6htgr update-demo-nautilus-fh4pb " + Jul 27 02:19:21.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-6htgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:19:21.337: INFO: stderr: "" + Jul 27 02:19:21.337: INFO: stdout: "" + Jul 27 02:19:21.337: INFO: update-demo-nautilus-6htgr is created but not running + Jul 27 02:19:26.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jul 27 02:19:26.425: INFO: stderr: "" + Jul 27 02:19:26.425: INFO: stdout: "update-demo-nautilus-6htgr update-demo-nautilus-fh4pb " + Jul 27 02:19:26.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-6htgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:19:26.513: INFO: stderr: "" + Jul 27 02:19:26.513: INFO: stdout: "true" + Jul 27 02:19:26.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-6htgr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Jul 27 02:19:26.588: INFO: stderr: "" + Jul 27 02:19:26.588: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Jul 27 02:19:26.588: INFO: validating pod update-demo-nautilus-6htgr + Jul 27 02:19:26.607: INFO: got data: { + "image": "nautilus.jpg" + } + + Jul 27 02:19:26.607: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+ Jul 27 02:19:26.607: INFO: update-demo-nautilus-6htgr is verified up and running + Jul 27 02:19:26.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-fh4pb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:19:26.688: INFO: stderr: "" + Jul 27 02:19:26.688: INFO: stdout: "true" + Jul 27 02:19:26.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods update-demo-nautilus-fh4pb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Jul 27 02:19:26.763: INFO: stderr: "" + Jul 27 02:19:26.763: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Jul 27 02:19:26.763: INFO: validating pod update-demo-nautilus-fh4pb + Jul 27 02:19:26.780: INFO: got data: { + "image": "nautilus.jpg" + } + + Jul 27 02:19:26.780: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Jul 27 02:19:26.780: INFO: update-demo-nautilus-fh4pb is verified up and running + STEP: using delete to clean up resources 07/27/23 02:19:26.78 + Jul 27 02:19:26.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 delete --grace-period=0 --force -f -' + Jul 27 02:19:26.864: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jul 27 02:19:26.864: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" + Jul 27 02:19:26.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get rc,svc -l name=update-demo --no-headers' + Jul 27 02:19:26.974: INFO: stderr: "No resources found in kubectl-2510 namespace.\n" + Jul 27 02:19:26.974: INFO: stdout: "" + Jul 27 02:19:26.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2510 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' + Jul 27 02:19:27.065: INFO: stderr: "" + Jul 27 02:19:27.065: INFO: stdout: "" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jul 27 02:19:27.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-2510" for this suite. 
07/27/23 02:19:27.08 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:19:27.104 +Jul 27 02:19:27.104: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename deployment 07/27/23 02:19:27.105 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:27.144 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:27.152 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 +Jul 27 02:19:27.163: INFO: Creating deployment "webserver-deployment" +W0727 02:19:27.182378 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:19:27.182: INFO: Waiting for observed generation 1 +Jul 27 02:19:29.203: INFO: Waiting for all required pods to come up +Jul 27 02:19:29.215: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running 07/27/23 02:19:29.215 +Jul 27 02:19:29.215: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-ptbft" in namespace "deployment-6737" to be "running" +Jul 27 02:19:29.215: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-v74wd" in namespace "deployment-6737" to be "running" +Jul 27 02:19:29.215: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-r2b49" in namespace "deployment-6737" to be "running" +Jul 27 02:19:29.215: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-b8dvf" in namespace "deployment-6737" to be "running" +Jul 27 02:19:29.224: INFO: Pod "webserver-deployment-7f5969cbc7-v74wd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.244848ms +Jul 27 02:19:29.226: INFO: Pod "webserver-deployment-7f5969cbc7-r2b49": Phase="Pending", Reason="", readiness=false. Elapsed: 11.534585ms +Jul 27 02:19:29.227: INFO: Pod "webserver-deployment-7f5969cbc7-b8dvf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.377731ms +Jul 27 02:19:29.228: INFO: Pod "webserver-deployment-7f5969cbc7-ptbft": Phase="Pending", Reason="", readiness=false. Elapsed: 12.977177ms +Jul 27 02:19:31.234: INFO: Pod "webserver-deployment-7f5969cbc7-v74wd": Phase="Running", Reason="", readiness=true. Elapsed: 2.018896s +Jul 27 02:19:31.234: INFO: Pod "webserver-deployment-7f5969cbc7-v74wd" satisfied condition "running" +Jul 27 02:19:31.235: INFO: Pod "webserver-deployment-7f5969cbc7-b8dvf": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.020523437s +Jul 27 02:19:31.236: INFO: Pod "webserver-deployment-7f5969cbc7-b8dvf" satisfied condition "running" +Jul 27 02:19:31.237: INFO: Pod "webserver-deployment-7f5969cbc7-ptbft": Phase="Running", Reason="", readiness=true. Elapsed: 2.022519336s +Jul 27 02:19:31.237: INFO: Pod "webserver-deployment-7f5969cbc7-ptbft" satisfied condition "running" +Jul 27 02:19:31.238: INFO: Pod "webserver-deployment-7f5969cbc7-r2b49": Phase="Running", Reason="", readiness=true. Elapsed: 2.022587826s +Jul 27 02:19:31.238: INFO: Pod "webserver-deployment-7f5969cbc7-r2b49" satisfied condition "running" +Jul 27 02:19:31.238: INFO: Waiting for deployment "webserver-deployment" to complete +Jul 27 02:19:31.253: INFO: Updating deployment "webserver-deployment" with a non-existent image +Jul 27 02:19:31.322: INFO: Updating deployment webserver-deployment +Jul 27 02:19:31.322: INFO: Waiting for observed generation 2 +Jul 27 02:19:33.352: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Jul 27 02:19:33.360: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Jul 27 02:19:33.375: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Jul 27 02:19:33.401: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Jul 27 02:19:33.401: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Jul 27 02:19:33.410: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Jul 27 02:19:33.429: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Jul 27 02:19:33.429: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Jul 27 02:19:33.451: INFO: Updating deployment webserver-deployment +Jul 27 02:19:33.451: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Jul 27 02:19:33.469: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Jul 27 02:19:33.477: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jul 27 02:19:33.505: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-6737 d7214936-0109-418a-aacc-71167ce19b7b 99191 3 2023-07-27 02:19:27 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002132218 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-d9f79cb5" is progressing.,LastUpdateTime:2023-07-27 02:19:31 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-07-27 02:19:33 +0000 UTC,LastTransitionTime:2023-07-27 02:19:33 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Jul 27 02:19:33.525: INFO: New ReplicaSet "webserver-deployment-d9f79cb5" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-d9f79cb5 deployment-6737 1dee3e4f-9c0a-40d3-8991-747b68431017 99188 3 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment d7214936-0109-418a-aacc-71167ce19b7b 0xc0045bb0c7 0xc0045bb0c8}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7214936-0109-418a-aacc-71167ce19b7b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: d9f79cb5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045bb168 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jul 27 02:19:33.525: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Jul 27 02:19:33.525: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-7f5969cbc7 deployment-6737 e327a1a3-e900-4087-a5c8-9c543a43047b 99187 3 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment d7214936-0109-418a-aacc-71167ce19b7b 0xc0045bafd7 0xc0045bafd8}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7214936-0109-418a-aacc-71167ce19b7b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} 
}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045bb068 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Jul 27 02:19:33.550: INFO: Pod "webserver-deployment-7f5969cbc7-6sp2h" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-6sp2h webserver-deployment-7f5969cbc7- deployment-6737 cc0ce273-efe3-4097-af1b-dfc7b7592aba 99201 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bb677 0xc0045bb678}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xx75q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xx75q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityCla
ssName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.550: INFO: Pod "webserver-deployment-7f5969cbc7-8bl84" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-8bl84 webserver-deployment-7f5969cbc7- deployment-6737 6f7f9b00-79b0-4af0-99a6-929dd9da3373 99020 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:48d2cf0be056c5cab4c1cfa4d7dd52708258280119a6bb9fca90f057b2104847 cni.projectcalico.org/podIP:172.17.230.181/32 cni.projectcalico.org/podIPs:172.17.230.181/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.230.181" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bb7f7 0xc0045bb7f8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.230.181\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4bcsw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4bcsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.18,PodIP:172.17.230.181,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://7bb1ae37af996ac879150795cca366236466099fa9b65a0e49e1769b9c78fda5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.230.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.550: INFO: Pod "webserver-deployment-7f5969cbc7-9wsb4" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-9wsb4 webserver-deployment-7f5969cbc7- deployment-6737 2cb54a9d-66e9-4e01-b570-d5a9f51fbd8f 99014 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:765d688a17d789d8ee542cadd794a5713caddc8622f2d463b617ac93824f0fa0 cni.projectcalico.org/podIP:172.17.230.180/32 cni.projectcalico.org/podIPs:172.17.230.180/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.230.180" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bba67 0xc0045bba68}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.230.180\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sr2mn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sr2mn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User
:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.18,PodIP:172.17.230.180,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://49e5c09ab57556461061e63658dbd46579eda732aa1c29a8e2e44e5ac3e867c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.230.180,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.551: INFO: Pod "webserver-deployment-7f5969cbc7-czhw2" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-czhw2 webserver-deployment-7f5969cbc7- deployment-6737 c759e7df-843f-4efb-a4b7-1ed616cd6441 99203 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bbce7 0xc0045bbce8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wqj92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqj92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChange
Policy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.551: INFO: Pod "webserver-deployment-7f5969cbc7-hmhld" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-hmhld webserver-deployment-7f5969cbc7- deployment-6737 7ba0fac5-e4c7-499a-aa9c-2f2c1e217593 99204 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bbe67 0xc0045bbe68}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hfz6b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hfz6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityCla
ssName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.551: INFO: Pod "webserver-deployment-7f5969cbc7-j29zn" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-j29zn webserver-deployment-7f5969cbc7- deployment-6737 fc1ac968-7715-4421-93c9-557e945fa1da 99195 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bbff7 0xc0045bbff8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dj7qc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dj7qc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.551: INFO: Pod "webserver-deployment-7f5969cbc7-mwjm9" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-mwjm9 webserver-deployment-7f5969cbc7- deployment-6737 ae297d5b-cc8b-4527-9629-d18e909112d5 98981 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:94830f7f64ae45efe447269f95ad880fa5bc17739e72717147b3904cea7b9f2f cni.projectcalico.org/podIP:172.17.218.28/32 cni.projectcalico.org/podIPs:172.17.218.28/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.28" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0006d83e7 0xc0006d83e8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.218.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-np57w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-np57w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:172.17.218.28,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://0e18bd50b8f5dffb716eb8357d15e033fa59989e922a2dcea713a3b5e65d3ab5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.218.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.552: INFO: Pod "webserver-deployment-7f5969cbc7-nxnkf" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-nxnkf webserver-deployment-7f5969cbc7- deployment-6737 94d739c4-5430-47f9-91ef-8d54f59b435c 98985 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:7d29999f971e27469d9579f553bffb895faf6665bc1a520583a335ac2219c05b cni.projectcalico.org/podIP:172.17.225.2/32 cni.projectcalico.org/podIPs:172.17.225.2/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.2" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0006d8bb7 0xc0006d8bb8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-07-27 02:19:28 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.2\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qt8q2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qt8q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,T
ype:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.2,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://f34b42cf174785d2180dc27219bc2c051982b6b16337d37db7b68022e118151b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.552: INFO: Pod "webserver-deployment-7f5969cbc7-ptbft" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-ptbft webserver-deployment-7f5969cbc7- deployment-6737 a4f80880-47cd-45dd-a79b-7d84fb99795d 99038 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:12b06d1b4c0a45ec95e8f467ef1fd8d6b821452188f248a50612260a9bcd23a6 cni.projectcalico.org/podIP:172.17.225.27/32 cni.projectcalico.org/podIPs:172.17.225.27/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.27" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc000dd35e7 0xc000dd35e8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nwc2b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwc2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminatio
n-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.27,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://809eb9a5b07cae5c61319977e06149fffbf4198d85c1c1ff48106be5cb178e96,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.552: INFO: Pod "webserver-deployment-7f5969cbc7-qgkn9" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-qgkn9 webserver-deployment-7f5969cbc7- 
deployment-6737 7d83d7e0-72d8-4299-bef2-5c6c7f90bf79 99017 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:e9ed1ddc263f260b294eba27614dc2ebbdc140bcf2338c52296b7911f1822032 cni.projectcalico.org/podIP:172.17.230.178/32 cni.projectcalico.org/podIPs:172.17.230.178/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.230.178" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c42237 0xc004c42238}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.230.178\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2w7dx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2w7dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.18,PodIP:172.17.230.178,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://063c33f968ee444f0600a25e9e6afc771bd9ed77bd762f9649f3f2f7526094df,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.230.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.552: INFO: Pod "webserver-deployment-7f5969cbc7-qwqd6" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-qwqd6 webserver-deployment-7f5969cbc7- deployment-6737 357bb273-b8d5-4452-9ea3-6e5e4320885e 99206 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c424a7 0xc004c424a8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w8xr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w8xr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.553: INFO: Pod "webserver-deployment-7f5969cbc7-r2b49" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-r2b49 webserver-deployment-7f5969cbc7- deployment-6737 03ef15f8-6c02-440b-909b-933e48061568 99031 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:ba2476c38b296ddbfbaa002381e5ed594d3972bc84dcb1d34ac5797aa191b7fc cni.projectcalico.org/podIP:172.17.218.50/32 cni.projectcalico.org/podIPs:172.17.218.50/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.50" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c42677 0xc004c42678}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.218.50\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k9brb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k9brb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:172.17.218.50,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://e3d4df458ee08e0a5bb8163e753fac28390b178a80fb480cab97491f9464a030,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.218.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.553: INFO: Pod "webserver-deployment-7f5969cbc7-v74wd" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-v74wd webserver-deployment-7f5969cbc7- deployment-6737 9fc88f52-a7d4-4569-bd8f-43f375e0eddb 99028 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:f34cb32af553193b2d3ff0cf72d1b9c136f5064a3647c433a63ac5a384183c99 cni.projectcalico.org/podIP:172.17.218.26/32 cni.projectcalico.org/podIPs:172.17.218.26/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.26" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c42907 0xc004c42908}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.218.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-glhqr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glhqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:
,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:172.17.218.26,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://6b94b9b7e9e7db23b2b0518a2d6e386ef5a99cfb5c00d7a80dc8b7c1ddd728db,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.218.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.553: INFO: Pod "webserver-deployment-7f5969cbc7-vskt4" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-vskt4 webserver-deployment-7f5969cbc7- deployment-6737 fd361538-6d9f-49d2-899e-4b23cd7e3ed0 99202 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c42b77 0xc004c42b78}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c5znx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c5znx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChange
Policy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.553: INFO: Pod "webserver-deployment-7f5969cbc7-wdtvg" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-wdtvg webserver-deployment-7f5969cbc7- deployment-6737 bd83e868-fc5c-4e75-a6ca-79840a05929e 99205 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c42cf7 0xc004c42cf8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v4lkd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4lkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.553: INFO: Pod "webserver-deployment-d9f79cb5-7ttqd" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-7ttqd webserver-deployment-d9f79cb5- deployment-6737 27fef42d-c34e-482b-95a8-9e7096f967e5 99128 0 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:9eae1cd3e23a09cfd86ea2742327a2e93249f56f897f0450b1538cc12a4325f8 cni.projectcalico.org/podIP:172.17.230.167/32 cni.projectcalico.org/podIPs:172.17.230.167/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.230.167" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c42ea7 0xc004c42ea8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pp27m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp27m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priori
ty:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.18,PodIP:,StartTime:2023-07-27 02:19:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.554: INFO: Pod "webserver-deployment-d9f79cb5-92zkn" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-92zkn webserver-deployment-d9f79cb5- deployment-6737 39d6ff91-1a28-4335-915a-6fcc45fb527e 99200 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43117 0xc004c43118}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5b24b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b24b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,
DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.554: INFO: Pod "webserver-deployment-d9f79cb5-b9mtt" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-b9mtt webserver-deployment-d9f79cb5- deployment-6737 9a8521c7-e59a-406c-afec-c6d0fa82801a 99130 0 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:2413f5816681ec00c1e38434977506c2375e00629fc450ca322876a2774531ee cni.projectcalico.org/podIP:172.17.225.29/32 cni.projectcalico.org/podIPs:172.17.225.29/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.29" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c432f7 0xc004c432f8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hn7jq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hn7jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priori
ty:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:,StartTime:2023-07-27 02:19:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.554: INFO: Pod "webserver-deployment-d9f79cb5-bs6xx" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-bs6xx webserver-deployment-d9f79cb5- deployment-6737 511e9844-44e5-43d2-b991-adb76583feb9 99197 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43567 0xc004c43568}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rk76c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rk76c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.555: INFO: Pod "webserver-deployment-d9f79cb5-mb7rs" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-mb7rs webserver-deployment-d9f79cb5- deployment-6737 4967ba40-f57f-4bb3-93f7-fe52da65ef8a 99212 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43937 0xc004c43938}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jhcqw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMoun
t{Name:kube-api-access-jhcqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.555: INFO: Pod "webserver-deployment-d9f79cb5-pv6qh" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-pv6qh webserver-deployment-d9f79cb5- deployment-6737 3a2f3c99-c64d-4a68-9fa5-dcde71ff85e8 99149 0 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:50d5d367cf37ed1ac21640320cb6e81c39f7a4bcc06b5705fa5812ed3258fc30 cni.projectcalico.org/podIP:172.17.225.34/32 cni.projectcalico.org/podIPs:172.17.225.34/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.34" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43ae7 0xc004c43ae8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mpcbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mpcbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Cap
abilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:,StartTime:2023-07-27 02:19:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.555: INFO: Pod "webserver-deployment-d9f79cb5-px66t" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-px66t webserver-deployment-d9f79cb5- deployment-6737 717fbaef-2361-4067-a9e5-028075ad66cb 99209 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] 
[{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43d57 0xc004c43d58}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-458t5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-458t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,R
ole:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.555: INFO: Pod "webserver-deployment-d9f79cb5-rrblc" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-rrblc webserver-deployment-d9f79cb5- deployment-6737 4dc68ef3-e390-4e0c-88bd-8df0ab5de7fe 99139 0 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:f534c745c0ec8de9807d43b16fcf87a62fcbc9fc8ab715bccb0f21524a028b6f cni.projectcalico.org/podIP:172.17.218.40/32 cni.projectcalico.org/podIPs:172.17.218.40/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.40" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43f07 0xc004c43f08}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zr2wm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zr2wm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Toleration
s:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:,StartTime:2023-07-27 02:19:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.556: INFO: Pod "webserver-deployment-d9f79cb5-rrg87" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-rrg87 webserver-deployment-d9f79cb5- deployment-6737 6461fe3a-f524-42d9-b818-be5644775903 99164 0 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:dd2700e013e6b7d29ebe53bcef56b3ba7bcd8a8f241a8d2354a86ed6ab996e0e cni.projectcalico.org/podIP:172.17.218.49/32 cni.projectcalico.org/podIPs:172.17.218.49/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.49" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc0055bc667 0xc0055bc668}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cjqf9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjqf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.
17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:,StartTime:2023-07-27 02:19:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.556: INFO: Pod "webserver-deployment-d9f79cb5-vbz55" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-vbz55 webserver-deployment-d9f79cb5- deployment-6737 35f2a78a-5787-499c-8f81-1cec463c1407 99198 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc0055bce67 0xc0055bce68}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2h22v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2h22v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},I
magePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:19:33.556: INFO: Pod "webserver-deployment-d9f79cb5-x8lvl" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-x8lvl webserver-deployment-d9f79cb5- deployment-6737 f2504dc9-01a4-4d8b-8f03-cd6ab6171e3e 99211 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc0055bcff7 0xc0055bcff8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wplhd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wplhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 -Jun 12 21:43:01.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:19:33.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 -STEP: Destroying namespace "deployment-3079" for this suite. 06/12/23 21:43:01.625 +STEP: Destroying namespace "deployment-6737" for this suite. 07/27/23 02:19:33.578 ------------------------------ -• [SLOW TEST] [23.520 seconds] +• [SLOW TEST] [6.509 seconds] [sig-apps] Deployment test/e2e/apps/framework.go:23 - deployment should support rollover [Conformance] - test/e2e/apps/deployment.go:132 + deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:42:38.129 - Jun 12 21:42:38.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename deployment 06/12/23 21:42:38.137 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:42:38.197 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:42:38.222 + STEP: Creating a kubernetes client 07/27/23 02:19:27.104 + Jul 27 02:19:27.104: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename deployment 07/27/23 02:19:27.105 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:27.144 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:27.152 [BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Deployment test/e2e/apps/deployment.go:91 - [It] deployment should support rollover [Conformance] - test/e2e/apps/deployment.go:132 - Jun 12 21:42:38.281: INFO: Pod name rollover-pod: Found 0 pods out of 1 - Jun 12 21:42:43.298: INFO: Pod name rollover-pod: Found 1 pods out of 1 - STEP: ensuring each pod is running 06/12/23 21:42:43.298 - Jun 12 21:42:43.299: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready - Jun 12 21:42:45.308: INFO: Creating deployment "test-rollover-deployment" - Jun 12 21:42:45.338: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations - Jun 12 21:42:47.366: INFO: Check revision of new replica set for deployment "test-rollover-deployment" - Jun 12 21:42:47.391: INFO: Ensure that both replica sets have 1 created replica - Jun 12 21:42:47.411: INFO: Rollover old replica sets for 
deployment "test-rollover-deployment" with new image update - Jun 12 21:42:47.440: INFO: Updating deployment test-rollover-deployment - Jun 12 21:42:47.440: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller - Jun 12 21:42:49.469: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 - Jun 12 21:42:49.490: INFO: Make sure deployment "test-rollover-deployment" is complete - Jun 12 21:42:49.516: INFO: all replica sets need to contain the pod-template-hash label - Jun 12 21:42:49.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 47, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 21:42:51.539: INFO: all replica sets need to contain the pod-template-hash label - Jun 12 21:42:51.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 21:42:53.559: INFO: all replica sets need to contain the pod-template-hash label - Jun 12 21:42:53.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 21:42:55.540: INFO: all replica sets need to contain the pod-template-hash label - Jun 12 21:42:55.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 21:42:57.541: INFO: all replica sets need to contain the pod-template-hash label - Jun 12 21:42:57.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 21:42:59.543: INFO: all replica sets need to contain the pod-template-hash label - Jun 12 21:42:59.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 42, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 42, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 21:43:01.545: INFO: - Jun 12 21:43:01.545: INFO: Ensure that both old replica sets have no replicas + [It] deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 + Jul 27 02:19:27.163: INFO: Creating deployment "webserver-deployment" + W0727 02:19:27.182378 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:19:27.182: INFO: Waiting for observed generation 1 + Jul 27 02:19:29.203: INFO: Waiting for all required pods to come up + Jul 27 02:19:29.215: INFO: Pod name httpd: Found 10 
pods out of 10 + STEP: ensuring each pod is running 07/27/23 02:19:29.215 + Jul 27 02:19:29.215: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-ptbft" in namespace "deployment-6737" to be "running" + Jul 27 02:19:29.215: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-v74wd" in namespace "deployment-6737" to be "running" + Jul 27 02:19:29.215: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-r2b49" in namespace "deployment-6737" to be "running" + Jul 27 02:19:29.215: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-b8dvf" in namespace "deployment-6737" to be "running" + Jul 27 02:19:29.224: INFO: Pod "webserver-deployment-7f5969cbc7-v74wd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.244848ms + Jul 27 02:19:29.226: INFO: Pod "webserver-deployment-7f5969cbc7-r2b49": Phase="Pending", Reason="", readiness=false. Elapsed: 11.534585ms + Jul 27 02:19:29.227: INFO: Pod "webserver-deployment-7f5969cbc7-b8dvf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.377731ms + Jul 27 02:19:29.228: INFO: Pod "webserver-deployment-7f5969cbc7-ptbft": Phase="Pending", Reason="", readiness=false. Elapsed: 12.977177ms + Jul 27 02:19:31.234: INFO: Pod "webserver-deployment-7f5969cbc7-v74wd": Phase="Running", Reason="", readiness=true. Elapsed: 2.018896s + Jul 27 02:19:31.234: INFO: Pod "webserver-deployment-7f5969cbc7-v74wd" satisfied condition "running" + Jul 27 02:19:31.235: INFO: Pod "webserver-deployment-7f5969cbc7-b8dvf": Phase="Running", Reason="", readiness=true. Elapsed: 2.020523437s + Jul 27 02:19:31.236: INFO: Pod "webserver-deployment-7f5969cbc7-b8dvf" satisfied condition "running" + Jul 27 02:19:31.237: INFO: Pod "webserver-deployment-7f5969cbc7-ptbft": Phase="Running", Reason="", readiness=true. Elapsed: 2.022519336s + Jul 27 02:19:31.237: INFO: Pod "webserver-deployment-7f5969cbc7-ptbft" satisfied condition "running" + Jul 27 02:19:31.238: INFO: Pod "webserver-deployment-7f5969cbc7-r2b49": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.022587826s + Jul 27 02:19:31.238: INFO: Pod "webserver-deployment-7f5969cbc7-r2b49" satisfied condition "running" + Jul 27 02:19:31.238: INFO: Waiting for deployment "webserver-deployment" to complete + Jul 27 02:19:31.253: INFO: Updating deployment "webserver-deployment" with a non-existent image + Jul 27 02:19:31.322: INFO: Updating deployment webserver-deployment + Jul 27 02:19:31.322: INFO: Waiting for observed generation 2 + Jul 27 02:19:33.352: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 + Jul 27 02:19:33.360: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 + Jul 27 02:19:33.375: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas + Jul 27 02:19:33.401: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 + Jul 27 02:19:33.401: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 + Jul 27 02:19:33.410: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas + Jul 27 02:19:33.429: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas + Jul 27 02:19:33.429: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 + Jul 27 02:19:33.451: INFO: Updating deployment webserver-deployment + Jul 27 02:19:33.451: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas + Jul 27 02:19:33.469: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 + Jul 27 02:19:33.477: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment test/e2e/apps/deployment.go:84 - Jun 12 21:43:01.579: INFO: Deployment "test-rollover-deployment": - &Deployment{ObjectMeta:{test-rollover-deployment deployment-3079 a79bbfe9-681f-41cc-8d78-15fb519d8502 113888 2 2023-06-12 21:42:45 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-06-12 21:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:43:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00a135ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-06-12 21:42:45 +0000 UTC,LastTransitionTime:2023-06-12 21:42:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6c6df9974f" has successfully progressed.,LastUpdateTime:2023-06-12 21:43:00 +0000 UTC,LastTransitionTime:2023-06-12 21:42:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} - - Jun 12 21:43:01.590: INFO: New ReplicaSet "test-rollover-deployment-6c6df9974f" of Deployment "test-rollover-deployment": - &ReplicaSet{ObjectMeta:{test-rollover-deployment-6c6df9974f deployment-3079 5f6592b7-6ce0-4b8f-b17d-cd3f8c628f37 113877 2 2023-06-12 21:42:47 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a79bbfe9-681f-41cc-8d78-15fb519d8502 0xc0032483b7 0xc0032483b8}] [] [{kube-controller-manager Update apps/v1 2023-06-12 21:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a79bbfe9-681f-41cc-8d78-15fb519d8502\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:43:00 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
6c6df9974f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003248468 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} - Jun 12 21:43:01.590: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": - Jun 12 21:43:01.590: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3079 5d441907-3402-4666-83dc-2d92556eb690 113887 2 2023-06-12 21:42:38 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a79bbfe9-681f-41cc-8d78-15fb519d8502 0xc003248287 0xc003248288}] [] [{e2e.test Update apps/v1 2023-06-12 21:42:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:43:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a79bbfe9-681f-41cc-8d78-15fb519d8502\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:43:00 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003248348 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} - Jun 12 21:43:01.591: INFO: 
&ReplicaSet{ObjectMeta:{test-rollover-deployment-768dcbc65b deployment-3079 6836a191-6ea8-4233-8860-308774188d2d 113791 2 2023-06-12 21:42:45 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a79bbfe9-681f-41cc-8d78-15fb519d8502 0xc0032484d7 0xc0032484d8}] [] [{kube-controller-manager Update apps/v1 2023-06-12 21:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a79bbfe9-681f-41cc-8d78-15fb519d8502\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 21:42:47 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 768dcbc65b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003248588 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} - Jun 12 21:43:01.606: INFO: Pod "test-rollover-deployment-6c6df9974f-hlv5s" is available: - &Pod{ObjectMeta:{test-rollover-deployment-6c6df9974f-hlv5s test-rollover-deployment-6c6df9974f- deployment-3079 d9463553-35cc-49e1-bc0b-ef8076cae57b 113827 0 2023-06-12 21:42:47 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[cni.projectcalico.org/containerID:a83ad1d1264885f83695c6b04e2e8ec27e9688ec5d13ae5cc8db42ed9acd7c77 cni.projectcalico.org/podIP:172.30.161.112/32 cni.projectcalico.org/podIPs:172.30.161.112/32 k8s.v1.cni.cncf.io/network-status:[{ + Jul 27 02:19:33.505: INFO: Deployment "webserver-deployment": + &Deployment{ObjectMeta:{webserver-deployment deployment-6737 d7214936-0109-418a-aacc-71167ce19b7b 99191 3 2023-07-27 02:19:27 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-07-27 
02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002132218 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-d9f79cb5" is progressing.,LastUpdateTime:2023-07-27 02:19:31 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-07-27 02:19:33 +0000 UTC,LastTransitionTime:2023-07-27 02:19:33 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + + Jul 27 02:19:33.525: INFO: New ReplicaSet "webserver-deployment-d9f79cb5" of Deployment "webserver-deployment": + &ReplicaSet{ObjectMeta:{webserver-deployment-d9f79cb5 deployment-6737 1dee3e4f-9c0a-40d3-8991-747b68431017 99188 3 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 
d7214936-0109-418a-aacc-71167ce19b7b 0xc0045bb0c7 0xc0045bb0c8}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7214936-0109-418a-aacc-71167ce19b7b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: d9f79cb5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045bb168 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jul 27 02:19:33.525: INFO: All old ReplicaSets of Deployment "webserver-deployment": + Jul 27 02:19:33.525: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-7f5969cbc7 deployment-6737 e327a1a3-e900-4087-a5c8-9c543a43047b 99187 3 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment d7214936-0109-418a-aacc-71167ce19b7b 0xc0045bafd7 0xc0045bafd8}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7214936-0109-418a-aacc-71167ce19b7b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045bb068 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} + Jul 27 02:19:33.550: INFO: Pod "webserver-deployment-7f5969cbc7-6sp2h" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-6sp2h webserver-deployment-7f5969cbc7- deployment-6737 cc0ce273-efe3-4097-af1b-dfc7b7592aba 99201 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bb677 0xc0045bb678}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xx75q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xx75q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityCla
ssName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.550: INFO: Pod "webserver-deployment-7f5969cbc7-8bl84" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-8bl84 webserver-deployment-7f5969cbc7- deployment-6737 6f7f9b00-79b0-4af0-99a6-929dd9da3373 99020 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:48d2cf0be056c5cab4c1cfa4d7dd52708258280119a6bb9fca90f057b2104847 cni.projectcalico.org/podIP:172.17.230.181/32 cni.projectcalico.org/podIPs:172.17.230.181/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.230.181" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bb7f7 0xc0045bb7f8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.230.181\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4bcsw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4bcsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.18,PodIP:172.17.230.181,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://7bb1ae37af996ac879150795cca366236466099fa9b65a0e49e1769b9c78fda5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.230.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.550: INFO: Pod "webserver-deployment-7f5969cbc7-9wsb4" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-9wsb4 webserver-deployment-7f5969cbc7- deployment-6737 2cb54a9d-66e9-4e01-b570-d5a9f51fbd8f 99014 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:765d688a17d789d8ee542cadd794a5713caddc8622f2d463b617ac93824f0fa0 cni.projectcalico.org/podIP:172.17.230.180/32 cni.projectcalico.org/podIPs:172.17.230.180/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.230.180" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bba67 0xc0045bba68}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 
02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.230.180\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sr2mn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sr2mn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOpt
ions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.18,PodIP:172.17.230.180,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://49e5c09ab57556461061e63658dbd46579eda732aa1c29a8e2e44e5ac3e867c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.230.180,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.551: INFO: Pod "webserver-deployment-7f5969cbc7-czhw2" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-czhw2 webserver-deployment-7f5969cbc7- deployment-6737 c759e7df-843f-4efb-a4b7-1ed616cd6441 99203 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bbce7 0xc0045bbce8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wqj92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqj92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChange
Policy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.551: INFO: Pod "webserver-deployment-7f5969cbc7-hmhld" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-hmhld webserver-deployment-7f5969cbc7- deployment-6737 7ba0fac5-e4c7-499a-aa9c-2f2c1e217593 99204 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bbe67 0xc0045bbe68}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hfz6b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hfz6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityCla
ssName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.551: INFO: Pod "webserver-deployment-7f5969cbc7-j29zn" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-j29zn webserver-deployment-7f5969cbc7- deployment-6737 fc1ac968-7715-4421-93c9-557e945fa1da 99195 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0045bbff7 0xc0045bbff8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dj7qc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dj7qc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.551: INFO: Pod "webserver-deployment-7f5969cbc7-mwjm9" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-mwjm9 webserver-deployment-7f5969cbc7- deployment-6737 ae297d5b-cc8b-4527-9629-d18e909112d5 98981 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:94830f7f64ae45efe447269f95ad880fa5bc17739e72717147b3904cea7b9f2f cni.projectcalico.org/podIP:172.17.218.28/32 cni.projectcalico.org/podIPs:172.17.218.28/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.28" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0006d83e7 0xc0006d83e8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.218.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-np57w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-np57w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:172.17.218.28,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://0e18bd50b8f5dffb716eb8357d15e033fa59989e922a2dcea713a3b5e65d3ab5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.218.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.552: INFO: Pod "webserver-deployment-7f5969cbc7-nxnkf" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-nxnkf webserver-deployment-7f5969cbc7- deployment-6737 94d739c4-5430-47f9-91ef-8d54f59b435c 98985 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:7d29999f971e27469d9579f553bffb895faf6665bc1a520583a335ac2219c05b cni.projectcalico.org/podIP:172.17.225.2/32 cni.projectcalico.org/podIPs:172.17.225.2/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.2" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc0006d8bb7 0xc0006d8bb8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-07-27 02:19:28 
+0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.2\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qt8q2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qt8q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,R
ole:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.2,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://f34b42cf174785d2180dc27219bc2c051982b6b16337d37db7b68022e118151b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.552: INFO: Pod "webserver-deployment-7f5969cbc7-ptbft" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-ptbft webserver-deployment-7f5969cbc7- deployment-6737 a4f80880-47cd-45dd-a79b-7d84fb99795d 99038 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:12b06d1b4c0a45ec95e8f467ef1fd8d6b821452188f248a50612260a9bcd23a6 cni.projectcalico.org/podIP:172.17.225.27/32 cni.projectcalico.org/podIPs:172.17.225.27/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.27" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc000dd35e7 0xc000dd35e8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nwc2b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwc2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminatio
n-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.27,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://809eb9a5b07cae5c61319977e06149fffbf4198d85c1c1ff48106be5cb178e96,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.552: INFO: Pod "webserver-deployment-7f5969cbc7-qgkn9" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-qgkn9 webserver-deployment-7f5969cbc7- 
deployment-6737 7d83d7e0-72d8-4299-bef2-5c6c7f90bf79 99017 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:e9ed1ddc263f260b294eba27614dc2ebbdc140bcf2338c52296b7911f1822032 cni.projectcalico.org/podIP:172.17.230.178/32 cni.projectcalico.org/podIPs:172.17.230.178/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.230.178" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c42237 0xc004c42238}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.230.178\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2w7dx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2w7dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.18,PodIP:172.17.230.178,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://063c33f968ee444f0600a25e9e6afc771bd9ed77bd762f9649f3f2f7526094df,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.230.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.552: INFO: Pod "webserver-deployment-7f5969cbc7-qwqd6" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-qwqd6 webserver-deployment-7f5969cbc7- deployment-6737 357bb273-b8d5-4452-9ea3-6e5e4320885e 99206 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c424a7 0xc004c424a8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w8xr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w8xr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.553: INFO: Pod "webserver-deployment-7f5969cbc7-r2b49" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-r2b49 webserver-deployment-7f5969cbc7- deployment-6737 03ef15f8-6c02-440b-909b-933e48061568 99031 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:ba2476c38b296ddbfbaa002381e5ed594d3972bc84dcb1d34ac5797aa191b7fc cni.projectcalico.org/podIP:172.17.218.50/32 cni.projectcalico.org/podIPs:172.17.218.50/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.50" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c42677 0xc004c42678}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.218.50\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k9brb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k9brb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:172.17.218.50,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://e3d4df458ee08e0a5bb8163e753fac28390b178a80fb480cab97491f9464a030,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.218.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.553: INFO: Pod "webserver-deployment-7f5969cbc7-v74wd" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-v74wd webserver-deployment-7f5969cbc7- deployment-6737 9fc88f52-a7d4-4569-bd8f-43f375e0eddb 99028 0 2023-07-27 02:19:27 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:f34cb32af553193b2d3ff0cf72d1b9c136f5064a3647c433a63ac5a384183c99 cni.projectcalico.org/podIP:172.17.218.26/32 cni.projectcalico.org/podIPs:172.17.218.26/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.26" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c42907 0xc004c42908}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:19:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:28 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:19:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.218.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-glhqr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glhqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:
,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:172.17.218.26,StartTime:2023-07-27 02:19:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:19:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://6b94b9b7e9e7db23b2b0518a2d6e386ef5a99cfb5c00d7a80dc8b7c1ddd728db,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.218.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.553: INFO: Pod "webserver-deployment-7f5969cbc7-vskt4" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-vskt4 webserver-deployment-7f5969cbc7- deployment-6737 fd361538-6d9f-49d2-899e-4b23cd7e3ed0 99202 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c42b77 0xc004c42b78}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c5znx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c5znx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChange
Policy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.553: INFO: Pod "webserver-deployment-7f5969cbc7-wdtvg" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-wdtvg webserver-deployment-7f5969cbc7- deployment-6737 bd83e868-fc5c-4e75-a6ca-79840a05929e 99205 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 e327a1a3-e900-4087-a5c8-9c543a43047b 0xc004c42cf7 0xc004c42cf8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e327a1a3-e900-4087-a5c8-9c543a43047b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v4lkd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4lkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.553: INFO: Pod "webserver-deployment-d9f79cb5-7ttqd" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-7ttqd webserver-deployment-d9f79cb5- deployment-6737 27fef42d-c34e-482b-95a8-9e7096f967e5 99128 0 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:9eae1cd3e23a09cfd86ea2742327a2e93249f56f897f0450b1538cc12a4325f8 cni.projectcalico.org/podIP:172.17.230.167/32 cni.projectcalico.org/podIPs:172.17.230.167/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.230.167" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c42ea7 0xc004c42ea8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pp27m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp27m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priori
ty:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.18,PodIP:,StartTime:2023-07-27 02:19:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.554: INFO: Pod "webserver-deployment-d9f79cb5-92zkn" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-92zkn webserver-deployment-d9f79cb5- deployment-6737 39d6ff91-1a28-4335-915a-6fcc45fb527e 99200 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43117 0xc004c43118}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5b24b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b24b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.18,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,
DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.554: INFO: Pod "webserver-deployment-d9f79cb5-b9mtt" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-b9mtt webserver-deployment-d9f79cb5- deployment-6737 9a8521c7-e59a-406c-afec-c6d0fa82801a 99130 0 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:2413f5816681ec00c1e38434977506c2375e00629fc450ca322876a2774531ee cni.projectcalico.org/podIP:172.17.225.29/32 cni.projectcalico.org/podIPs:172.17.225.29/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.29" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c432f7 0xc004c432f8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hn7jq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hn7jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priori
ty:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:,StartTime:2023-07-27 02:19:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.554: INFO: Pod "webserver-deployment-d9f79cb5-bs6xx" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-bs6xx webserver-deployment-d9f79cb5- deployment-6737 511e9844-44e5-43d2-b991-adb76583feb9 99197 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43567 0xc004c43568}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rk76c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rk76c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.555: INFO: Pod "webserver-deployment-d9f79cb5-mb7rs" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-mb7rs webserver-deployment-d9f79cb5- deployment-6737 4967ba40-f57f-4bb3-93f7-fe52da65ef8a 99212 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43937 0xc004c43938}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jhcqw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMo
unt{Name:kube-api-access-jhcqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.555: INFO: Pod "webserver-deployment-d9f79cb5-pv6qh" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-pv6qh webserver-deployment-d9f79cb5- deployment-6737 3a2f3c99-c64d-4a68-9fa5-dcde71ff85e8 99149 0 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:50d5d367cf37ed1ac21640320cb6e81c39f7a4bcc06b5705fa5812ed3258fc30 cni.projectcalico.org/podIP:172.17.225.34/32 cni.projectcalico.org/podIPs:172.17.225.34/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.34" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43ae7 0xc004c43ae8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mpcbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mpcbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Cap
abilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:,StartTime:2023-07-27 02:19:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.555: INFO: Pod "webserver-deployment-d9f79cb5-px66t" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-px66t webserver-deployment-d9f79cb5- deployment-6737 717fbaef-2361-4067-a9e5-028075ad66cb 99209 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] 
[{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43d57 0xc004c43d58}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-458t5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-458t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,R
ole:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.555: INFO: Pod "webserver-deployment-d9f79cb5-rrblc" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-rrblc webserver-deployment-d9f79cb5- deployment-6737 4dc68ef3-e390-4e0c-88bd-8df0ab5de7fe 99139 0 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:f534c745c0ec8de9807d43b16fcf87a62fcbc9fc8ab715bccb0f21524a028b6f cni.projectcalico.org/podIP:172.17.218.40/32 cni.projectcalico.org/podIPs:172.17.218.40/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.218.40" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc004c43f07 0xc004c43f08}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zr2wm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zr2wm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Toleration
s:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:,StartTime:2023-07-27 02:19:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.556: INFO: Pod "webserver-deployment-d9f79cb5-rrg87" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-rrg87 webserver-deployment-d9f79cb5- deployment-6737 6461fe3a-f524-42d9-b818-be5644775903 99164 0 2023-07-27 02:19:31 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[cni.projectcalico.org/containerID:dd2700e013e6b7d29ebe53bcef56b3ba7bcd8a8f241a8d2354a86ed6ab996e0e cni.projectcalico.org/podIP:172.17.218.49/32 cni.projectcalico.org/podIPs:172.17.218.49/32 k8s.v1.cni.cncf.io/network-status:[{ "name": "k8s-pod-network", "ips": [ - "172.30.161.112" + "172.17.218.49" ], "default": true, "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-rollover-deployment-6c6df9974f 5f6592b7-6ce0-4b8f-b17d-cd3f8c628f37 0xc002ae6fc7 0xc002ae6fc8}] [] [{kube-controller-manager Update v1 2023-06-12 21:42:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f6592b7-6ce0-4b8f-b17d-cd3f8c628f37\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 21:42:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 21:42:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 21:42:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.161.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sps72,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sps72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},Service
AccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c55,c10,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-l4fkf,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:42:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:42:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:42:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 21:42:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:172.30.161.112,StartTime:2023-06-12 21:42:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 21:42:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://13acf4de9ee6858926521de1e212bd347332fd1d9c038a7976e6c6a8449f1837,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.161.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc0055bc667 0xc0055bc668}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-07-27 02:19:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-07-27 02:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cjqf9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjqf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Cap
abilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.17,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:19:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.17,PodIP:,StartTime:2023-07-27 02:19:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.556: INFO: Pod "webserver-deployment-d9f79cb5-vbz55" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-vbz55 webserver-deployment-d9f79cb5- deployment-6737 35f2a78a-5787-499c-8f81-1cec463c1407 99198 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] 
[{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc0055bce67 0xc0055bce68}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2h22v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2h22v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,R
ole:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:19:33.556: INFO: Pod "webserver-deployment-d9f79cb5-x8lvl" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-x8lvl webserver-deployment-d9f79cb5- deployment-6737 f2504dc9-01a4-4d8b-8f03-cd6ab6171e3e 99211 0 2023-07-27 02:19:33 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 1dee3e4f-9c0a-40d3-8991-747b68431017 0xc0055bcff7 0xc0055bcff8}] [] [{kube-controller-manager Update v1 2023-07-27 02:19:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dee3e4f-9c0a-40d3-8991-747b68431017\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wplhd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wplhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c54,c14,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-tfs6j,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil
,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 - Jun 12 21:43:01.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:19:33.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 - STEP: Destroying namespace "deployment-3079" for this suite. 06/12/23 21:43:01.625 + STEP: Destroying namespace "deployment-6737" for this suite. 07/27/23 02:19:33.578 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] - should include custom resource definition resources in discovery documents [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:198 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:43:01.654 -Jun 12 21:43:01.654: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename custom-resource-definition 06/12/23 21:43:01.658 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:01.747 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:01.762 -[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:19:33.615 +Jul 27 02:19:33.615: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 02:19:33.616 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:33.664 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:33.673 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should include custom resource definition resources in discovery documents [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:198 -STEP: fetching the /apis discovery document 06/12/23 21:43:01.779 -STEP: finding the apiextensions.k8s.io API group in the /apis discovery document 06/12/23 21:43:01.785 -STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document 
06/12/23 21:43:01.785 -STEP: fetching the /apis/apiextensions.k8s.io discovery document 06/12/23 21:43:01.785 -STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 06/12/23 21:43:01.793 -STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 06/12/23 21:43:01.794 -STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 06/12/23 21:43:01.805 -[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 02:19:33.788 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:19:33.99 +STEP: Deploying the webhook pod 07/27/23 02:19:34.024 +STEP: Wait for the deployment to be ready 07/27/23 02:19:34.053 +Jul 27 02:19:34.069: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 02:19:36.094 +STEP: Verifying the service has paired with the endpoint 07/27/23 02:19:36.128 +Jul 27 02:19:37.128: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 07/27/23 02:19:37.14 +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 07/27/23 02:19:37.202 +STEP: Creating a dummy validating-webhook-configuration object 07/27/23 02:19:37.241 +STEP: Deleting the validating-webhook-configuration, which should be possible to remove 07/27/23 02:19:37.271 +STEP: Creating a dummy mutating-webhook-configuration object 07/27/23 02:19:37.293 +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove 07/27/23 02:19:37.321 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:43:01.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +Jul 27 02:19:37.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "custom-resource-definition-5018" for this suite. 06/12/23 21:43:01.839 +STEP: Destroying namespace "webhook-465" for this suite. 07/27/23 02:19:37.497 +STEP: Destroying namespace "webhook-465-markers" for this suite. 
07/27/23 02:19:37.52 ------------------------------ -• [0.229 seconds] -[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +• [3.928 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/framework.go:23 - should include custom resource definition resources in discovery documents [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:198 + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:43:01.654 - Jun 12 21:43:01.654: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename custom-resource-definition 06/12/23 21:43:01.658 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:01.747 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:01.762 - [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:19:33.615 + Jul 27 02:19:33.615: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 02:19:33.616 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:33.664 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:33.673 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should include custom resource definition resources in discovery documents [Conformance] - test/e2e/apimachinery/custom_resource_definition.go:198 - STEP: fetching the /apis discovery document 06/12/23 21:43:01.779 - STEP: finding the apiextensions.k8s.io API group in the /apis discovery document 06/12/23 21:43:01.785 - STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document 06/12/23 21:43:01.785 - STEP: fetching the /apis/apiextensions.k8s.io discovery document 06/12/23 21:43:01.785 - STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 06/12/23 21:43:01.793 - STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 06/12/23 21:43:01.794 - STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 06/12/23 21:43:01.805 - [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 02:19:33.788 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:19:33.99 + STEP: Deploying the webhook pod 07/27/23 02:19:34.024 + STEP: Wait for the deployment to be ready 07/27/23 02:19:34.053 + Jul 27 02:19:34.069: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 02:19:36.094 + STEP: Verifying the service has paired with the endpoint 07/27/23 02:19:36.128 + Jul 27 02:19:37.128: INFO: Waiting for amount of service:e2e-test-webhook 
endpoints to be 1 + [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 + STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 07/27/23 02:19:37.14 + STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 07/27/23 02:19:37.202 + STEP: Creating a dummy validating-webhook-configuration object 07/27/23 02:19:37.241 + STEP: Deleting the validating-webhook-configuration, which should be possible to remove 07/27/23 02:19:37.271 + STEP: Creating a dummy mutating-webhook-configuration object 07/27/23 02:19:37.293 + STEP: Deleting the mutating-webhook-configuration, which should be possible to remove 07/27/23 02:19:37.321 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:43:01.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + Jul 27 02:19:37.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "custom-resource-definition-5018" for this suite. 06/12/23 21:43:01.839 + STEP: Destroying namespace "webhook-465" for this suite. 07/27/23 02:19:37.497 + STEP: Destroying namespace "webhook-465-markers" for this suite. 
07/27/23 02:19:37.52 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Downward API volume - should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:84 -[BeforeEach] [sig-storage] Downward API volume +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 +[BeforeEach] [sig-node] Downward API set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:43:01.883 -Jun 12 21:43:01.883: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 21:43:01.891 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:01.949 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:01.969 -[BeforeEach] [sig-storage] Downward API volume +STEP: Creating a kubernetes client 07/27/23 02:19:37.544 +Jul 27 02:19:37.544: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:19:37.545 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:37.588 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:37.597 +[BeforeEach] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 -[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:84 -STEP: Creating a pod to test downward API volume plugin 06/12/23 21:43:01.984 -Jun 12 21:43:02.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8" in namespace "downward-api-9907" to be "Succeeded or Failed" -Jun 12 21:43:02.098: INFO: Pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 69.481765ms -Jun 12 21:43:04.123: INFO: Pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094243346s -Jun 12 21:43:06.120: INFO: Pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091176958s -Jun 12 21:43:08.114: INFO: Pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.085052372s -STEP: Saw pod success 06/12/23 21:43:08.114 -Jun 12 21:43:08.115: INFO: Pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8" satisfied condition "Succeeded or Failed" -Jun 12 21:43:08.129: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8 container client-container: -STEP: delete the pod 06/12/23 21:43:08.188 -Jun 12 21:43:08.230: INFO: Waiting for pod downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8 to disappear -Jun 12 21:43:08.241: INFO: Pod downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8 no longer exists -[AfterEach] [sig-storage] Downward API volume +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 +STEP: Creating a pod to test downward api env vars 07/27/23 02:19:37.606 +Jul 27 02:19:37.663: INFO: Waiting up to 5m0s for pod "downward-api-56c83d19-b238-439d-b112-2131453feb60" in namespace "downward-api-9520" to be "Succeeded or Failed" +Jul 27 02:19:37.680: INFO: Pod "downward-api-56c83d19-b238-439d-b112-2131453feb60": Phase="Pending", Reason="", readiness=false. Elapsed: 16.794927ms +Jul 27 02:19:39.691: INFO: Pod "downward-api-56c83d19-b238-439d-b112-2131453feb60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028110815s +Jul 27 02:19:41.693: INFO: Pod "downward-api-56c83d19-b238-439d-b112-2131453feb60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029730061s +Jul 27 02:19:43.691: INFO: Pod "downward-api-56c83d19-b238-439d-b112-2131453feb60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027892755s +STEP: Saw pod success 07/27/23 02:19:43.691 +Jul 27 02:19:43.691: INFO: Pod "downward-api-56c83d19-b238-439d-b112-2131453feb60" satisfied condition "Succeeded or Failed" +Jul 27 02:19:43.699: INFO: Trying to get logs from node 10.245.128.19 pod downward-api-56c83d19-b238-439d-b112-2131453feb60 container dapi-container: +STEP: delete the pod 07/27/23 02:19:43.729 +Jul 27 02:19:43.753: INFO: Waiting for pod downward-api-56c83d19-b238-439d-b112-2131453feb60 to disappear +Jul 27 02:19:43.763: INFO: Pod downward-api-56c83d19-b238-439d-b112-2131453feb60 no longer exists +[AfterEach] [sig-node] Downward API test/e2e/framework/node/init/init.go:32 -Jun 12 21:43:08.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Downward API volume +Jul 27 02:19:43.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-node] Downward API dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-node] Downward API tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-9907" for this suite. 06/12/23 21:43:08.26 +STEP: Destroying namespace "downward-api-9520" for this suite. 
07/27/23 02:19:43.777 ------------------------------ -• [SLOW TEST] [6.399 seconds] -[sig-storage] Downward API volume -test/e2e/common/storage/framework.go:23 - should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:84 +• [SLOW TEST] [6.256 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Downward API volume + [BeforeEach] [sig-node] Downward API set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:43:01.883 - Jun 12 21:43:01.883: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 21:43:01.891 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:01.949 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:01.969 - [BeforeEach] [sig-storage] Downward API volume + STEP: Creating a kubernetes client 07/27/23 02:19:37.544 + Jul 27 02:19:37.544: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:19:37.545 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:37.588 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:37.597 + [BeforeEach] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 - [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:84 - STEP: Creating a pod to test downward API volume plugin 06/12/23 21:43:01.984 - Jun 12 21:43:02.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8" in namespace "downward-api-9907" to be "Succeeded or Failed" - Jun 12 21:43:02.098: INFO: Pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 69.481765ms - Jun 12 21:43:04.123: INFO: Pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094243346s - Jun 12 21:43:06.120: INFO: Pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091176958s - Jun 12 21:43:08.114: INFO: Pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.085052372s - STEP: Saw pod success 06/12/23 21:43:08.114 - Jun 12 21:43:08.115: INFO: Pod "downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8" satisfied condition "Succeeded or Failed" - Jun 12 21:43:08.129: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8 container client-container: - STEP: delete the pod 06/12/23 21:43:08.188 - Jun 12 21:43:08.230: INFO: Waiting for pod downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8 to disappear - Jun 12 21:43:08.241: INFO: Pod downwardapi-volume-7c1c269c-c8f6-4ea5-8b72-ce14f3d4d0e8 no longer exists - [AfterEach] [sig-storage] Downward API volume + [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 + STEP: Creating a pod to test downward api env vars 07/27/23 02:19:37.606 + Jul 27 02:19:37.663: INFO: Waiting up to 5m0s for pod "downward-api-56c83d19-b238-439d-b112-2131453feb60" in namespace "downward-api-9520" to be "Succeeded or Failed" + Jul 27 02:19:37.680: INFO: Pod "downward-api-56c83d19-b238-439d-b112-2131453feb60": Phase="Pending", Reason="", readiness=false. Elapsed: 16.794927ms + Jul 27 02:19:39.691: INFO: Pod "downward-api-56c83d19-b238-439d-b112-2131453feb60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028110815s + Jul 27 02:19:41.693: INFO: Pod "downward-api-56c83d19-b238-439d-b112-2131453feb60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029730061s + Jul 27 02:19:43.691: INFO: Pod "downward-api-56c83d19-b238-439d-b112-2131453feb60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027892755s + STEP: Saw pod success 07/27/23 02:19:43.691 + Jul 27 02:19:43.691: INFO: Pod "downward-api-56c83d19-b238-439d-b112-2131453feb60" satisfied condition "Succeeded or Failed" + Jul 27 02:19:43.699: INFO: Trying to get logs from node 10.245.128.19 pod downward-api-56c83d19-b238-439d-b112-2131453feb60 container dapi-container: + STEP: delete the pod 07/27/23 02:19:43.729 + Jul 27 02:19:43.753: INFO: Waiting for pod downward-api-56c83d19-b238-439d-b112-2131453feb60 to disappear + Jul 27 02:19:43.763: INFO: Pod downward-api-56c83d19-b238-439d-b112-2131453feb60 no longer exists + [AfterEach] [sig-node] Downward API test/e2e/framework/node/init/init.go:32 - Jun 12 21:43:08.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Downward API volume + Jul 27 02:19:43.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-node] Downward API dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-node] Downward API tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-9907" for this suite. 06/12/23 21:43:08.26 + STEP: Destroying namespace "downward-api-9520" for this suite. 
07/27/23 02:19:43.777 << End Captured GinkgoWriter Output ------------------------------ -SSS +SSSSSSSSSSSSSSS ------------------------------ -[sig-node] Security Context When creating a container with runAsUser - should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:347 -[BeforeEach] [sig-node] Security Context +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 +[BeforeEach] [sig-api-machinery] Watchers set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:43:08.282 -Jun 12 21:43:08.282: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename security-context-test 06/12/23 21:43:08.284 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:08.341 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:08.356 -[BeforeEach] [sig-node] Security Context +STEP: Creating a kubernetes client 07/27/23 02:19:43.801 +Jul 27 02:19:43.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename watch 07/27/23 02:19:43.802 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:43.846 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:43.855 +[BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Security Context - test/e2e/common/node/security_context.go:50 -[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:347 -Jun 12 21:43:08.399: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76" in namespace "security-context-test-4751" to be "Succeeded or Failed" -Jun 12 21:43:08.421: INFO: Pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76": Phase="Pending", Reason="", readiness=false. Elapsed: 22.364069ms -Jun 12 21:43:10.435: INFO: Pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036386919s -Jun 12 21:43:12.438: INFO: Pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039102631s -Jun 12 21:43:14.435: INFO: Pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.035704794s -Jun 12 21:43:14.435: INFO: Pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76" satisfied condition "Succeeded or Failed" -[AfterEach] [sig-node] Security Context +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 +STEP: creating a watch on configmaps with label A 07/27/23 02:19:43.864 +STEP: creating a watch on configmaps with label B 07/27/23 02:19:43.869 +STEP: creating a watch on configmaps with label A or B 07/27/23 02:19:43.875 +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification 07/27/23 02:19:43.881 +Jul 27 02:19:43.901: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99802 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 02:19:43.901: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99802 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification 07/27/23 02:19:43.901 +Jul 27 02:19:43.943: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99809 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 02:19:43.944: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99809 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification 07/27/23 02:19:43.944 +Jul 27 02:19:43.976: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99814 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 02:19:43.976: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99814 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification 07/27/23 02:19:43.977 +Jul 27 02:19:43.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99817 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 02:19:43.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99817 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 07/27/23 02:19:43.999 +Jul 27 02:19:44.018: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1356 8aeb46fc-eba9-4572-81fc-8a3374316505 99819 0 2023-07-27 02:19:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 02:19:44.018: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1356 8aeb46fc-eba9-4572-81fc-8a3374316505 99819 0 2023-07-27 02:19:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification 07/27/23 02:19:54.019 +Jul 27 02:19:54.042: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1356 8aeb46fc-eba9-4572-81fc-8a3374316505 99885 0 2023-07-27 02:19:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 02:19:54.042: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1356 8aeb46fc-eba9-4572-81fc-8a3374316505 99885 0 2023-07-27 02:19:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers test/e2e/framework/node/init/init.go:32 -Jun 12 21:43:14.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Security Context +Jul 27 02:20:04.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] 
[sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Security Context +[DeferCleanup (Each)] [sig-api-machinery] Watchers dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Security Context +[DeferCleanup (Each)] [sig-api-machinery] Watchers tear down framework | framework.go:193 -STEP: Destroying namespace "security-context-test-4751" for this suite. 06/12/23 21:43:14.454 +STEP: Destroying namespace "watch-1356" for this suite. 07/27/23 02:20:04.057 ------------------------------ -• [SLOW TEST] [6.243 seconds] -[sig-node] Security Context -test/e2e/common/node/framework.go:23 - When creating a container with runAsUser - test/e2e/common/node/security_context.go:309 - should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:347 +• [SLOW TEST] [20.284 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Security Context + [BeforeEach] [sig-api-machinery] Watchers set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:43:08.282 - Jun 12 21:43:08.282: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename security-context-test 06/12/23 21:43:08.284 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:08.341 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:08.356 - [BeforeEach] [sig-node] Security Context + STEP: Creating a kubernetes client 07/27/23 02:19:43.801 + Jul 27 02:19:43.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename watch 07/27/23 02:19:43.802 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:19:43.846 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:19:43.855 + [BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Security Context - test/e2e/common/node/security_context.go:50 - [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:347 - Jun 12 21:43:08.399: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76" in namespace "security-context-test-4751" to be "Succeeded or Failed" - Jun 12 21:43:08.421: INFO: Pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76": Phase="Pending", Reason="", readiness=false. Elapsed: 22.364069ms - Jun 12 21:43:10.435: INFO: Pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036386919s - Jun 12 21:43:12.438: INFO: Pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039102631s - Jun 12 21:43:14.435: INFO: Pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.035704794s - Jun 12 21:43:14.435: INFO: Pod "busybox-user-65534-9463633c-9935-4f09-9999-6d96ac876e76" satisfied condition "Succeeded or Failed" - [AfterEach] [sig-node] Security Context + [It] should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 + STEP: creating a watch on configmaps with label A 07/27/23 02:19:43.864 + STEP: creating a watch on configmaps with label B 07/27/23 02:19:43.869 + STEP: creating a watch on configmaps with label A or B 07/27/23 02:19:43.875 + STEP: creating a configmap with label A and ensuring the correct watchers observe the notification 07/27/23 02:19:43.881 + Jul 27 02:19:43.901: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99802 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 02:19:43.901: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99802 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying configmap A and ensuring the correct watchers observe the notification 07/27/23 02:19:43.901 + Jul 27 02:19:43.943: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99809 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 02:19:43.944: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99809 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying configmap A again and ensuring the correct watchers observe the notification 07/27/23 02:19:43.944 + Jul 27 02:19:43.976: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99814 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 02:19:43.976: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99814 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: deleting configmap A and ensuring the correct watchers observe the notification 07/27/23 02:19:43.977 + Jul 27 02:19:43.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99817 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 02:19:43.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1356 9fc4e564-5634-4ce3-8483-032a533d34e4 99817 0 2023-07-27 02:19:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 07/27/23 02:19:43.999 + Jul 27 02:19:44.018: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1356 8aeb46fc-eba9-4572-81fc-8a3374316505 99819 0 2023-07-27 02:19:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 02:19:44.018: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1356 8aeb46fc-eba9-4572-81fc-8a3374316505 99819 0 2023-07-27 02:19:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: deleting configmap B and ensuring the correct watchers observe the notification 07/27/23 02:19:54.019 + Jul 27 02:19:54.042: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1356 8aeb46fc-eba9-4572-81fc-8a3374316505 99885 0 2023-07-27 02:19:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 02:19:54.042: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1356 8aeb46fc-eba9-4572-81fc-8a3374316505 99885 0 2023-07-27 02:19:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-07-27 02:19:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers test/e2e/framework/node/init/init.go:32 - Jun 12 21:43:14.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Security Context + Jul 27 02:20:04.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + 
[DeferCleanup (Each)] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Security Context + [DeferCleanup (Each)] [sig-api-machinery] Watchers dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Security Context + [DeferCleanup (Each)] [sig-api-machinery] Watchers tear down framework | framework.go:193 - STEP: Destroying namespace "security-context-test-4751" for this suite. 06/12/23 21:43:14.454 + STEP: Destroying namespace "watch-1356" for this suite. 07/27/23 02:20:04.057 << End Captured GinkgoWriter Output ------------------------------ SSS ------------------------------ -[sig-network] Services - should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] - test/e2e/network/service.go:2213 -[BeforeEach] [sig-network] Services +[sig-node] Container Runtime blackbox test on terminated container + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 +[BeforeEach] [sig-node] Container Runtime set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:43:14.526 -Jun 12 21:43:14.526: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:43:14.531 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:14.587 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:14.599 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 02:20:04.085 +Jul 27 02:20:04.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-runtime 07/27/23 02:20:04.086 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:20:04.125 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:20:04.135 +[BeforeEach] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] - test/e2e/network/service.go:2213 -STEP: creating service in namespace services-395 06/12/23 21:43:14.612 -STEP: creating service affinity-clusterip-transition in namespace services-395 06/12/23 21:43:14.613 -STEP: creating replication controller affinity-clusterip-transition in namespace services-395 06/12/23 21:43:14.67 -I0612 21:43:14.688713 23 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-395, replica count: 3 -I0612 21:43:17.746799 23 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:43:20.747899 23 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -Jun 12 21:43:20.774: INFO: Creating new exec pod -Jun 12 21:43:20.800: INFO: Waiting up to 5m0s for pod "execpod-affinity59lqt" in namespace "services-395" to be "running" -Jun 12 21:43:20.811: INFO: Pod "execpod-affinity59lqt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.254216ms -Jun 12 21:43:22.825: INFO: Pod "execpod-affinity59lqt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024950739s -Jun 12 21:43:24.825: INFO: Pod "execpod-affinity59lqt": Phase="Running", Reason="", readiness=true. Elapsed: 4.025383795s -Jun 12 21:43:24.825: INFO: Pod "execpod-affinity59lqt" satisfied condition "running" -Jun 12 21:43:25.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-395 exec execpod-affinity59lqt -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip-transition 80' -Jun 12 21:43:26.273: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" -Jun 12 21:43:26.273: INFO: stdout: "" -Jun 12 21:43:26.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-395 exec execpod-affinity59lqt -- /bin/sh -x -c nc -v -z -w 2 172.21.220.156 80' -Jun 12 21:43:26.699: INFO: stderr: "+ nc -v -z -w 2 172.21.220.156 80\nConnection to 172.21.220.156 80 port [tcp/http] succeeded!\n" -Jun 12 21:43:26.699: INFO: stdout: "" -Jun 12 21:43:26.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-395 exec execpod-affinity59lqt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.220.156:80/ ; done' -Jun 12 21:43:27.352: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n" -Jun 12 21:43:27.352: INFO: stdout: "\naffinity-clusterip-transition-bttzg\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bttzg\naffinity-clusterip-transition-bttzg\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-gjrm6\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bttzg\naffinity-clusterip-transition-gjrm6\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-gjrm6\naffinity-clusterip-transition-gjrm6\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-gjrm6\naffinity-clusterip-transition-gjrm6" -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bttzg -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bttzg -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bttzg -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:27.352: INFO: 
Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-gjrm6 -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bttzg -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-gjrm6 -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-gjrm6 -Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-gjrm6 -Jun 12 21:43:27.353: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:27.353: INFO: Received response from host: affinity-clusterip-transition-gjrm6 -Jun 12 21:43:27.353: INFO: Received response from host: affinity-clusterip-transition-gjrm6 -Jun 12 21:43:27.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-395 exec execpod-affinity59lqt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.220.156:80/ ; done' -Jun 12 21:43:28.166: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n" -Jun 12 21:43:28.166: INFO: stdout: "\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn" -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: 
affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.167: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.167: INFO: Received response from host: affinity-clusterip-transition-bd5dn -Jun 12 21:43:28.167: INFO: Cleaning up the exec pod -STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-395, will wait for the garbage collector to delete the pods 06/12/23 21:43:28.209 -Jun 12 21:43:28.285: INFO: Deleting ReplicationController affinity-clusterip-transition took: 15.745211ms -Jun 12 21:43:28.385: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.174773ms -[AfterEach] [sig-network] Services +[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 +STEP: create the container 07/27/23 02:20:04.144 +STEP: wait for the container to reach Succeeded 07/27/23 02:20:04.18 +STEP: get the container status 07/27/23 02:20:08.283 +STEP: the container should be terminated 07/27/23 02:20:08.294 +STEP: the termination message should be set 07/27/23 02:20:08.294 +Jul 27 02:20:08.294: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container 07/27/23 02:20:08.294 +[AfterEach] [sig-node] Container Runtime test/e2e/framework/node/init/init.go:32 -Jun 12 21:43:32.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 02:20:08.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-node] Container Runtime dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-node] Container Runtime tear down framework | framework.go:193 -STEP: Destroying namespace "services-395" for this suite. 06/12/23 21:43:32.365 +STEP: Destroying namespace "container-runtime-7578" for this suite. 
07/27/23 02:20:08.357 ------------------------------ -• [SLOW TEST] [17.863 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] - test/e2e/network/service.go:2213 +• [4.298 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-node] Container Runtime set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:43:14.526 - Jun 12 21:43:14.526: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:43:14.531 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:14.587 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:14.599 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 02:20:04.085 + Jul 27 02:20:04.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-runtime 07/27/23 02:20:04.086 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:20:04.125 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:20:04.135 + [BeforeEach] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] - test/e2e/network/service.go:2213 - STEP: creating service in namespace services-395 06/12/23 21:43:14.612 - STEP: creating service affinity-clusterip-transition in namespace services-395 06/12/23 21:43:14.613 - STEP: creating replication controller affinity-clusterip-transition in namespace services-395 06/12/23 21:43:14.67 - I0612 21:43:14.688713 23 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-395, replica count: 3 - I0612 21:43:17.746799 23 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:43:20.747899 23 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - Jun 12 21:43:20.774: INFO: Creating new exec pod - Jun 12 21:43:20.800: INFO: Waiting up to 5m0s for pod "execpod-affinity59lqt" in namespace "services-395" to be "running" - Jun 12 21:43:20.811: INFO: Pod "execpod-affinity59lqt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.254216ms - Jun 12 21:43:22.825: INFO: Pod "execpod-affinity59lqt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024950739s - Jun 12 21:43:24.825: INFO: Pod "execpod-affinity59lqt": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.025383795s - Jun 12 21:43:24.825: INFO: Pod "execpod-affinity59lqt" satisfied condition "running" - Jun 12 21:43:25.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-395 exec execpod-affinity59lqt -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip-transition 80' - Jun 12 21:43:26.273: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" - Jun 12 21:43:26.273: INFO: stdout: "" - Jun 12 21:43:26.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-395 exec execpod-affinity59lqt -- /bin/sh -x -c nc -v -z -w 2 172.21.220.156 80' - Jun 12 21:43:26.699: INFO: stderr: "+ nc -v -z -w 2 172.21.220.156 80\nConnection to 172.21.220.156 80 port [tcp/http] succeeded!\n" - Jun 12 21:43:26.699: INFO: stdout: "" - Jun 12 21:43:26.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-395 exec execpod-affinity59lqt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.220.156:80/ ; done' - Jun 12 21:43:27.352: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n" - Jun 12 21:43:27.352: INFO: stdout: "\naffinity-clusterip-transition-bttzg\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bttzg\naffinity-clusterip-transition-bttzg\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-gjrm6\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bttzg\naffinity-clusterip-transition-gjrm6\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-gjrm6\naffinity-clusterip-transition-gjrm6\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-gjrm6\naffinity-clusterip-transition-gjrm6" - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bttzg - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bttzg - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bttzg - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:27.352: INFO: Received 
response from host: affinity-clusterip-transition-gjrm6 - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bttzg - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-gjrm6 - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-gjrm6 - Jun 12 21:43:27.352: INFO: Received response from host: affinity-clusterip-transition-gjrm6 - Jun 12 21:43:27.353: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:27.353: INFO: Received response from host: affinity-clusterip-transition-gjrm6 - Jun 12 21:43:27.353: INFO: Received response from host: affinity-clusterip-transition-gjrm6 - Jun 12 21:43:27.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-395 exec execpod-affinity59lqt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.220.156:80/ ; done' - Jun 12 21:43:28.166: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.220.156:80/\n" - Jun 12 21:43:28.166: INFO: stdout: "\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn\naffinity-clusterip-transition-bd5dn" - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response 
from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.166: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.167: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.167: INFO: Received response from host: affinity-clusterip-transition-bd5dn - Jun 12 21:43:28.167: INFO: Cleaning up the exec pod - STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-395, will wait for the garbage collector to delete the pods 06/12/23 21:43:28.209 - Jun 12 21:43:28.285: INFO: Deleting ReplicationController affinity-clusterip-transition took: 15.745211ms - Jun 12 21:43:28.385: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.174773ms - [AfterEach] [sig-network] Services + [It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 + STEP: create the container 07/27/23 02:20:04.144 + STEP: wait for the container to reach Succeeded 07/27/23 02:20:04.18 + STEP: get the container status 07/27/23 02:20:08.283 + STEP: the container should be terminated 07/27/23 02:20:08.294 + STEP: the termination message should be set 07/27/23 02:20:08.294 + Jul 27 02:20:08.294: INFO: Expected: &{} to match Container's Termination Message: -- + STEP: delete the container 07/27/23 02:20:08.294 + [AfterEach] [sig-node] Container Runtime test/e2e/framework/node/init/init.go:32 - Jun 12 21:43:32.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 02:20:08.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-node] Container Runtime dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-node] Container Runtime tear down framework | framework.go:193 - STEP: Destroying namespace "services-395" for this suite. 06/12/23 21:43:32.365 + STEP: Destroying namespace "container-runtime-7578" for this suite. 
07/27/23 02:20:08.357 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSSSSS ------------------------------ -[sig-node] ConfigMap - should run through a ConfigMap lifecycle [Conformance] - test/e2e/common/node/configmap.go:169 -[BeforeEach] [sig-node] ConfigMap +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:43:32.39 -Jun 12 21:43:32.391: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 21:43:32.398 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:32.471 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:32.489 -[BeforeEach] [sig-node] ConfigMap +STEP: Creating a kubernetes client 07/27/23 02:20:08.384 +Jul 27 02:20:08.384: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sched-pred 07/27/23 02:20:08.385 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:20:08.455 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:20:08.465 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] should run through a ConfigMap lifecycle [Conformance] - test/e2e/common/node/configmap.go:169 -STEP: creating a ConfigMap 06/12/23 21:43:32.52 -STEP: fetching the ConfigMap 06/12/23 21:43:32.584 -STEP: patching the ConfigMap 06/12/23 21:43:32.601 -STEP: listing all ConfigMaps in all namespaces with a label selector 06/12/23 21:43:32.651 -STEP: deleting the ConfigMap by collection with a label selector 06/12/23 21:43:32.818 -STEP: listing all ConfigMaps in test namespace 06/12/23 21:43:32.85 -[AfterEach] [sig-node] ConfigMap +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Jul 27 02:20:08.475: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jul 27 02:20:08.502: INFO: Waiting for terminating namespaces to be deleted... 
+Jul 27 02:20:08.530: INFO: +Logging pods the apiserver thinks is on node 10.245.128.17 before test +Jul 27 02:20:08.609: INFO: calico-node-6gb7d from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container calico-node ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: ibm-keepalived-watcher-krnnt from kube-system started at 2023-07-26 23:12:13 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: ibm-master-proxy-static-10.245.128.17 from kube-system started at 2023-07-26 23:12:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container pause ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: ibm-vpc-block-csi-controller-0 from kube-system started at 2023-07-26 23:25:41 +0000 UTC (7 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container csi-attacher ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container csi-provisioner ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container csi-resizer ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container csi-snapshotter ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container iks-vpc-block-driver ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: ibm-vpc-block-csi-node-pb2sj from kube-system started at 2023-07-26 23:12:13 +0000 UTC (4 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: vpn-7d8b749c64-87d9s from kube-system started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container vpn ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: tuned-wnh5v from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container tuned ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: csi-snapshot-controller-5b77984679-frszr from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container snapshot-controller ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: csi-snapshot-webhook-78b8c8d77c-2pk6s from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container webhook ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: console-7fd48bd95f-wksvb from openshift-console started at 2023-07-26 23:27:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container console ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: downloads-6874b45df6-w7xkq from openshift-console started at 2023-07-26 23:22:05 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container download-server ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: 
dns-default-5mw2g from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container dns ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: node-resolver-2kt92 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: image-registry-69fbbd6d88-6xgnp from openshift-image-registry started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container registry ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: node-ca-pmxp9 from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container node-ca ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: ingress-canary-wh5qj from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: router-default-865b575f54-qjwfv from openshift-ingress started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container router ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: openshift-kube-proxy-r7t77 from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: migrator-77d7ddf546-9g7xm from openshift-kube-storage-version-migrator started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container migrator ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: certified-operators-qlqcc from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: community-operators-dtgmg from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: redhat-marketplace-vnvdb from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: redhat-operators-9qw52 from openshift-marketplace started at 2023-07-27 01:30:34 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-07-26 23:27:44 +0000 UTC (6 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container alertmanager ready: true, restart count 1 +Jul 27 02:20:08.609: INFO: Container alertmanager-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container prom-label-proxy 
ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: kube-state-metrics-575bd9d6b6-2wk6g from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-state-metrics ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: node-exporter-2tscc from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: openshift-state-metrics-99754b784-vdbrs from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container openshift-state-metrics ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: prometheus-adapter-657855c676-qlc95 from openshift-monitoring started at 2023-07-26 23:26:23 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container prometheus-adapter ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-07-26 23:27:58 +0000 UTC (6 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container prometheus ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container prometheus-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container thanos-sidecar ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: prometheus-operator-765bbdfd45-twq98 from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container prometheus-operator ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-hct4l from openshift-monitoring started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: telemeter-client-c964ff8c9-xszvz from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container reload ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container telemeter-client ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: thanos-querier-7f9c896d7f-xqld6 from openshift-monitoring started at 2023-07-26 23:26:32 +0000 UTC (6 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 +Jul 27 
02:20:08.609: INFO: Container oauth-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container thanos-query ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: multus-5x56j from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: multus-additional-cni-plugins-p7gf5 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: multus-admission-controller-8ccd764f4-j68g7 from openshift-multus started at 2023-07-26 23:25:38 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container multus-admission-controller ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: network-metrics-daemon-djvdx from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: network-check-target-2j7hq from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: collect-profiles-28173720-hn9xm from openshift-operator-lifecycle-manager started at 2023-07-27 02:00:00 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 02:20:08.609: INFO: collect-profiles-28173735-ln8gp from openshift-operator-lifecycle-manager started at 2023-07-27 02:15:00 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 02:20:08.609: INFO: packageserver-b9964c68-p2fd4 from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container packageserver ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: service-ca-665db46585-9cprv from openshift-service-ca started at 2023-07-26 23:21:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container service-ca-controller ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: sonobuoy-e2e-job-17fd703895604ed7 from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container e2e ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: Container systemd-logs ready: true, restart count 0 +Jul 27 02:20:08.609: INFO: tigera-operator-5b48cf996b-5zb5v from tigera-operator started at 2023-07-26 23:12:21 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.609: INFO: Container tigera-operator ready: true, restart count 6 +Jul 27 
02:20:08.609: INFO: +Logging pods the apiserver thinks is on node 10.245.128.18 before test +Jul 27 02:20:08.667: INFO: calico-kube-controllers-5575667dcd-ps6n9 from calico-system started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container calico-kube-controllers ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: calico-node-2vsm9 from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container calico-node ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: calico-typha-5549cc5cdc-nsmq8 from calico-system started at 2023-07-26 23:19:56 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container calico-typha ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: managed-storage-validation-webhooks-6dfcff48fb-4xxsq from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: managed-storage-validation-webhooks-6dfcff48fb-k6pcc from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: managed-storage-validation-webhooks-6dfcff48fb-swht2 from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container managed-storage-validation-webhooks ready: true, restart count 1 +Jul 27 02:20:08.667: INFO: ibm-keepalived-watcher-wjqkn from kube-system started at 2023-07-26 23:12:23 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: ibm-master-proxy-static-10.245.128.18 from kube-system started at 2023-07-26 23:12:20 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: Container pause ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: ibm-storage-metrics-agent-9fd89b544-292dm from kube-system started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container ibm-storage-metrics-agent ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: ibm-vpc-block-csi-node-lp4cr from kube-system started at 2023-07-26 23:12:23 +0000 UTC (4 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: cluster-node-tuning-operator-5b85c5d47b-9cbp5 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: tuned-zxrv4 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container tuned ready: true, 
restart count 0 +Jul 27 02:20:08.667: INFO: cluster-samples-operator-588cc6f8cc-fh5hj from openshift-cluster-samples-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container cluster-samples-operator ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: cluster-storage-operator-586d5b4d95-tq97j from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container cluster-storage-operator ready: true, restart count 1 +Jul 27 02:20:08.667: INFO: csi-snapshot-controller-5b77984679-wxrv8 from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container snapshot-controller ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: csi-snapshot-controller-operator-7c998b6874-9flch from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: csi-snapshot-webhook-78b8c8d77c-jqbww from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container webhook ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: console-operator-8486d48d6-4xzr7 from openshift-console-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container console-operator ready: true, restart count 1 +Jul 27 02:20:08.667: INFO: Container conversion-webhook-server ready: true, restart count 2 +Jul 27 02:20:08.667: INFO: console-7fd48bd95f-pzr2s from openshift-console started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container console ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: downloads-6874b45df6-nm9q6 from openshift-console started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container download-server ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: dns-operator-7c549b76fd-t56tt from openshift-dns-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container dns-operator ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: dns-default-r982z from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container dns ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: node-resolver-txjwq from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 02:20:08.667: INFO: cluster-image-registry-operator-96d4d84cf-65k8l from openshift-image-registry started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.667: INFO: Container cluster-image-registry-operator ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: node-ca-ntzct from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 
02:20:08.668: INFO: Container node-ca ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: ingress-canary-jphk8 from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: ingress-operator-64bc7f7964-9sbtr from openshift-ingress-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container ingress-operator ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: router-default-865b575f54-b946s from openshift-ingress started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container router ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: insights-operator-5db47f7654-r8xdq from openshift-insights started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container insights-operator ready: true, restart count 1 +Jul 27 02:20:08.668: INFO: openshift-kube-proxy-6hxmn from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: kube-storage-version-migrator-operator-f4b8bf677-c24bz from openshift-kube-storage-version-migrator-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 +Jul 27 02:20:08.668: INFO: marketplace-operator-5ddbd9fdbc-lrhrq from openshift-marketplace started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container marketplace-operator ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-07-27 01:50:10 +0000 UTC (6 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container alertmanager ready: true, restart count 1 +Jul 27 02:20:08.668: INFO: Container alertmanager-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: cluster-monitoring-operator-7448698f65-65wn9 from openshift-monitoring started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container cluster-monitoring-operator ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: node-exporter-d46sh from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: prometheus-adapter-657855c676-hwbr7 from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container prometheus-adapter ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: prometheus-k8s-0 from 
openshift-monitoring started at 2023-07-27 01:50:11 +0000 UTC (6 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container prometheus ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container prometheus-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container thanos-sidecar ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-jvbxn from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: thanos-querier-7f9c896d7f-fk8mk from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (6 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container oauth-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container thanos-query ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: multus-additional-cni-plugins-njhzm from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: multus-admission-controller-8ccd764f4-7kmkg from openshift-multus started at 2023-07-26 23:25:53 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container multus-admission-controller ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: multus-zhftn from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: network-metrics-daemon-cglg2 from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: network-check-source-6777f6456-pt5nn from openshift-network-diagnostics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container check-endpoints ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: network-check-target-85dgs from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: network-operator-6dddb4f685-gc764 from openshift-network-operator started at 2023-07-26 23:17:11 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container network-operator ready: true, restart count 1 +Jul 27 02:20:08.668: INFO: catalog-operator-69ccd5899d-lrpkv from 
openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container catalog-operator ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: collect-profiles-28173705-ctzz8 from openshift-operator-lifecycle-manager started at 2023-07-27 01:45:00 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 02:20:08.668: INFO: olm-operator-8448b5677d-bf2sl from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container olm-operator ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: package-server-manager-579d664b8c-klrwt from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container package-server-manager ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: packageserver-b9964c68-6gdlp from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container packageserver ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: metrics-6ff747d58d-llt7w from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container metrics ready: true, restart count 2 +Jul 27 02:20:08.668: INFO: push-gateway-6448c6788-hrxtl from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container push-gateway ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: service-ca-operator-5db987957b-pftl9 from openshift-service-ca-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container service-ca-operator ready: true, restart count 1 +Jul 27 02:20:08.668: INFO: sonobuoy from sonobuoy started at 2023-07-27 01:26:57 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-7p2cx from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.668: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: Container systemd-logs ready: true, restart count 0 +Jul 27 02:20:08.668: INFO: +Logging pods the apiserver thinks is on node 10.245.128.19 before test +Jul 27 02:20:08.712: INFO: calico-node-tnbmn from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container calico-node ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: calico-typha-5549cc5cdc-25l9k from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container calico-typha ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: ibm-keepalived-watcher-228gb from kube-system started at 2023-07-26 23:12:15 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: ibm-master-proxy-static-10.245.128.19 from kube-system started at 2023-07-26 23:12:13 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: 
Container pause ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: ibm-vpc-block-csi-node-m8dqf from kube-system started at 2023-07-26 23:12:15 +0000 UTC (4 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: tuned-8xqng from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container tuned ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: dns-default-9k25b from openshift-dns started at 2023-07-27 01:50:33 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container dns ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: node-resolver-s2q44 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: node-ca-kz4vp from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container node-ca ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: ingress-canary-nf2dw from openshift-ingress-canary started at 2023-07-27 01:50:33 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: openshift-kube-proxy-4qg5c from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: node-exporter-vz8m9 from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: multus-287s2 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: multus-additional-cni-plugins-xns7c from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: network-metrics-daemon-xpw2q from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: network-check-target-hf22d from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-p74pn from sonobuoy started at 
2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:20:08.712: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:20:08.712: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 +STEP: Trying to launch a pod without a label to get a node which can launch it. 07/27/23 02:20:08.712 +Jul 27 02:20:08.737: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-3836" to be "running" +Jul 27 02:20:08.745: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 7.763789ms +Jul 27 02:20:10.754: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.016579405s +Jul 27 02:20:10.754: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 07/27/23 02:20:10.762 +STEP: Trying to apply a random label on the found node. 07/27/23 02:20:10.784 +STEP: verifying the node has the label kubernetes.io/e2e-68102f05-d8b4-436c-b3d3-30d31157c871 42 07/27/23 02:20:10.811 +STEP: Trying to relaunch the pod, now with labels. 07/27/23 02:20:10.819 +Jul 27 02:20:10.837: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-3836" to be "not pending" +Jul 27 02:20:10.848: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 11.102119ms +Jul 27 02:20:12.859: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02237895s +Jul 27 02:20:14.858: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 4.021393063s +Jul 27 02:20:14.858: INFO: Pod "with-labels" satisfied condition "not pending" +STEP: removing the label kubernetes.io/e2e-68102f05-d8b4-436c-b3d3-30d31157c871 off the node 10.245.128.19 07/27/23 02:20:14.867 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-68102f05-d8b4-436c-b3d3-30d31157c871 07/27/23 02:20:14.915 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 21:43:32.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] ConfigMap +Jul 27 02:20:14.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] ConfigMap +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] ConfigMap +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-8935" for this suite. 06/12/23 21:43:32.881 +STEP: Destroying namespace "sched-pred-3836" for this suite. 
07/27/23 02:20:14.935 ------------------------------ -• [0.515 seconds] -[sig-node] ConfigMap -test/e2e/common/node/framework.go:23 - should run through a ConfigMap lifecycle [Conformance] - test/e2e/common/node/configmap.go:169 +• [SLOW TEST] [6.573 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] ConfigMap + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:43:32.39 - Jun 12 21:43:32.391: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 21:43:32.398 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:32.471 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:32.489 - [BeforeEach] [sig-node] ConfigMap + STEP: Creating a kubernetes client 07/27/23 02:20:08.384 + Jul 27 02:20:08.384: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sched-pred 07/27/23 02:20:08.385 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:20:08.455 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:20:08.465 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] should run through a ConfigMap lifecycle [Conformance] - test/e2e/common/node/configmap.go:169 - STEP: creating a ConfigMap 06/12/23 21:43:32.52 - STEP: fetching the ConfigMap 06/12/23 21:43:32.584 - STEP: patching the ConfigMap 06/12/23 21:43:32.601 - STEP: listing all ConfigMaps in all namespaces with a label selector 06/12/23 21:43:32.651 - STEP: deleting the ConfigMap by collection with a label selector 06/12/23 21:43:32.818 - STEP: listing all ConfigMaps in test namespace 06/12/23 21:43:32.85 - [AfterEach] [sig-node] ConfigMap + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Jul 27 02:20:08.475: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Jul 27 02:20:08.502: INFO: Waiting for terminating namespaces to be deleted... 
+ Jul 27 02:20:08.530: INFO: + Logging pods the apiserver thinks is on node 10.245.128.17 before test + Jul 27 02:20:08.609: INFO: calico-node-6gb7d from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container calico-node ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: ibm-keepalived-watcher-krnnt from kube-system started at 2023-07-26 23:12:13 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container keepalived-watcher ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: ibm-master-proxy-static-10.245.128.17 from kube-system started at 2023-07-26 23:12:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container pause ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: ibm-vpc-block-csi-controller-0 from kube-system started at 2023-07-26 23:25:41 +0000 UTC (7 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container csi-attacher ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container csi-provisioner ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container csi-resizer ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container csi-snapshotter ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container iks-vpc-block-driver ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: ibm-vpc-block-csi-node-pb2sj from kube-system started at 2023-07-26 23:12:13 +0000 UTC (4 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: vpn-7d8b749c64-87d9s from kube-system started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container vpn ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: tuned-wnh5v from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container tuned ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: csi-snapshot-controller-5b77984679-frszr from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container snapshot-controller ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: csi-snapshot-webhook-78b8c8d77c-2pk6s from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container webhook ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: console-7fd48bd95f-wksvb from openshift-console started at 2023-07-26 23:27:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container console ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: downloads-6874b45df6-w7xkq from openshift-console started at 2023-07-26 23:22:05 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container download-server ready: true, restart 
count 0 + Jul 27 02:20:08.609: INFO: dns-default-5mw2g from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container dns ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: node-resolver-2kt92 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: image-registry-69fbbd6d88-6xgnp from openshift-image-registry started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container registry ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: node-ca-pmxp9 from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container node-ca ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: ingress-canary-wh5qj from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: router-default-865b575f54-qjwfv from openshift-ingress started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container router ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: openshift-kube-proxy-r7t77 from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: migrator-77d7ddf546-9g7xm from openshift-kube-storage-version-migrator started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container migrator ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: certified-operators-qlqcc from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: community-operators-dtgmg from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: redhat-marketplace-vnvdb from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: redhat-operators-9qw52 from openshift-marketplace started at 2023-07-27 01:30:34 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-07-26 23:27:44 +0000 UTC (6 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container alertmanager ready: true, restart count 1 + Jul 27 02:20:08.609: INFO: Container alertmanager-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-metric ready: true, 
restart count 0 + Jul 27 02:20:08.609: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: kube-state-metrics-575bd9d6b6-2wk6g from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container kube-state-metrics ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: node-exporter-2tscc from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: openshift-state-metrics-99754b784-vdbrs from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container openshift-state-metrics ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: prometheus-adapter-657855c676-qlc95 from openshift-monitoring started at 2023-07-26 23:26:23 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container prometheus-adapter ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-07-26 23:27:58 +0000 UTC (6 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container prometheus ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container prometheus-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container thanos-sidecar ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: prometheus-operator-765bbdfd45-twq98 from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container prometheus-operator ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-hct4l from openshift-monitoring started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: telemeter-client-c964ff8c9-xszvz from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container reload ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container telemeter-client ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: thanos-querier-7f9c896d7f-xqld6 from openshift-monitoring started at 2023-07-26 23:26:32 +0000 UTC (6 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 + 
Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container oauth-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container thanos-query ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: multus-5x56j from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: multus-additional-cni-plugins-p7gf5 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: multus-admission-controller-8ccd764f4-j68g7 from openshift-multus started at 2023-07-26 23:25:38 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container multus-admission-controller ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: network-metrics-daemon-djvdx from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: network-check-target-2j7hq from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: collect-profiles-28173720-hn9xm from openshift-operator-lifecycle-manager started at 2023-07-27 02:00:00 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 02:20:08.609: INFO: collect-profiles-28173735-ln8gp from openshift-operator-lifecycle-manager started at 2023-07-27 02:15:00 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 02:20:08.609: INFO: packageserver-b9964c68-p2fd4 from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container packageserver ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: service-ca-665db46585-9cprv from openshift-service-ca started at 2023-07-26 23:21:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container service-ca-controller ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: sonobuoy-e2e-job-17fd703895604ed7 from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container e2e ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.609: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: Container systemd-logs ready: true, restart count 0 + Jul 27 02:20:08.609: INFO: tigera-operator-5b48cf996b-5zb5v from tigera-operator started at 2023-07-26 23:12:21 +0000 UTC (1 
container statuses recorded) + Jul 27 02:20:08.609: INFO: Container tigera-operator ready: true, restart count 6 + Jul 27 02:20:08.609: INFO: + Logging pods the apiserver thinks is on node 10.245.128.18 before test + Jul 27 02:20:08.667: INFO: calico-kube-controllers-5575667dcd-ps6n9 from calico-system started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container calico-kube-controllers ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: calico-node-2vsm9 from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container calico-node ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: calico-typha-5549cc5cdc-nsmq8 from calico-system started at 2023-07-26 23:19:56 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container calico-typha ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: managed-storage-validation-webhooks-6dfcff48fb-4xxsq from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: managed-storage-validation-webhooks-6dfcff48fb-k6pcc from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: managed-storage-validation-webhooks-6dfcff48fb-swht2 from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container managed-storage-validation-webhooks ready: true, restart count 1 + Jul 27 02:20:08.667: INFO: ibm-keepalived-watcher-wjqkn from kube-system started at 2023-07-26 23:12:23 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container keepalived-watcher ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: ibm-master-proxy-static-10.245.128.18 from kube-system started at 2023-07-26 23:12:20 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: Container pause ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: ibm-storage-metrics-agent-9fd89b544-292dm from kube-system started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container ibm-storage-metrics-agent ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: ibm-vpc-block-csi-node-lp4cr from kube-system started at 2023-07-26 23:12:23 +0000 UTC (4 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: cluster-node-tuning-operator-5b85c5d47b-9cbp5 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: tuned-zxrv4 from 
openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container tuned ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: cluster-samples-operator-588cc6f8cc-fh5hj from openshift-cluster-samples-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container cluster-samples-operator ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: cluster-storage-operator-586d5b4d95-tq97j from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container cluster-storage-operator ready: true, restart count 1 + Jul 27 02:20:08.667: INFO: csi-snapshot-controller-5b77984679-wxrv8 from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container snapshot-controller ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: csi-snapshot-controller-operator-7c998b6874-9flch from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: csi-snapshot-webhook-78b8c8d77c-jqbww from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container webhook ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: console-operator-8486d48d6-4xzr7 from openshift-console-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container console-operator ready: true, restart count 1 + Jul 27 02:20:08.667: INFO: Container conversion-webhook-server ready: true, restart count 2 + Jul 27 02:20:08.667: INFO: console-7fd48bd95f-pzr2s from openshift-console started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container console ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: downloads-6874b45df6-nm9q6 from openshift-console started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container download-server ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: dns-operator-7c549b76fd-t56tt from openshift-dns-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container dns-operator ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: dns-default-r982z from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container dns ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: node-resolver-txjwq from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 02:20:08.667: INFO: cluster-image-registry-operator-96d4d84cf-65k8l from openshift-image-registry started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.667: INFO: Container 
cluster-image-registry-operator ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: node-ca-ntzct from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container node-ca ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: ingress-canary-jphk8 from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: ingress-operator-64bc7f7964-9sbtr from openshift-ingress-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container ingress-operator ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: router-default-865b575f54-b946s from openshift-ingress started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container router ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: insights-operator-5db47f7654-r8xdq from openshift-insights started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container insights-operator ready: true, restart count 1 + Jul 27 02:20:08.668: INFO: openshift-kube-proxy-6hxmn from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: kube-storage-version-migrator-operator-f4b8bf677-c24bz from openshift-kube-storage-version-migrator-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 + Jul 27 02:20:08.668: INFO: marketplace-operator-5ddbd9fdbc-lrhrq from openshift-marketplace started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container marketplace-operator ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-07-27 01:50:10 +0000 UTC (6 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container alertmanager ready: true, restart count 1 + Jul 27 02:20:08.668: INFO: Container alertmanager-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: cluster-monitoring-operator-7448698f65-65wn9 from openshift-monitoring started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container cluster-monitoring-operator ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: node-exporter-d46sh from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: prometheus-adapter-657855c676-hwbr7 from 
openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container prometheus-adapter ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: prometheus-k8s-0 from openshift-monitoring started at 2023-07-27 01:50:11 +0000 UTC (6 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container prometheus ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container prometheus-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container thanos-sidecar ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-jvbxn from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: thanos-querier-7f9c896d7f-fk8mk from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (6 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container oauth-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container thanos-query ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: multus-additional-cni-plugins-njhzm from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: multus-admission-controller-8ccd764f4-7kmkg from openshift-multus started at 2023-07-26 23:25:53 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container multus-admission-controller ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: multus-zhftn from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: network-metrics-daemon-cglg2 from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: network-check-source-6777f6456-pt5nn from openshift-network-diagnostics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container check-endpoints ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: network-check-target-85dgs from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: network-operator-6dddb4f685-gc764 from 
openshift-network-operator started at 2023-07-26 23:17:11 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container network-operator ready: true, restart count 1 + Jul 27 02:20:08.668: INFO: catalog-operator-69ccd5899d-lrpkv from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container catalog-operator ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: collect-profiles-28173705-ctzz8 from openshift-operator-lifecycle-manager started at 2023-07-27 01:45:00 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 02:20:08.668: INFO: olm-operator-8448b5677d-bf2sl from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container olm-operator ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: package-server-manager-579d664b8c-klrwt from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container package-server-manager ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: packageserver-b9964c68-6gdlp from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container packageserver ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: metrics-6ff747d58d-llt7w from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container metrics ready: true, restart count 2 + Jul 27 02:20:08.668: INFO: push-gateway-6448c6788-hrxtl from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container push-gateway ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: service-ca-operator-5db987957b-pftl9 from openshift-service-ca-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container service-ca-operator ready: true, restart count 1 + Jul 27 02:20:08.668: INFO: sonobuoy from sonobuoy started at 2023-07-27 01:26:57 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container kube-sonobuoy ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-7p2cx from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.668: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: Container systemd-logs ready: true, restart count 0 + Jul 27 02:20:08.668: INFO: + Logging pods the apiserver thinks is on node 10.245.128.19 before test + Jul 27 02:20:08.712: INFO: calico-node-tnbmn from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container calico-node ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: calico-typha-5549cc5cdc-25l9k from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container calico-typha ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: ibm-keepalived-watcher-228gb from kube-system started at 2023-07-26 23:12:15 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container keepalived-watcher ready: true, restart 
count 0 + Jul 27 02:20:08.712: INFO: ibm-master-proxy-static-10.245.128.19 from kube-system started at 2023-07-26 23:12:13 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: Container pause ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: ibm-vpc-block-csi-node-m8dqf from kube-system started at 2023-07-26 23:12:15 +0000 UTC (4 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: tuned-8xqng from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container tuned ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: dns-default-9k25b from openshift-dns started at 2023-07-27 01:50:33 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container dns ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: node-resolver-s2q44 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: node-ca-kz4vp from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container node-ca ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: ingress-canary-nf2dw from openshift-ingress-canary started at 2023-07-27 01:50:33 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: openshift-kube-proxy-4qg5c from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: node-exporter-vz8m9 from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: multus-287s2 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: multus-additional-cni-plugins-xns7c from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: network-metrics-daemon-xpw2q from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: network-check-target-hf22d 
from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-p74pn from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:20:08.712: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:20:08.712: INFO: Container systemd-logs ready: true, restart count 0 + [It] validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 + STEP: Trying to launch a pod without a label to get a node which can launch it. 07/27/23 02:20:08.712 + Jul 27 02:20:08.737: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-3836" to be "running" + Jul 27 02:20:08.745: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 7.763789ms + Jul 27 02:20:10.754: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.016579405s + Jul 27 02:20:10.754: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 07/27/23 02:20:10.762 + STEP: Trying to apply a random label on the found node. 07/27/23 02:20:10.784 + STEP: verifying the node has the label kubernetes.io/e2e-68102f05-d8b4-436c-b3d3-30d31157c871 42 07/27/23 02:20:10.811 + STEP: Trying to relaunch the pod, now with labels. 07/27/23 02:20:10.819 + Jul 27 02:20:10.837: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-3836" to be "not pending" + Jul 27 02:20:10.848: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 11.102119ms + Jul 27 02:20:12.859: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02237895s + Jul 27 02:20:14.858: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 4.021393063s + Jul 27 02:20:14.858: INFO: Pod "with-labels" satisfied condition "not pending" + STEP: removing the label kubernetes.io/e2e-68102f05-d8b4-436c-b3d3-30d31157c871 off the node 10.245.128.19 07/27/23 02:20:14.867 + STEP: verifying the node doesn't have the label kubernetes.io/e2e-68102f05-d8b4-436c-b3d3-30d31157c871 07/27/23 02:20:14.915 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 21:43:32.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] ConfigMap + Jul 27 02:20:14.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] ConfigMap + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] ConfigMap + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-8935" for this suite. 06/12/23 21:43:32.881 + STEP: Destroying namespace "sched-pred-3836" for this suite. 
07/27/23 02:20:14.935 << End Captured GinkgoWriter Output ------------------------------ -SSSS ------------------------------- -[sig-api-machinery] Garbage collector - should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] - test/e2e/apimachinery/garbage_collector.go:550 -[BeforeEach] [sig-api-machinery] Garbage collector +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 +[BeforeEach] [sig-network] DNS set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:43:32.915 -Jun 12 21:43:32.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename gc 06/12/23 21:43:32.917 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:32.98 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:33.001 -[BeforeEach] [sig-api-machinery] Garbage collector +STEP: Creating a kubernetes client 07/27/23 02:20:14.958 +Jul 27 02:20:14.958: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename dns 07/27/23 02:20:14.958 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:20:14.998 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:20:15.008 +[BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 -[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] - test/e2e/apimachinery/garbage_collector.go:550 -STEP: create the deployment 06/12/23 21:43:33.049 -STEP: Wait for the Deployment to create new ReplicaSet 06/12/23 21:43:33.076 -STEP: delete the deployment 06/12/23 21:43:33.227 -STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 06/12/23 21:43:33.25 -STEP: Gathering metrics 06/12/23 21:43:33.853 -W0612 21:43:33.916308 23 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
-Jun 12 21:43:33.916: INFO: For apiserver_request_total: -For apiserver_request_latency_seconds: -For apiserver_init_events_total: -For garbage_collector_attempt_to_delete_queue_latency: -For garbage_collector_attempt_to_delete_work_duration: -For garbage_collector_attempt_to_orphan_queue_latency: -For garbage_collector_attempt_to_orphan_work_duration: -For garbage_collector_dirty_processing_latency_microseconds: -For garbage_collector_event_processing_latency_microseconds: -For garbage_collector_graph_changes_queue_latency: -For garbage_collector_graph_changes_work_duration: -For garbage_collector_orphan_processing_latency_microseconds: -For namespace_queue_latency: -For namespace_queue_latency_sum: -For namespace_queue_latency_count: -For namespace_retries: -For namespace_work_duration: -For namespace_work_duration_sum: -For namespace_work_duration_count: -For function_duration_seconds: -For errors_total: -For evicted_pods_total: - -[AfterEach] [sig-api-machinery] Garbage collector +[It] should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 +STEP: Creating a test externalName service 07/27/23 02:20:15.017 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:15.036 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:15.036 +STEP: creating a pod to probe DNS 07/27/23 02:20:15.036 +STEP: submitting the pod to kubernetes 07/27/23 02:20:15.036 +Jul 27 02:20:15.066: INFO: Waiting up to 15m0s for pod "dns-test-68a0b9dd-3fb3-4887-b0b9-65f180932d66" in namespace "dns-9775" to be "running" +Jul 27 02:20:15.076: INFO: Pod "dns-test-68a0b9dd-3fb3-4887-b0b9-65f180932d66": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205781ms +Jul 27 02:20:17.086: INFO: Pod "dns-test-68a0b9dd-3fb3-4887-b0b9-65f180932d66": Phase="Running", Reason="", readiness=true. Elapsed: 2.020354952s +Jul 27 02:20:17.086: INFO: Pod "dns-test-68a0b9dd-3fb3-4887-b0b9-65f180932d66" satisfied condition "running" +STEP: retrieving the pod 07/27/23 02:20:17.086 +STEP: looking for the results for each expected name from probers 07/27/23 02:20:17.095 +Jul 27 02:20:17.128: INFO: DNS probes using dns-test-68a0b9dd-3fb3-4887-b0b9-65f180932d66 succeeded + +STEP: deleting the pod 07/27/23 02:20:17.128 +STEP: changing the externalName to bar.example.com 07/27/23 02:20:17.149 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:17.179 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:17.18 +STEP: creating a second pod to probe DNS 07/27/23 02:20:17.18 +STEP: submitting the pod to kubernetes 07/27/23 02:20:17.18 +Jul 27 02:20:17.197: INFO: Waiting up to 15m0s for pod "dns-test-2487282c-1bfd-4681-8369-e44bb239daca" in namespace "dns-9775" to be "running" +Jul 27 02:20:17.204: INFO: Pod "dns-test-2487282c-1bfd-4681-8369-e44bb239daca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.436503ms +Jul 27 02:20:19.214: INFO: Pod "dns-test-2487282c-1bfd-4681-8369-e44bb239daca": Phase="Running", Reason="", readiness=true. Elapsed: 2.017058081s +Jul 27 02:20:19.214: INFO: Pod "dns-test-2487282c-1bfd-4681-8369-e44bb239daca" satisfied condition "running" +STEP: retrieving the pod 07/27/23 02:20:19.214 +STEP: looking for the results for each expected name from probers 07/27/23 02:20:19.222 +Jul 27 02:20:19.243: INFO: File wheezy_udp@dns-test-service-3.dns-9775.svc.cluster.local from pod dns-9775/dns-test-2487282c-1bfd-4681-8369-e44bb239daca contains 'foo.example.com. +' instead of 'bar.example.com.' +Jul 27 02:20:19.256: INFO: File jessie_udp@dns-test-service-3.dns-9775.svc.cluster.local from pod dns-9775/dns-test-2487282c-1bfd-4681-8369-e44bb239daca contains 'foo.example.com. +' instead of 'bar.example.com.' +Jul 27 02:20:19.256: INFO: Lookups using dns-9775/dns-test-2487282c-1bfd-4681-8369-e44bb239daca failed for: [wheezy_udp@dns-test-service-3.dns-9775.svc.cluster.local jessie_udp@dns-test-service-3.dns-9775.svc.cluster.local] + +Jul 27 02:20:24.286: INFO: DNS probes using dns-test-2487282c-1bfd-4681-8369-e44bb239daca succeeded + +STEP: deleting the pod 07/27/23 02:20:24.286 +STEP: changing the service to type=ClusterIP 07/27/23 02:20:24.304 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:24.356 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:24.356 +STEP: creating a third pod to probe DNS 07/27/23 02:20:24.357 +STEP: submitting the pod to kubernetes 07/27/23 02:20:24.368 +Jul 27 02:20:24.388: INFO: Waiting up to 15m0s for pod "dns-test-9476c1cf-ccb4-4e77-a773-96da5311f385" in namespace "dns-9775" to be "running" +Jul 27 02:20:24.397: INFO: Pod "dns-test-9476c1cf-ccb4-4e77-a773-96da5311f385": Phase="Pending", Reason="", readiness=false. Elapsed: 8.708097ms +Jul 27 02:20:26.407: INFO: Pod "dns-test-9476c1cf-ccb4-4e77-a773-96da5311f385": Phase="Running", Reason="", readiness=true. Elapsed: 2.019441349s +Jul 27 02:20:26.407: INFO: Pod "dns-test-9476c1cf-ccb4-4e77-a773-96da5311f385" satisfied condition "running" +STEP: retrieving the pod 07/27/23 02:20:26.407 +STEP: looking for the results for each expected name from probers 07/27/23 02:20:26.416 +Jul 27 02:20:26.468: INFO: DNS probes using dns-test-9476c1cf-ccb4-4e77-a773-96da5311f385 succeeded + +STEP: deleting the pod 07/27/23 02:20:26.468 +STEP: deleting the test externalName service 07/27/23 02:20:26.494 +[AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 -Jun 12 21:43:33.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +Jul 27 02:20:26.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 -STEP: Destroying namespace "gc-5086" for this suite. 
06/12/23 21:43:33.937 +STEP: Destroying namespace "dns-9775" for this suite. 07/27/23 02:20:26.552 ------------------------------ -• [1.049 seconds] -[sig-api-machinery] Garbage collector -test/e2e/apimachinery/framework.go:23 - should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] - test/e2e/apimachinery/garbage_collector.go:550 +• [SLOW TEST] [11.617 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Garbage collector + [BeforeEach] [sig-network] DNS set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:43:32.915 - Jun 12 21:43:32.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename gc 06/12/23 21:43:32.917 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:32.98 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:33.001 - [BeforeEach] [sig-api-machinery] Garbage collector + STEP: Creating a kubernetes client 07/27/23 02:20:14.958 + Jul 27 02:20:14.958: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename dns 07/27/23 02:20:14.958 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:20:14.998 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:20:15.008 + [BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 - [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] - test/e2e/apimachinery/garbage_collector.go:550 - STEP: create the deployment 06/12/23 21:43:33.049 - STEP: Wait for the Deployment to create new ReplicaSet 06/12/23 21:43:33.076 - STEP: delete the deployment 06/12/23 21:43:33.227 - STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 06/12/23 21:43:33.25 - STEP: Gathering metrics 06/12/23 21:43:33.853 - W0612 21:43:33.916308 23 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
- Jun 12 21:43:33.916: INFO: For apiserver_request_total: - For apiserver_request_latency_seconds: - For apiserver_init_events_total: - For garbage_collector_attempt_to_delete_queue_latency: - For garbage_collector_attempt_to_delete_work_duration: - For garbage_collector_attempt_to_orphan_queue_latency: - For garbage_collector_attempt_to_orphan_work_duration: - For garbage_collector_dirty_processing_latency_microseconds: - For garbage_collector_event_processing_latency_microseconds: - For garbage_collector_graph_changes_queue_latency: - For garbage_collector_graph_changes_work_duration: - For garbage_collector_orphan_processing_latency_microseconds: - For namespace_queue_latency: - For namespace_queue_latency_sum: - For namespace_queue_latency_count: - For namespace_retries: - For namespace_work_duration: - For namespace_work_duration_sum: - For namespace_work_duration_count: - For function_duration_seconds: - For errors_total: - For evicted_pods_total: + [It] should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 + STEP: Creating a test externalName service 07/27/23 02:20:15.017 + STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:15.036 + STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:15.036 + STEP: creating a pod to probe DNS 07/27/23 02:20:15.036 + STEP: submitting the pod to kubernetes 07/27/23 02:20:15.036 + Jul 27 02:20:15.066: INFO: Waiting up to 15m0s for pod "dns-test-68a0b9dd-3fb3-4887-b0b9-65f180932d66" in namespace "dns-9775" to be "running" + Jul 27 02:20:15.076: INFO: Pod "dns-test-68a0b9dd-3fb3-4887-b0b9-65f180932d66": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205781ms + Jul 27 02:20:17.086: INFO: Pod "dns-test-68a0b9dd-3fb3-4887-b0b9-65f180932d66": Phase="Running", Reason="", readiness=true. Elapsed: 2.020354952s + Jul 27 02:20:17.086: INFO: Pod "dns-test-68a0b9dd-3fb3-4887-b0b9-65f180932d66" satisfied condition "running" + STEP: retrieving the pod 07/27/23 02:20:17.086 + STEP: looking for the results for each expected name from probers 07/27/23 02:20:17.095 + Jul 27 02:20:17.128: INFO: DNS probes using dns-test-68a0b9dd-3fb3-4887-b0b9-65f180932d66 succeeded + + STEP: deleting the pod 07/27/23 02:20:17.128 + STEP: changing the externalName to bar.example.com 07/27/23 02:20:17.149 + STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:17.179 + STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:17.18 + STEP: creating a second pod to probe DNS 07/27/23 02:20:17.18 + STEP: submitting the pod to kubernetes 07/27/23 02:20:17.18 + Jul 27 02:20:17.197: INFO: Waiting up to 15m0s for pod "dns-test-2487282c-1bfd-4681-8369-e44bb239daca" in namespace "dns-9775" to be "running" + Jul 27 02:20:17.204: INFO: Pod "dns-test-2487282c-1bfd-4681-8369-e44bb239daca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.436503ms + Jul 27 02:20:19.214: INFO: Pod "dns-test-2487282c-1bfd-4681-8369-e44bb239daca": Phase="Running", Reason="", readiness=true. Elapsed: 2.017058081s + Jul 27 02:20:19.214: INFO: Pod "dns-test-2487282c-1bfd-4681-8369-e44bb239daca" satisfied condition "running" + STEP: retrieving the pod 07/27/23 02:20:19.214 + STEP: looking for the results for each expected name from probers 07/27/23 02:20:19.222 + Jul 27 02:20:19.243: INFO: File wheezy_udp@dns-test-service-3.dns-9775.svc.cluster.local from pod dns-9775/dns-test-2487282c-1bfd-4681-8369-e44bb239daca contains 'foo.example.com. + ' instead of 'bar.example.com.' + Jul 27 02:20:19.256: INFO: File jessie_udp@dns-test-service-3.dns-9775.svc.cluster.local from pod dns-9775/dns-test-2487282c-1bfd-4681-8369-e44bb239daca contains 'foo.example.com. + ' instead of 'bar.example.com.' + Jul 27 02:20:19.256: INFO: Lookups using dns-9775/dns-test-2487282c-1bfd-4681-8369-e44bb239daca failed for: [wheezy_udp@dns-test-service-3.dns-9775.svc.cluster.local jessie_udp@dns-test-service-3.dns-9775.svc.cluster.local] + + Jul 27 02:20:24.286: INFO: DNS probes using dns-test-2487282c-1bfd-4681-8369-e44bb239daca succeeded + + STEP: deleting the pod 07/27/23 02:20:24.286 + STEP: changing the service to type=ClusterIP 07/27/23 02:20:24.304 + STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:24.356 + STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9775.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9775.svc.cluster.local; sleep 1; done + 07/27/23 02:20:24.356 + STEP: creating a third pod to probe DNS 07/27/23 02:20:24.357 + STEP: submitting the pod to kubernetes 07/27/23 02:20:24.368 + Jul 27 02:20:24.388: INFO: Waiting up to 15m0s for pod "dns-test-9476c1cf-ccb4-4e77-a773-96da5311f385" in namespace "dns-9775" to be "running" + Jul 27 02:20:24.397: INFO: Pod "dns-test-9476c1cf-ccb4-4e77-a773-96da5311f385": Phase="Pending", Reason="", readiness=false. Elapsed: 8.708097ms + Jul 27 02:20:26.407: INFO: Pod "dns-test-9476c1cf-ccb4-4e77-a773-96da5311f385": Phase="Running", Reason="", readiness=true. Elapsed: 2.019441349s + Jul 27 02:20:26.407: INFO: Pod "dns-test-9476c1cf-ccb4-4e77-a773-96da5311f385" satisfied condition "running" + STEP: retrieving the pod 07/27/23 02:20:26.407 + STEP: looking for the results for each expected name from probers 07/27/23 02:20:26.416 + Jul 27 02:20:26.468: INFO: DNS probes using dns-test-9476c1cf-ccb4-4e77-a773-96da5311f385 succeeded + + STEP: deleting the pod 07/27/23 02:20:26.468 + STEP: deleting the test externalName service 07/27/23 02:20:26.494 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Jul 27 02:20:26.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-9775" for this suite. 
07/27/23 02:20:26.552 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 +[BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:20:26.576 +Jul 27 02:20:26.576: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-runtime 07/27/23 02:20:26.577 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:20:26.62 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:20:26.629 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 +[It] should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 07/27/23 02:20:26.669 +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' 07/27/23 02:20:45.877 +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 07/27/23 02:20:45.885 +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' 07/27/23 02:20:45.902 +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] 07/27/23 02:20:45.902 +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 07/27/23 02:20:45.966 +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' 07/27/23 02:20:49.004 +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition 07/27/23 02:20:51.033 +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' 07/27/23 02:20:51.051 +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 07/27/23 02:20:51.051 +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 07/27/23 02:20:51.098 +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 07/27/23 02:20:52.133 +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 07/27/23 02:20:55.166 +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' 07/27/23 02:20:55.184 +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 07/27/23 02:20:55.184 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 +Jul 27 02:20:55.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 +STEP: Destroying namespace "container-runtime-3553" for this suite. 
07/27/23 02:20:55.256 +------------------------------ +• [SLOW TEST] [28.792 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + when starting a container that exits + test/e2e/common/node/runtime.go:45 + should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 - [AfterEach] [sig-api-machinery] Garbage collector + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:20:26.576 + Jul 27 02:20:26.576: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-runtime 07/27/23 02:20:26.577 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:20:26.62 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:20:26.629 + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 + [It] should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 + STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 07/27/23 02:20:26.669 + STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' 07/27/23 02:20:45.877 + STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 07/27/23 02:20:45.885 + STEP: Container 'terminate-cmd-rpa': should get the expected 'State' 07/27/23 02:20:45.902 + STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] 07/27/23 02:20:45.902 + STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 07/27/23 02:20:45.966 + STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' 07/27/23 02:20:49.004 + STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition 07/27/23 02:20:51.033 + STEP: Container 'terminate-cmd-rpof': should get the expected 'State' 07/27/23 02:20:51.051 + STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 07/27/23 02:20:51.051 + STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 07/27/23 02:20:51.098 + STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 07/27/23 02:20:52.133 + STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 07/27/23 02:20:55.166 + STEP: Container 'terminate-cmd-rpn': should get the expected 'State' 07/27/23 02:20:55.184 + STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 07/27/23 02:20:55.184 + [AfterEach] [sig-node] Container Runtime test/e2e/framework/node/init/init.go:32 - Jun 12 21:43:33.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + Jul 27 02:20:55.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-node] Container Runtime dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-node] Container Runtime tear down framework | framework.go:193 - STEP: Destroying namespace "gc-5086" for this suite. 06/12/23 21:43:33.937 + STEP: Destroying namespace "container-runtime-3553" for this suite. 
07/27/23 02:20:55.256 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:20:55.368 +Jul 27 02:20:55.368: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename init-container 07/27/23 02:20:55.37 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:20:55.427 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:20:55.438 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 +STEP: creating the pod 07/27/23 02:20:55.449 +Jul 27 02:20:55.449: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Jul 27 02:21:01.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "init-container-3525" for this suite. 
07/27/23 02:21:01.179 +------------------------------ +• [SLOW TEST] [5.843 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:20:55.368 + Jul 27 02:20:55.368: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename init-container 07/27/23 02:20:55.37 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:20:55.427 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:20:55.438 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 + [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 + STEP: creating the pod 07/27/23 02:20:55.449 + Jul 27 02:20:55.449: INFO: PodSpec: initContainers in spec.initContainers + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Jul 27 02:21:01.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "init-container-3525" for this suite. 
07/27/23 02:21:01.179 + << End Captured GinkgoWriter Output +------------------------------ +SS ------------------------------ [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] test/e2e/common/node/security_context.go:528 [BeforeEach] [sig-node] Security Context set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:43:33.973 -Jun 12 21:43:33.974: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename security-context-test 06/12/23 21:43:33.976 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:34.162 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:34.198 +STEP: Creating a kubernetes client 07/27/23 02:21:01.212 +Jul 27 02:21:01.212: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename security-context-test 07/27/23 02:21:01.213 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:01.259 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:01.281 [BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Security Context test/e2e/common/node/security_context.go:50 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] test/e2e/common/node/security_context.go:528 -Jun 12 21:43:34.261: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4" in namespace "security-context-test-7115" to be "Succeeded or Failed" -Jun 12 21:43:34.305: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.709113ms -Jun 12 21:43:36.319: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057374647s -Jun 12 21:43:38.332: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070290648s -Jun 12 21:43:40.321: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059458598s -Jun 12 21:43:42.319: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057323686s -Jun 12 21:43:42.319: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4" satisfied condition "Succeeded or Failed" -Jun 12 21:43:42.389: INFO: Got logs for pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": "ip: RTNETLINK answers: Operation not permitted\n" +Jul 27 02:21:01.318: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1" in namespace "security-context-test-8178" to be "Succeeded or Failed" +Jul 27 02:21:01.327: INFO: Pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.714814ms +Jul 27 02:21:03.336: INFO: Pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018099783s +Jul 27 02:21:05.337: INFO: Pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018924471s +Jul 27 02:21:05.337: INFO: Pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1" satisfied condition "Succeeded or Failed" +Jul 27 02:21:05.355: INFO: Got logs for pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 -Jun 12 21:43:42.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:21:05.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 -STEP: Destroying namespace "security-context-test-7115" for this suite. 06/12/23 21:43:42.443 +STEP: Destroying namespace "security-context-test-8178" for this suite. 07/27/23 02:21:05.369 ------------------------------ -• [SLOW TEST] [8.495 seconds] +• [4.183 seconds] [sig-node] Security Context test/e2e/common/node/framework.go:23 When creating a pod with privileged @@ -24188,3895 +22789,2935 @@ test/e2e/common/node/framework.go:23 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-node] Security Context set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:43:33.973 - Jun 12 21:43:33.974: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename security-context-test 06/12/23 21:43:33.976 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:34.162 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:34.198 + STEP: Creating a kubernetes client 07/27/23 02:21:01.212 + Jul 27 02:21:01.212: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename security-context-test 07/27/23 02:21:01.213 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:01.259 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:01.281 [BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Security Context test/e2e/common/node/security_context.go:50 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] test/e2e/common/node/security_context.go:528 - Jun 12 21:43:34.261: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4" in namespace "security-context-test-7115" to be "Succeeded or Failed" - Jun 12 21:43:34.305: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.709113ms - Jun 12 21:43:36.319: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057374647s - Jun 12 21:43:38.332: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070290648s - Jun 12 21:43:40.321: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059458598s - Jun 12 21:43:42.319: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.057323686s - Jun 12 21:43:42.319: INFO: Pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4" satisfied condition "Succeeded or Failed" - Jun 12 21:43:42.389: INFO: Got logs for pod "busybox-privileged-false-88189053-8599-4060-9d06-e45d049715b4": "ip: RTNETLINK answers: Operation not permitted\n" + Jul 27 02:21:01.318: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1" in namespace "security-context-test-8178" to be "Succeeded or Failed" + Jul 27 02:21:01.327: INFO: Pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.714814ms + Jul 27 02:21:03.336: INFO: Pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018099783s + Jul 27 02:21:05.337: INFO: Pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018924471s + Jul 27 02:21:05.337: INFO: Pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1" satisfied condition "Succeeded or Failed" + Jul 27 02:21:05.355: INFO: Got logs for pod "busybox-privileged-false-b52de837-e08f-4819-a902-8e541217daa1": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 - Jun 12 21:43:42.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:21:05.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 - STEP: Destroying namespace "security-context-test-7115" for this suite. 06/12/23 21:43:42.443 + STEP: Destroying namespace "security-context-test-8178" for this suite. 
07/27/23 02:21:05.369 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSS ------------------------------ -[sig-storage] Projected downwardAPI - should provide podname only [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:53 -[BeforeEach] [sig-storage] Projected downwardAPI +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1812 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:43:42.469 -Jun 12 21:43:42.469: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 21:43:42.47 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:42.52 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:42.534 -[BeforeEach] [sig-storage] Projected downwardAPI +STEP: Creating a kubernetes client 07/27/23 02:21:05.395 +Jul 27 02:21:05.395: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:21:05.396 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:05.487 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:05.496 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 -[It] should provide podname only [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:53 -STEP: Creating a pod to test downward API volume plugin 06/12/23 21:43:42.543 -Jun 12 21:43:42.583: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5" in namespace "projected-8940" to be "Succeeded or Failed" -Jun 12 21:43:42.596: INFO: Pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.691818ms -Jun 12 21:43:44.613: INFO: Pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028948681s -Jun 12 21:43:46.611: INFO: Pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027748372s -Jun 12 21:43:48.611: INFO: Pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027879629s -STEP: Saw pod success 06/12/23 21:43:48.612 -Jun 12 21:43:48.614: INFO: Pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5" satisfied condition "Succeeded or Failed" -Jun 12 21:43:48.626: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5 container client-container: -STEP: delete the pod 06/12/23 21:43:48.659 -Jun 12 21:43:48.691: INFO: Waiting for pod downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5 to disappear -Jun 12 21:43:48.702: INFO: Pod downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5 no longer exists -[AfterEach] [sig-storage] Projected downwardAPI +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1812 +STEP: Starting the proxy 07/27/23 02:21:05.506 +Jul 27 02:21:05.506: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-124 proxy --unix-socket=/tmp/kubectl-proxy-unix1307361643/test' +STEP: retrieving proxy /api/ output 07/27/23 02:21:05.558 +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 21:43:48.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +Jul 27 02:21:05.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "projected-8940" for this suite. 06/12/23 21:43:48.724 +STEP: Destroying namespace "kubectl-124" for this suite. 
07/27/23 02:21:05.57 ------------------------------ -• [SLOW TEST] [6.279 seconds] -[sig-storage] Projected downwardAPI -test/e2e/common/storage/framework.go:23 - should provide podname only [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:53 +• [0.203 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Proxy server + test/e2e/kubectl/kubectl.go:1780 + should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1812 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:43:42.469 - Jun 12 21:43:42.469: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 21:43:42.47 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:42.52 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:42.534 - [BeforeEach] [sig-storage] Projected downwardAPI + STEP: Creating a kubernetes client 07/27/23 02:21:05.395 + Jul 27 02:21:05.395: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:21:05.396 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:05.487 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:05.496 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 - [It] should provide podname only [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:53 - STEP: Creating a pod to test downward API volume plugin 06/12/23 21:43:42.543 - Jun 12 21:43:42.583: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5" in namespace "projected-8940" to be "Succeeded or Failed" - Jun 12 21:43:42.596: INFO: Pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.691818ms - Jun 12 21:43:44.613: INFO: Pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028948681s - Jun 12 21:43:46.611: INFO: Pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027748372s - Jun 12 21:43:48.611: INFO: Pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027879629s - STEP: Saw pod success 06/12/23 21:43:48.612 - Jun 12 21:43:48.614: INFO: Pod "downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5" satisfied condition "Succeeded or Failed" - Jun 12 21:43:48.626: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5 container client-container: - STEP: delete the pod 06/12/23 21:43:48.659 - Jun 12 21:43:48.691: INFO: Waiting for pod downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5 to disappear - Jun 12 21:43:48.702: INFO: Pod downwardapi-volume-b37752fb-cc73-4a89-9a78-02e09c735fb5 no longer exists - [AfterEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1812 + STEP: Starting the proxy 07/27/23 02:21:05.506 + Jul 27 02:21:05.506: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-124 proxy --unix-socket=/tmp/kubectl-proxy-unix1307361643/test' + STEP: retrieving proxy /api/ output 07/27/23 02:21:05.558 + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 21:43:48.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + Jul 27 02:21:05.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "projected-8940" for this suite. 06/12/23 21:43:48.724 + STEP: Destroying namespace "kubectl-124" for this suite. 
07/27/23 02:21:05.57 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Watchers - should receive events on concurrent watches in same order [Conformance] - test/e2e/apimachinery/watch.go:334 -[BeforeEach] [sig-api-machinery] Watchers +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 +[BeforeEach] [sig-apps] ReplicationController set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:43:48.755 -Jun 12 21:43:48.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename watch 06/12/23 21:43:48.759 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:48.813 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:48.827 -[BeforeEach] [sig-api-machinery] Watchers +STEP: Creating a kubernetes client 07/27/23 02:21:05.598 +Jul 27 02:21:05.598: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replication-controller 07/27/23 02:21:05.599 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:05.64 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:05.648 +[BeforeEach] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:31 -[It] should receive events on concurrent watches in same order [Conformance] - test/e2e/apimachinery/watch.go:334 -STEP: getting a starting resourceVersion 06/12/23 21:43:48.846 -STEP: starting a background goroutine to produce watch events 06/12/23 21:43:48.859 -STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 06/12/23 21:43:48.859 -[AfterEach] [sig-api-machinery] Watchers +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 +STEP: creating a ReplicationController 07/27/23 02:21:05.687 +STEP: waiting for RC to be added 07/27/23 02:21:05.724 +STEP: waiting for available Replicas 07/27/23 02:21:05.724 +STEP: patching ReplicationController 07/27/23 02:21:07.193 +STEP: waiting for RC to be modified 07/27/23 02:21:07.222 +STEP: patching ReplicationController status 07/27/23 02:21:07.223 +STEP: waiting for RC to be modified 07/27/23 02:21:07.272 +STEP: waiting for available Replicas 07/27/23 02:21:07.273 +STEP: fetching ReplicationController status 07/27/23 02:21:07.282 +STEP: patching ReplicationController scale 07/27/23 02:21:07.299 +STEP: waiting for RC to be modified 07/27/23 02:21:07.323 +STEP: waiting for ReplicationController's scale to be the max amount 07/27/23 02:21:07.324 +STEP: fetching ReplicationController; ensuring that it's patched 07/27/23 02:21:09.085 +STEP: updating ReplicationController status 07/27/23 02:21:09.099 +STEP: waiting for RC to be modified 07/27/23 02:21:09.122 +STEP: listing all ReplicationControllers 07/27/23 02:21:09.122 +STEP: checking that ReplicationController has expected values 07/27/23 02:21:09.14 +STEP: deleting ReplicationControllers by collection 07/27/23 02:21:09.14 +STEP: waiting for ReplicationController to have a DELETED watchEvent 07/27/23 02:21:09.182 +[AfterEach] [sig-apps] ReplicationController test/e2e/framework/node/init/init.go:32 -Jun 12 21:43:51.576: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Watchers +Jul 27 02:21:09.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Watchers +[DeferCleanup (Each)] [sig-apps] ReplicationController dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Watchers +[DeferCleanup (Each)] [sig-apps] ReplicationController tear down framework | framework.go:193 -STEP: Destroying namespace "watch-7707" for this suite. 06/12/23 21:43:51.625 +STEP: Destroying namespace "replication-controller-3576" for this suite. 07/27/23 02:21:09.299 ------------------------------ -• [2.929 seconds] -[sig-api-machinery] Watchers -test/e2e/apimachinery/framework.go:23 - should receive events on concurrent watches in same order [Conformance] - test/e2e/apimachinery/watch.go:334 +• [3.727 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Watchers + [BeforeEach] [sig-apps] ReplicationController set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:43:48.755 - Jun 12 21:43:48.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename watch 06/12/23 21:43:48.759 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:48.813 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:48.827 - [BeforeEach] [sig-api-machinery] Watchers + STEP: Creating a kubernetes client 07/27/23 02:21:05.598 + Jul 27 02:21:05.598: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replication-controller 07/27/23 02:21:05.599 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:05.64 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:05.648 + [BeforeEach] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:31 - [It] should receive events on concurrent watches in same order [Conformance] - test/e2e/apimachinery/watch.go:334 - STEP: getting a starting resourceVersion 06/12/23 21:43:48.846 - STEP: starting a background goroutine to produce watch events 06/12/23 21:43:48.859 - STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 06/12/23 21:43:48.859 - [AfterEach] [sig-api-machinery] Watchers + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 + STEP: creating a ReplicationController 07/27/23 02:21:05.687 + STEP: waiting for RC to be added 07/27/23 02:21:05.724 + STEP: waiting for available Replicas 07/27/23 02:21:05.724 + STEP: patching ReplicationController 07/27/23 02:21:07.193 + STEP: waiting for RC to be modified 07/27/23 02:21:07.222 + STEP: patching ReplicationController status 07/27/23 02:21:07.223 + STEP: waiting for RC to be modified 07/27/23 02:21:07.272 + STEP: waiting for available Replicas 07/27/23 02:21:07.273 + STEP: fetching ReplicationController status 07/27/23 02:21:07.282 + STEP: patching ReplicationController scale 07/27/23 02:21:07.299 
+ STEP: waiting for RC to be modified 07/27/23 02:21:07.323 + STEP: waiting for ReplicationController's scale to be the max amount 07/27/23 02:21:07.324 + STEP: fetching ReplicationController; ensuring that it's patched 07/27/23 02:21:09.085 + STEP: updating ReplicationController status 07/27/23 02:21:09.099 + STEP: waiting for RC to be modified 07/27/23 02:21:09.122 + STEP: listing all ReplicationControllers 07/27/23 02:21:09.122 + STEP: checking that ReplicationController has expected values 07/27/23 02:21:09.14 + STEP: deleting ReplicationControllers by collection 07/27/23 02:21:09.14 + STEP: waiting for ReplicationController to have a DELETED watchEvent 07/27/23 02:21:09.182 + [AfterEach] [sig-apps] ReplicationController test/e2e/framework/node/init/init.go:32 - Jun 12 21:43:51.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Watchers + Jul 27 02:21:09.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Watchers + [DeferCleanup (Each)] [sig-apps] ReplicationController dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Watchers + [DeferCleanup (Each)] [sig-apps] ReplicationController tear down framework | framework.go:193 - STEP: Destroying namespace "watch-7707" for this suite. 06/12/23 21:43:51.625 + STEP: Destroying namespace "replication-controller-3576" for this suite. 07/27/23 02:21:09.299 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSS ------------------------------ -[sig-apps] CronJob - should not schedule jobs when suspended [Slow] [Conformance] - test/e2e/apps/cronjob.go:96 -[BeforeEach] [sig-apps] CronJob +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/apimachinery/resource_quota.go:230 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:43:51.688 -Jun 12 21:43:51.689: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename cronjob 06/12/23 21:43:51.703 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:51.76 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:51.773 -[BeforeEach] [sig-apps] CronJob +STEP: Creating a kubernetes client 07/27/23 02:21:09.326 +Jul 27 02:21:09.326: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 02:21:09.327 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:09.367 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:09.378 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[It] should not schedule jobs when suspended [Slow] [Conformance] - test/e2e/apps/cronjob.go:96 -STEP: Creating a suspended cronjob 06/12/23 21:43:51.786 -STEP: Ensuring no jobs are scheduled 06/12/23 21:43:51.806 -STEP: Ensuring no job exists by listing jobs explicitly 06/12/23 21:48:51.83 -STEP: Removing cronjob 06/12/23 21:48:51.84 -[AfterEach] [sig-apps] CronJob +[It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:230 +STEP: Counting existing ResourceQuota 07/27/23 02:21:09.39 +STEP: Creating a ResourceQuota 07/27/23 02:21:14.44 +STEP: Ensuring resource quota status is calculated 07/27/23 02:21:14.453 +STEP: Creating a Pod that fits quota 07/27/23 02:21:16.461 +STEP: Ensuring ResourceQuota status captures the pod usage 07/27/23 02:21:16.495 +STEP: Not allowing a pod to be created that exceeds remaining quota 07/27/23 02:21:18.531 +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) 07/27/23 02:21:18.543 +STEP: Ensuring a pod cannot update its resource requirements 07/27/23 02:21:18.553 +STEP: Ensuring attempts to update pod resource requirements did not change quota usage 07/27/23 02:21:18.607 +STEP: Deleting the pod 07/27/23 02:21:20.616 +STEP: Ensuring resource quota status released the pod usage 07/27/23 02:21:20.639 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 21:48:51.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] CronJob +Jul 27 02:21:22.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] CronJob +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] CronJob +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "cronjob-9127" for this suite. 06/12/23 21:48:51.877 +STEP: Destroying namespace "resourcequota-1870" for this suite. 07/27/23 02:21:22.667 ------------------------------ -• [SLOW TEST] [300.210 seconds] -[sig-apps] CronJob -test/e2e/apps/framework.go:23 - should not schedule jobs when suspended [Slow] [Conformance] - test/e2e/apps/cronjob.go:96 +• [SLOW TEST] [13.377 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a pod. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:230 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] CronJob + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:43:51.688 - Jun 12 21:43:51.689: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename cronjob 06/12/23 21:43:51.703 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:43:51.76 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:43:51.773 - [BeforeEach] [sig-apps] CronJob + STEP: Creating a kubernetes client 07/27/23 02:21:09.326 + Jul 27 02:21:09.326: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 02:21:09.327 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:09.367 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:09.378 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [It] should not schedule jobs when suspended [Slow] [Conformance] - test/e2e/apps/cronjob.go:96 - STEP: Creating a suspended cronjob 06/12/23 21:43:51.786 - STEP: Ensuring no jobs are scheduled 06/12/23 21:43:51.806 - STEP: Ensuring no job exists by listing jobs explicitly 06/12/23 21:48:51.83 - STEP: Removing cronjob 06/12/23 21:48:51.84 - [AfterEach] [sig-apps] CronJob + [It] should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/apimachinery/resource_quota.go:230 + STEP: Counting existing ResourceQuota 07/27/23 02:21:09.39 + STEP: Creating a ResourceQuota 07/27/23 02:21:14.44 + STEP: Ensuring resource quota status is calculated 07/27/23 02:21:14.453 + STEP: Creating a Pod that fits quota 07/27/23 02:21:16.461 + STEP: Ensuring ResourceQuota status captures the pod usage 07/27/23 02:21:16.495 + STEP: Not allowing a pod to be created that exceeds remaining quota 07/27/23 02:21:18.531 + STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) 07/27/23 02:21:18.543 + STEP: Ensuring a pod cannot update its resource requirements 07/27/23 02:21:18.553 + STEP: Ensuring attempts to update pod resource requirements did not change quota usage 07/27/23 02:21:18.607 + STEP: Deleting the pod 07/27/23 02:21:20.616 + STEP: Ensuring resource quota status released the pod usage 07/27/23 02:21:20.639 + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 21:48:51.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] CronJob + Jul 27 02:21:22.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] CronJob + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] CronJob + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "cronjob-9127" for this suite. 06/12/23 21:48:51.877 + STEP: Destroying namespace "resourcequota-1870" for this suite. 
07/27/23 02:21:22.667 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSSSSS ------------------------------ -[sig-node] PodTemplates - should replace a pod template [Conformance] - test/e2e/common/node/podtemplates.go:176 -[BeforeEach] [sig-node] PodTemplates +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 +[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:48:51.9 -Jun 12 21:48:51.900: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename podtemplate 06/12/23 21:48:51.902 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:48:51.952 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:48:51.99 -[BeforeEach] [sig-node] PodTemplates +STEP: Creating a kubernetes client 07/27/23 02:21:22.703 +Jul 27 02:21:22.703: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename var-expansion 07/27/23 02:21:22.704 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:22.787 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:22.798 +[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 -[It] should replace a pod template [Conformance] - test/e2e/common/node/podtemplates.go:176 -STEP: Create a pod template 06/12/23 21:48:52.002 -STEP: Replace a pod template 06/12/23 21:48:52.03 -Jun 12 21:48:52.065: INFO: Found updated podtemplate annotation: "true" - -[AfterEach] [sig-node] PodTemplates +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 +STEP: Creating a pod to test substitution in container's args 07/27/23 02:21:22.808 +Jul 27 02:21:22.833: INFO: Waiting up to 5m0s for pod "var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0" in namespace "var-expansion-9164" to be "Succeeded or Failed" +Jul 27 02:21:22.848: INFO: Pod "var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.291849ms +Jul 27 02:21:24.860: INFO: Pod "var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026794836s +Jul 27 02:21:26.858: INFO: Pod "var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024831003s +STEP: Saw pod success 07/27/23 02:21:26.858 +Jul 27 02:21:26.858: INFO: Pod "var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0" satisfied condition "Succeeded or Failed" +Jul 27 02:21:26.893: INFO: Trying to get logs from node 10.245.128.19 pod var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0 container dapi-container: +STEP: delete the pod 07/27/23 02:21:26.912 +Jul 27 02:21:26.933: INFO: Waiting for pod var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0 to disappear +Jul 27 02:21:26.941: INFO: Pod var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0 no longer exists +[AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 -Jun 12 21:48:52.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] PodTemplates +Jul 27 02:21:26.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] PodTemplates +[DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] PodTemplates +[DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 -STEP: Destroying namespace "podtemplate-87" for this suite. 06/12/23 21:48:52.086 +STEP: Destroying namespace "var-expansion-9164" for this suite. 07/27/23 02:21:26.955 ------------------------------ -• [0.218 seconds] -[sig-node] PodTemplates +• [4.275 seconds] +[sig-node] Variable Expansion test/e2e/common/node/framework.go:23 - should replace a pod template [Conformance] - test/e2e/common/node/podtemplates.go:176 + should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] PodTemplates + [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:48:51.9 - Jun 12 21:48:51.900: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename podtemplate 06/12/23 21:48:51.902 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:48:51.952 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:48:51.99 - [BeforeEach] [sig-node] PodTemplates + STEP: Creating a kubernetes client 07/27/23 02:21:22.703 + Jul 27 02:21:22.703: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename var-expansion 07/27/23 02:21:22.704 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:22.787 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:22.798 + [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 - [It] should replace a pod template [Conformance] - test/e2e/common/node/podtemplates.go:176 - STEP: Create a pod template 06/12/23 21:48:52.002 - STEP: Replace a pod template 06/12/23 21:48:52.03 - Jun 12 21:48:52.065: INFO: Found updated podtemplate annotation: "true" - - [AfterEach] [sig-node] PodTemplates + [It] should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 + STEP: Creating a pod to test substitution in container's args 07/27/23 02:21:22.808 + Jul 27 02:21:22.833: INFO: Waiting up to 5m0s for pod "var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0" in namespace 
"var-expansion-9164" to be "Succeeded or Failed" + Jul 27 02:21:22.848: INFO: Pod "var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.291849ms + Jul 27 02:21:24.860: INFO: Pod "var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026794836s + Jul 27 02:21:26.858: INFO: Pod "var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024831003s + STEP: Saw pod success 07/27/23 02:21:26.858 + Jul 27 02:21:26.858: INFO: Pod "var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0" satisfied condition "Succeeded or Failed" + Jul 27 02:21:26.893: INFO: Trying to get logs from node 10.245.128.19 pod var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0 container dapi-container: + STEP: delete the pod 07/27/23 02:21:26.912 + Jul 27 02:21:26.933: INFO: Waiting for pod var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0 to disappear + Jul 27 02:21:26.941: INFO: Pod var-expansion-c92d63a9-61d1-4452-9a01-62db2da6d6f0 no longer exists + [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 - Jun 12 21:48:52.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] PodTemplates + Jul 27 02:21:26.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] PodTemplates + [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] PodTemplates + [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 - STEP: Destroying namespace "podtemplate-87" for this suite. 06/12/23 21:48:52.086 + STEP: Destroying namespace "var-expansion-9164" for this suite. 
07/27/23 02:21:26.955 << End Captured GinkgoWriter Output ------------------------------ -SSS +S ------------------------------ -[sig-apps] ReplicationController - should release no longer matching pods [Conformance] - test/e2e/apps/rc.go:101 -[BeforeEach] [sig-apps] ReplicationController +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 +[BeforeEach] [sig-api-machinery] Watchers set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:48:52.12 -Jun 12 21:48:52.120: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replication-controller 06/12/23 21:48:52.123 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:48:52.185 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:48:52.201 -[BeforeEach] [sig-apps] ReplicationController +STEP: Creating a kubernetes client 07/27/23 02:21:26.978 +Jul 27 02:21:26.978: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename watch 07/27/23 02:21:26.979 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:27.021 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:27.029 +[BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 -[It] should release no longer matching pods [Conformance] - test/e2e/apps/rc.go:101 -STEP: Given a ReplicationController is created 06/12/23 21:48:52.221 -STEP: When the matched label of one of its pods change 06/12/23 21:48:52.239 -Jun 12 21:48:52.260: INFO: Pod name pod-release: Found 0 pods out of 1 -Jun 12 21:48:57.376: INFO: Pod name pod-release: Found 1 pods out of 1 -STEP: Then the pod is released 06/12/23 21:48:57.431 -[AfterEach] [sig-apps] ReplicationController +[It] should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 +STEP: creating a new configmap 07/27/23 02:21:27.038 +STEP: modifying the configmap once 07/27/23 02:21:27.061 +STEP: modifying the configmap a second time 07/27/23 02:21:27.096 +STEP: deleting the configmap 07/27/23 02:21:27.132 +STEP: creating a watch on configmaps from the resource version returned by the first update 07/27/23 02:21:27.158 +STEP: Expecting to observe notifications for all changes to the configmap after the first update 07/27/23 02:21:27.165 +Jul 27 02:21:27.165: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8290 5dbd9abe-de1c-4858-8645-6267a0df3431 101236 0 2023-07-27 02:21:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-07-27 02:21:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jul 27 02:21:27.165: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8290 5dbd9abe-de1c-4858-8645-6267a0df3431 101239 0 2023-07-27 02:21:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-07-27 02:21:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers test/e2e/framework/node/init/init.go:32 -Jun 12 21:48:58.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicationController +Jul 27 02:21:27.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-api-machinery] Watchers dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-api-machinery] Watchers tear down framework | framework.go:193 -STEP: Destroying namespace "replication-controller-4604" for this suite. 06/12/23 21:48:58.477 +STEP: Destroying namespace "watch-8290" for this suite. 07/27/23 02:21:27.177 ------------------------------ -• [SLOW TEST] [6.388 seconds] -[sig-apps] ReplicationController -test/e2e/apps/framework.go:23 - should release no longer matching pods [Conformance] - test/e2e/apps/rc.go:101 +• [0.220 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicationController + [BeforeEach] [sig-api-machinery] Watchers set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:48:52.12 - Jun 12 21:48:52.120: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replication-controller 06/12/23 21:48:52.123 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:48:52.185 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:48:52.201 - [BeforeEach] [sig-apps] ReplicationController + STEP: Creating a kubernetes client 07/27/23 02:21:26.978 + Jul 27 02:21:26.978: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename watch 07/27/23 02:21:26.979 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:27.021 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:27.029 + [BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 - [It] should release no longer matching pods [Conformance] - test/e2e/apps/rc.go:101 - STEP: Given a ReplicationController is created 06/12/23 21:48:52.221 - STEP: When the matched label of one of its pods change 06/12/23 21:48:52.239 - Jun 12 21:48:52.260: INFO: Pod name pod-release: Found 0 pods out of 1 - Jun 12 21:48:57.376: INFO: Pod name pod-release: Found 1 pods out of 1 - STEP: Then the pod is released 06/12/23 21:48:57.431 - [AfterEach] [sig-apps] ReplicationController + [It] should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 + STEP: creating a new configmap 07/27/23 02:21:27.038 + STEP: modifying the configmap once 07/27/23 02:21:27.061 + STEP: modifying the configmap a second time 07/27/23 02:21:27.096 + STEP: deleting the configmap 07/27/23 02:21:27.132 + STEP: creating a watch on configmaps from the resource version returned by the first update 07/27/23 02:21:27.158 + STEP: Expecting to observe notifications for all changes to the configmap after 
the first update 07/27/23 02:21:27.165 + Jul 27 02:21:27.165: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8290 5dbd9abe-de1c-4858-8645-6267a0df3431 101236 0 2023-07-27 02:21:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-07-27 02:21:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Jul 27 02:21:27.165: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8290 5dbd9abe-de1c-4858-8645-6267a0df3431 101239 0 2023-07-27 02:21:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-07-27 02:21:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers test/e2e/framework/node/init/init.go:32 - Jun 12 21:48:58.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicationController + Jul 27 02:21:27.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-api-machinery] Watchers dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-api-machinery] Watchers tear down framework | framework.go:193 - STEP: Destroying namespace "replication-controller-4604" for this suite. 06/12/23 21:48:58.477 + STEP: Destroying namespace "watch-8290" for this suite. 
07/27/23 02:21:27.177 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] EmptyDir wrapper volumes - should not cause race condition when used for configmaps [Serial] [Conformance] - test/e2e/storage/empty_dir_wrapper.go:189 -[BeforeEach] [sig-storage] EmptyDir wrapper volumes +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:224 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:48:58.516 -Jun 12 21:48:58.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir-wrapper 06/12/23 21:48:58.518 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:48:58.654 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:48:58.68 -[BeforeEach] [sig-storage] EmptyDir wrapper volumes +STEP: Creating a kubernetes client 07/27/23 02:21:27.2 +Jul 27 02:21:27.200: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sched-preemption 07/27/23 02:21:27.201 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:27.245 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:27.254 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] should not cause race condition when used for configmaps [Serial] [Conformance] - test/e2e/storage/empty_dir_wrapper.go:189 -STEP: Creating 50 configmaps 06/12/23 21:48:58.714 -STEP: Creating RC which spawns configmap-volume pods 06/12/23 21:49:00.091 -Jun 12 21:49:00.143: INFO: Pod name wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485: Found 0 pods out of 5 -Jun 12 21:49:05.194: INFO: Pod name wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485: Found 5 pods out of 5 -STEP: Ensuring each pod is running 06/12/23 21:49:05.194 -Jun 12 21:49:05.195: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:05.213: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.064191ms -Jun 12 21:49:07.236: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041693917s -Jun 12 21:49:09.253: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058012166s -Jun 12 21:49:11.229: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9": Phase="Running", Reason="", readiness=true. Elapsed: 6.034451119s -Jun 12 21:49:11.229: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9" satisfied condition "running" -Jun 12 21:49:11.230: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-brm6b" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:11.244: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-brm6b": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.010996ms -Jun 12 21:49:11.244: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-brm6b" satisfied condition "running" -Jun 12 21:49:11.244: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-hfmbp" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:11.256: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-hfmbp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.702827ms -Jun 12 21:49:13.283: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-hfmbp": Phase="Running", Reason="", readiness=true. Elapsed: 2.039492684s -Jun 12 21:49:13.283: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-hfmbp" satisfied condition "running" -Jun 12 21:49:13.283: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-rzpxx" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:13.303: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-rzpxx": Phase="Running", Reason="", readiness=true. Elapsed: 19.59644ms -Jun 12 21:49:13.303: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-rzpxx" satisfied condition "running" -Jun 12 21:49:13.304: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-v6clk" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:13.326: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-v6clk": Phase="Running", Reason="", readiness=true. Elapsed: 22.048364ms -Jun 12 21:49:13.326: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-v6clk" satisfied condition "running" -STEP: deleting ReplicationController wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485 in namespace emptydir-wrapper-6593, will wait for the garbage collector to delete the pods 06/12/23 21:49:13.326 -Jun 12 21:49:13.462: INFO: Deleting ReplicationController wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485 took: 34.421225ms -Jun 12 21:49:13.665: INFO: Terminating ReplicationController wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485 pods took: 203.575523ms -STEP: Creating RC which spawns configmap-volume pods 06/12/23 21:49:20.686 -Jun 12 21:49:20.732: INFO: Pod name wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8: Found 0 pods out of 5 -Jun 12 21:49:25.767: INFO: Pod name wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8: Found 5 pods out of 5 -STEP: Ensuring each pod is running 06/12/23 21:49:25.767 -Jun 12 21:49:25.768: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-7xz6n" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:25.781: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-7xz6n": Phase="Pending", Reason="", readiness=false. Elapsed: 13.494115ms -Jun 12 21:49:27.800: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-7xz6n": Phase="Running", Reason="", readiness=true. Elapsed: 2.032004245s -Jun 12 21:49:27.800: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-7xz6n" satisfied condition "running" -Jun 12 21:49:27.800: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-k52g8" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:27.841: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-k52g8": Phase="Running", Reason="", readiness=true. 
Elapsed: 40.867913ms -Jun 12 21:49:27.841: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-k52g8" satisfied condition "running" -Jun 12 21:49:27.841: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-s9hkw" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:27.855: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-s9hkw": Phase="Running", Reason="", readiness=true. Elapsed: 13.378926ms -Jun 12 21:49:27.855: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-s9hkw" satisfied condition "running" -Jun 12 21:49:27.855: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-vt5jg" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:27.877: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-vt5jg": Phase="Running", Reason="", readiness=true. Elapsed: 21.196979ms -Jun 12 21:49:27.877: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-vt5jg" satisfied condition "running" -Jun 12 21:49:27.877: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-xqlw6" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:27.898: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-xqlw6": Phase="Running", Reason="", readiness=true. Elapsed: 20.908866ms -Jun 12 21:49:27.898: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-xqlw6" satisfied condition "running" -STEP: deleting ReplicationController wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8 in namespace emptydir-wrapper-6593, will wait for the garbage collector to delete the pods 06/12/23 21:49:27.898 -Jun 12 21:49:28.011: INFO: Deleting ReplicationController wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8 took: 19.753096ms -Jun 12 21:49:28.212: INFO: Terminating ReplicationController wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8 pods took: 200.917897ms -STEP: Creating RC which spawns configmap-volume pods 06/12/23 21:49:33.961 -Jun 12 21:49:34.032: INFO: Pod name wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590: Found 0 pods out of 5 -Jun 12 21:49:39.110: INFO: Pod name wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590: Found 5 pods out of 5 -STEP: Ensuring each pod is running 06/12/23 21:49:39.11 -Jun 12 21:49:39.111: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-ghm9h" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:39.126: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-ghm9h": Phase="Pending", Reason="", readiness=false. Elapsed: 15.348647ms -Jun 12 21:49:41.184: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-ghm9h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073160251s -Jun 12 21:49:43.168: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-ghm9h": Phase="Running", Reason="", readiness=true. Elapsed: 4.056829322s -Jun 12 21:49:43.168: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-ghm9h" satisfied condition "running" -Jun 12 21:49:43.168: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-m9cs2" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:43.183: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-m9cs2": Phase="Running", Reason="", readiness=true. 
Elapsed: 15.326292ms -Jun 12 21:49:43.183: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-m9cs2" satisfied condition "running" -Jun 12 21:49:43.183: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-njlh5" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:43.200: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-njlh5": Phase="Running", Reason="", readiness=true. Elapsed: 16.539071ms -Jun 12 21:49:43.200: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-njlh5" satisfied condition "running" -Jun 12 21:49:43.200: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-rrppk" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:43.247: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-rrppk": Phase="Running", Reason="", readiness=true. Elapsed: 46.716405ms -Jun 12 21:49:43.247: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-rrppk" satisfied condition "running" -Jun 12 21:49:43.247: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-z2n8w" in namespace "emptydir-wrapper-6593" to be "running" -Jun 12 21:49:43.287: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-z2n8w": Phase="Running", Reason="", readiness=true. Elapsed: 40.356183ms -Jun 12 21:49:43.287: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-z2n8w" satisfied condition "running" -STEP: deleting ReplicationController wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590 in namespace emptydir-wrapper-6593, will wait for the garbage collector to delete the pods 06/12/23 21:49:43.287 -Jun 12 21:49:43.397: INFO: Deleting ReplicationController wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590 took: 17.161978ms -Jun 12 21:49:43.498: INFO: Terminating ReplicationController wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590 pods took: 101.371047ms -STEP: Cleaning up the configMaps 06/12/23 21:49:48.699 -[AfterEach] [sig-storage] EmptyDir wrapper volumes +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 +Jul 27 02:21:27.348: INFO: Waiting up to 1m0s for all nodes to be ready +Jul 27 02:22:27.566: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:224 +STEP: Create pods that use 4/5 of node resources. 07/27/23 02:22:27.592 +Jul 27 02:22:27.644: INFO: Created pod: pod0-0-sched-preemption-low-priority +Jul 27 02:22:27.662: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Jul 27 02:22:27.737: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Jul 27 02:22:27.758: INFO: Created pod: pod1-1-sched-preemption-medium-priority +Jul 27 02:22:27.815: INFO: Created pod: pod2-0-sched-preemption-medium-priority +Jul 27 02:22:27.835: INFO: Created pod: pod2-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 07/27/23 02:22:27.835 +Jul 27 02:22:27.835: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-9139" to be "running" +Jul 27 02:22:27.843: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 7.910158ms +Jul 27 02:22:29.863: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028098452s +Jul 27 02:22:31.852: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.017465882s +Jul 27 02:22:31.853: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" +Jul 27 02:22:31.853: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-9139" to be "running" +Jul 27 02:22:31.861: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.493209ms +Jul 27 02:22:31.861: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" +Jul 27 02:22:31.861: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-9139" to be "running" +Jul 27 02:22:31.869: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.353579ms +Jul 27 02:22:31.869: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" +Jul 27 02:22:31.869: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-9139" to be "running" +Jul 27 02:22:31.879: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 9.424903ms +Jul 27 02:22:31.879: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" +Jul 27 02:22:31.879: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-9139" to be "running" +Jul 27 02:22:31.886: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 7.409441ms +Jul 27 02:22:31.886: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" +Jul 27 02:22:31.886: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-9139" to be "running" +Jul 27 02:22:31.895: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.297661ms +Jul 27 02:22:31.895: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" +STEP: Run a critical pod that use same resources as that of a lower priority pod 07/27/23 02:22:31.895 +Jul 27 02:22:31.925: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" +Jul 27 02:22:31.936: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.494915ms +Jul 27 02:22:33.956: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0311638s +Jul 27 02:22:35.947: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02232123s +Jul 27 02:22:37.953: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.027919471s +Jul 27 02:22:37.953: INFO: Pod "critical-pod" satisfied condition "running" +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 21:49:49.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes +Jul 27 02:22:38.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-wrapper-6593" for this suite. 06/12/23 21:49:49.823 +STEP: Destroying namespace "sched-preemption-9139" for this suite. 07/27/23 02:22:38.294 ------------------------------ -• [SLOW TEST] [51.334 seconds] -[sig-storage] EmptyDir wrapper volumes -test/e2e/storage/utils/framework.go:23 - should not cause race condition when used for configmaps [Serial] [Conformance] - test/e2e/storage/empty_dir_wrapper.go:189 +• [SLOW TEST] [71.140 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:224 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir wrapper volumes + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:48:58.516 - Jun 12 21:48:58.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir-wrapper 06/12/23 21:48:58.518 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:48:58.654 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:48:58.68 - [BeforeEach] [sig-storage] EmptyDir wrapper volumes + STEP: Creating a kubernetes client 07/27/23 02:21:27.2 + Jul 27 02:21:27.200: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sched-preemption 07/27/23 02:21:27.201 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:21:27.245 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:21:27.254 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] should not cause race condition when used for configmaps [Serial] [Conformance] - test/e2e/storage/empty_dir_wrapper.go:189 - STEP: Creating 50 configmaps 06/12/23 21:48:58.714 - STEP: Creating RC which spawns configmap-volume pods 06/12/23 21:49:00.091 - Jun 12 21:49:00.143: INFO: Pod name wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485: Found 0 pods out of 5 - Jun 12 21:49:05.194: INFO: Pod name wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485: Found 5 pods out of 5 - STEP: Ensuring each pod is running 06/12/23 21:49:05.194 - Jun 12 21:49:05.195: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 
21:49:05.213: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.064191ms - Jun 12 21:49:07.236: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041693917s - Jun 12 21:49:09.253: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058012166s - Jun 12 21:49:11.229: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9": Phase="Running", Reason="", readiness=true. Elapsed: 6.034451119s - Jun 12 21:49:11.229: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-2hjt9" satisfied condition "running" - Jun 12 21:49:11.230: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-brm6b" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:11.244: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-brm6b": Phase="Running", Reason="", readiness=true. Elapsed: 14.010996ms - Jun 12 21:49:11.244: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-brm6b" satisfied condition "running" - Jun 12 21:49:11.244: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-hfmbp" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:11.256: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-hfmbp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.702827ms - Jun 12 21:49:13.283: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-hfmbp": Phase="Running", Reason="", readiness=true. Elapsed: 2.039492684s - Jun 12 21:49:13.283: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-hfmbp" satisfied condition "running" - Jun 12 21:49:13.283: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-rzpxx" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:13.303: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-rzpxx": Phase="Running", Reason="", readiness=true. Elapsed: 19.59644ms - Jun 12 21:49:13.303: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-rzpxx" satisfied condition "running" - Jun 12 21:49:13.304: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-v6clk" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:13.326: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-v6clk": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.048364ms - Jun 12 21:49:13.326: INFO: Pod "wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485-v6clk" satisfied condition "running" - STEP: deleting ReplicationController wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485 in namespace emptydir-wrapper-6593, will wait for the garbage collector to delete the pods 06/12/23 21:49:13.326 - Jun 12 21:49:13.462: INFO: Deleting ReplicationController wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485 took: 34.421225ms - Jun 12 21:49:13.665: INFO: Terminating ReplicationController wrapped-volume-race-f6e9d88c-0bea-44b5-b6f8-9056ec357485 pods took: 203.575523ms - STEP: Creating RC which spawns configmap-volume pods 06/12/23 21:49:20.686 - Jun 12 21:49:20.732: INFO: Pod name wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8: Found 0 pods out of 5 - Jun 12 21:49:25.767: INFO: Pod name wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8: Found 5 pods out of 5 - STEP: Ensuring each pod is running 06/12/23 21:49:25.767 - Jun 12 21:49:25.768: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-7xz6n" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:25.781: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-7xz6n": Phase="Pending", Reason="", readiness=false. Elapsed: 13.494115ms - Jun 12 21:49:27.800: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-7xz6n": Phase="Running", Reason="", readiness=true. Elapsed: 2.032004245s - Jun 12 21:49:27.800: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-7xz6n" satisfied condition "running" - Jun 12 21:49:27.800: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-k52g8" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:27.841: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-k52g8": Phase="Running", Reason="", readiness=true. Elapsed: 40.867913ms - Jun 12 21:49:27.841: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-k52g8" satisfied condition "running" - Jun 12 21:49:27.841: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-s9hkw" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:27.855: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-s9hkw": Phase="Running", Reason="", readiness=true. Elapsed: 13.378926ms - Jun 12 21:49:27.855: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-s9hkw" satisfied condition "running" - Jun 12 21:49:27.855: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-vt5jg" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:27.877: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-vt5jg": Phase="Running", Reason="", readiness=true. Elapsed: 21.196979ms - Jun 12 21:49:27.877: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-vt5jg" satisfied condition "running" - Jun 12 21:49:27.877: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-xqlw6" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:27.898: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-xqlw6": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.908866ms - Jun 12 21:49:27.898: INFO: Pod "wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8-xqlw6" satisfied condition "running" - STEP: deleting ReplicationController wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8 in namespace emptydir-wrapper-6593, will wait for the garbage collector to delete the pods 06/12/23 21:49:27.898 - Jun 12 21:49:28.011: INFO: Deleting ReplicationController wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8 took: 19.753096ms - Jun 12 21:49:28.212: INFO: Terminating ReplicationController wrapped-volume-race-505b640d-3846-4646-ba53-42a2a3eefeb8 pods took: 200.917897ms - STEP: Creating RC which spawns configmap-volume pods 06/12/23 21:49:33.961 - Jun 12 21:49:34.032: INFO: Pod name wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590: Found 0 pods out of 5 - Jun 12 21:49:39.110: INFO: Pod name wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590: Found 5 pods out of 5 - STEP: Ensuring each pod is running 06/12/23 21:49:39.11 - Jun 12 21:49:39.111: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-ghm9h" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:39.126: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-ghm9h": Phase="Pending", Reason="", readiness=false. Elapsed: 15.348647ms - Jun 12 21:49:41.184: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-ghm9h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073160251s - Jun 12 21:49:43.168: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-ghm9h": Phase="Running", Reason="", readiness=true. Elapsed: 4.056829322s - Jun 12 21:49:43.168: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-ghm9h" satisfied condition "running" - Jun 12 21:49:43.168: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-m9cs2" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:43.183: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-m9cs2": Phase="Running", Reason="", readiness=true. Elapsed: 15.326292ms - Jun 12 21:49:43.183: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-m9cs2" satisfied condition "running" - Jun 12 21:49:43.183: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-njlh5" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:43.200: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-njlh5": Phase="Running", Reason="", readiness=true. Elapsed: 16.539071ms - Jun 12 21:49:43.200: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-njlh5" satisfied condition "running" - Jun 12 21:49:43.200: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-rrppk" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:43.247: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-rrppk": Phase="Running", Reason="", readiness=true. Elapsed: 46.716405ms - Jun 12 21:49:43.247: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-rrppk" satisfied condition "running" - Jun 12 21:49:43.247: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-z2n8w" in namespace "emptydir-wrapper-6593" to be "running" - Jun 12 21:49:43.287: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-z2n8w": Phase="Running", Reason="", readiness=true. 
Elapsed: 40.356183ms - Jun 12 21:49:43.287: INFO: Pod "wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590-z2n8w" satisfied condition "running" - STEP: deleting ReplicationController wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590 in namespace emptydir-wrapper-6593, will wait for the garbage collector to delete the pods 06/12/23 21:49:43.287 - Jun 12 21:49:43.397: INFO: Deleting ReplicationController wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590 took: 17.161978ms - Jun 12 21:49:43.498: INFO: Terminating ReplicationController wrapped-volume-race-2d05d659-90ca-4346-b6d5-d5e5da59c590 pods took: 101.371047ms - STEP: Cleaning up the configMaps 06/12/23 21:49:48.699 - [AfterEach] [sig-storage] EmptyDir wrapper volumes + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 + Jul 27 02:21:27.348: INFO: Waiting up to 1m0s for all nodes to be ready + Jul 27 02:22:27.566: INFO: Waiting for terminating namespaces to be deleted... + [It] validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:224 + STEP: Create pods that use 4/5 of node resources. 07/27/23 02:22:27.592 + Jul 27 02:22:27.644: INFO: Created pod: pod0-0-sched-preemption-low-priority + Jul 27 02:22:27.662: INFO: Created pod: pod0-1-sched-preemption-medium-priority + Jul 27 02:22:27.737: INFO: Created pod: pod1-0-sched-preemption-medium-priority + Jul 27 02:22:27.758: INFO: Created pod: pod1-1-sched-preemption-medium-priority + Jul 27 02:22:27.815: INFO: Created pod: pod2-0-sched-preemption-medium-priority + Jul 27 02:22:27.835: INFO: Created pod: pod2-1-sched-preemption-medium-priority + STEP: Wait for pods to be scheduled. 07/27/23 02:22:27.835 + Jul 27 02:22:27.835: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-9139" to be "running" + Jul 27 02:22:27.843: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 7.910158ms + Jul 27 02:22:29.863: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028098452s + Jul 27 02:22:31.852: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.017465882s + Jul 27 02:22:31.853: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" + Jul 27 02:22:31.853: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-9139" to be "running" + Jul 27 02:22:31.861: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.493209ms + Jul 27 02:22:31.861: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" + Jul 27 02:22:31.861: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-9139" to be "running" + Jul 27 02:22:31.869: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.353579ms + Jul 27 02:22:31.869: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" + Jul 27 02:22:31.869: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-9139" to be "running" + Jul 27 02:22:31.879: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 9.424903ms + Jul 27 02:22:31.879: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" + Jul 27 02:22:31.879: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-9139" to be "running" + Jul 27 02:22:31.886: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 7.409441ms + Jul 27 02:22:31.886: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" + Jul 27 02:22:31.886: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-9139" to be "running" + Jul 27 02:22:31.895: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.297661ms + Jul 27 02:22:31.895: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" + STEP: Run a critical pod that use same resources as that of a lower priority pod 07/27/23 02:22:31.895 + Jul 27 02:22:31.925: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" + Jul 27 02:22:31.936: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.494915ms + Jul 27 02:22:33.956: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0311638s + Jul 27 02:22:35.947: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02232123s + Jul 27 02:22:37.953: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.027919471s + Jul 27 02:22:37.953: INFO: Pod "critical-pod" satisfied condition "running" + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 21:49:49.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + Jul 27 02:22:38.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-wrapper-6593" for this suite. 06/12/23 21:49:49.823 + STEP: Destroying namespace "sched-preemption-9139" for this suite. 
07/27/23 02:22:38.294 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Probing container - should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:215 -[BeforeEach] [sig-node] Probing container +[sig-node] Containers + should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 +[BeforeEach] [sig-node] Containers set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:49:49.855 -Jun 12 21:49:49.855: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-probe 06/12/23 21:49:49.858 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:49:49.948 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:49:49.972 -[BeforeEach] [sig-node] Probing container +STEP: Creating a kubernetes client 07/27/23 02:22:38.342 +Jul 27 02:22:38.342: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename containers 07/27/23 02:22:38.343 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:22:38.462 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:22:38.473 +[BeforeEach] [sig-node] Containers test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 -[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:215 -STEP: Creating pod test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c in namespace container-probe-1674 06/12/23 21:49:49.986 -Jun 12 21:49:50.024: INFO: Waiting up to 5m0s for pod "test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c" in namespace "container-probe-1674" to be "not pending" -Jun 12 21:49:50.035: INFO: Pod "test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.102027ms -Jun 12 21:49:52.049: INFO: Pod "test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025205099s -Jun 12 21:49:54.075: INFO: Pod "test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.051293481s -Jun 12 21:49:54.076: INFO: Pod "test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c" satisfied condition "not pending" -Jun 12 21:49:54.076: INFO: Started pod test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c in namespace container-probe-1674 -STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 21:49:54.076 -Jun 12 21:49:54.091: INFO: Initial restart count of pod test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c is 0 -STEP: deleting the pod 06/12/23 21:53:54.402 -[AfterEach] [sig-node] Probing container +[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 +STEP: Creating a pod to test override command 07/27/23 02:22:38.491 +Jul 27 02:22:38.520: INFO: Waiting up to 5m0s for pod "client-containers-0ba00646-5991-48bb-8123-8025acbbacf5" in namespace "containers-1105" to be "Succeeded or Failed" +Jul 27 02:22:38.542: INFO: Pod "client-containers-0ba00646-5991-48bb-8123-8025acbbacf5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119527ms +Jul 27 02:22:40.552: INFO: Pod "client-containers-0ba00646-5991-48bb-8123-8025acbbacf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032088742s +Jul 27 02:22:42.552: INFO: Pod "client-containers-0ba00646-5991-48bb-8123-8025acbbacf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031897947s +STEP: Saw pod success 07/27/23 02:22:42.552 +Jul 27 02:22:42.552: INFO: Pod "client-containers-0ba00646-5991-48bb-8123-8025acbbacf5" satisfied condition "Succeeded or Failed" +Jul 27 02:22:42.559: INFO: Trying to get logs from node 10.245.128.19 pod client-containers-0ba00646-5991-48bb-8123-8025acbbacf5 container agnhost-container: +STEP: delete the pod 07/27/23 02:22:42.577 +Jul 27 02:22:42.596: INFO: Waiting for pod client-containers-0ba00646-5991-48bb-8123-8025acbbacf5 to disappear +Jul 27 02:22:42.605: INFO: Pod client-containers-0ba00646-5991-48bb-8123-8025acbbacf5 no longer exists +[AfterEach] [sig-node] Containers test/e2e/framework/node/init/init.go:32 -Jun 12 21:53:54.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Probing container +Jul 27 02:22:42.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-node] Containers dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-node] Containers tear down framework | framework.go:193 -STEP: Destroying namespace "container-probe-1674" for this suite. 06/12/23 21:53:54.463 +STEP: Destroying namespace "containers-1105" for this suite. 
07/27/23 02:22:42.616 ------------------------------ -• [SLOW TEST] [244.630 seconds] -[sig-node] Probing container +• [4.298 seconds] +[sig-node] Containers test/e2e/common/node/framework.go:23 - should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:215 + should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Probing container + [BeforeEach] [sig-node] Containers set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:49:49.855 - Jun 12 21:49:49.855: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-probe 06/12/23 21:49:49.858 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:49:49.948 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:49:49.972 - [BeforeEach] [sig-node] Probing container + STEP: Creating a kubernetes client 07/27/23 02:22:38.342 + Jul 27 02:22:38.342: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename containers 07/27/23 02:22:38.343 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:22:38.462 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:22:38.473 + [BeforeEach] [sig-node] Containers test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 - [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:215 - STEP: Creating pod test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c in namespace container-probe-1674 06/12/23 21:49:49.986 - Jun 12 21:49:50.024: INFO: Waiting up to 5m0s for pod "test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c" in namespace "container-probe-1674" to be "not pending" - Jun 12 21:49:50.035: INFO: Pod "test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.102027ms - Jun 12 21:49:52.049: INFO: Pod "test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025205099s - Jun 12 21:49:54.075: INFO: Pod "test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.051293481s - Jun 12 21:49:54.076: INFO: Pod "test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c" satisfied condition "not pending" - Jun 12 21:49:54.076: INFO: Started pod test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c in namespace container-probe-1674 - STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 21:49:54.076 - Jun 12 21:49:54.091: INFO: Initial restart count of pod test-webserver-c0b3d239-b3bd-4fc6-b475-f05504b3a10c is 0 - STEP: deleting the pod 06/12/23 21:53:54.402 - [AfterEach] [sig-node] Probing container + [It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 + STEP: Creating a pod to test override command 07/27/23 02:22:38.491 + Jul 27 02:22:38.520: INFO: Waiting up to 5m0s for pod "client-containers-0ba00646-5991-48bb-8123-8025acbbacf5" in namespace "containers-1105" to be "Succeeded or Failed" + Jul 27 02:22:38.542: INFO: Pod "client-containers-0ba00646-5991-48bb-8123-8025acbbacf5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119527ms + Jul 27 02:22:40.552: INFO: Pod "client-containers-0ba00646-5991-48bb-8123-8025acbbacf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032088742s + Jul 27 02:22:42.552: INFO: Pod "client-containers-0ba00646-5991-48bb-8123-8025acbbacf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031897947s + STEP: Saw pod success 07/27/23 02:22:42.552 + Jul 27 02:22:42.552: INFO: Pod "client-containers-0ba00646-5991-48bb-8123-8025acbbacf5" satisfied condition "Succeeded or Failed" + Jul 27 02:22:42.559: INFO: Trying to get logs from node 10.245.128.19 pod client-containers-0ba00646-5991-48bb-8123-8025acbbacf5 container agnhost-container: + STEP: delete the pod 07/27/23 02:22:42.577 + Jul 27 02:22:42.596: INFO: Waiting for pod client-containers-0ba00646-5991-48bb-8123-8025acbbacf5 to disappear + Jul 27 02:22:42.605: INFO: Pod client-containers-0ba00646-5991-48bb-8123-8025acbbacf5 no longer exists + [AfterEach] [sig-node] Containers test/e2e/framework/node/init/init.go:32 - Jun 12 21:53:54.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Probing container + Jul 27 02:22:42.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-node] Containers dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-node] Containers tear down framework | framework.go:193 - STEP: Destroying namespace "container-probe-1674" for this suite. 06/12/23 21:53:54.463 + STEP: Destroying namespace "containers-1105" for this suite. 
07/27/23 02:22:42.616 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSS ------------------------------ -[sig-cli] Kubectl client Kubectl diff - should check if kubectl diff finds a difference for Deployments [Conformance] - test/e2e/kubectl/kubectl.go:931 -[BeforeEach] [sig-cli] Kubectl client +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:53:54.512 -Jun 12 21:53:54.513: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 21:53:54.516 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:53:54.604 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:53:54.657 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:22:42.64 +Jul 27 02:22:42.640: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 02:22:42.641 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:22:42.7 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:22:42.71 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should check if kubectl diff finds a difference for Deployments [Conformance] - test/e2e/kubectl/kubectl.go:931 -STEP: create deployment with httpd image 06/12/23 21:53:54.7 -Jun 12 21:53:54.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-7347 create -f -' -Jun 12 21:53:56.067: INFO: stderr: "" -Jun 12 21:53:56.068: INFO: stdout: "deployment.apps/httpd-deployment created\n" -STEP: verify diff finds difference between live and declared image 06/12/23 21:53:56.068 -Jun 12 21:53:56.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-7347 diff -f -' -Jun 12 21:53:57.062: INFO: rc: 1 -Jun 12 21:53:57.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-7347 delete -f -' -Jun 12 21:53:57.375: INFO: stderr: "" -Jun 12 21:53:57.375: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" -[AfterEach] [sig-cli] Kubectl client +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 02:22:42.782 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:22:43.168 +STEP: Deploying the webhook pod 07/27/23 02:22:43.193 +STEP: Wait for the deployment to be ready 07/27/23 02:22:43.226 +Jul 27 02:22:43.242: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 02:22:45.271 +STEP: Verifying the service has paired with the endpoint 07/27/23 02:22:45.306 +Jul 27 02:22:46.307: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 +STEP: Listing all of the created validation webhooks 07/27/23 02:22:46.485 +STEP: Creating 
a configMap that should be mutated 07/27/23 02:22:46.525 +STEP: Deleting the collection of validation webhooks 07/27/23 02:22:46.641 +STEP: Creating a configMap that should not be mutated 07/27/23 02:22:46.785 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:53:57.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:22:46.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-7347" for this suite. 06/12/23 21:53:57.403 +STEP: Destroying namespace "webhook-6021" for this suite. 07/27/23 02:22:46.936 +STEP: Destroying namespace "webhook-6021-markers" for this suite. 07/27/23 02:22:46.963 ------------------------------ -• [2.917 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl diff - test/e2e/kubectl/kubectl.go:925 - should check if kubectl diff finds a difference for Deployments [Conformance] - test/e2e/kubectl/kubectl.go:931 +• [4.346 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:53:54.512 - Jun 12 21:53:54.513: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 21:53:54.516 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:53:54.604 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:53:54.657 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:22:42.64 + Jul 27 02:22:42.640: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 02:22:42.641 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:22:42.7 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:22:42.71 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should check if kubectl diff finds a difference for Deployments [Conformance] - test/e2e/kubectl/kubectl.go:931 - STEP: create deployment with httpd image 06/12/23 21:53:54.7 - Jun 12 21:53:54.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-7347 create -f -' - Jun 12 21:53:56.067: INFO: stderr: "" - Jun 12 21:53:56.068: INFO: stdout: "deployment.apps/httpd-deployment created\n" - STEP: verify diff finds 
difference between live and declared image 06/12/23 21:53:56.068 - Jun 12 21:53:56.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-7347 diff -f -' - Jun 12 21:53:57.062: INFO: rc: 1 - Jun 12 21:53:57.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-7347 delete -f -' - Jun 12 21:53:57.375: INFO: stderr: "" - Jun 12 21:53:57.375: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" - [AfterEach] [sig-cli] Kubectl client + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 02:22:42.782 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:22:43.168 + STEP: Deploying the webhook pod 07/27/23 02:22:43.193 + STEP: Wait for the deployment to be ready 07/27/23 02:22:43.226 + Jul 27 02:22:43.242: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 02:22:45.271 + STEP: Verifying the service has paired with the endpoint 07/27/23 02:22:45.306 + Jul 27 02:22:46.307: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 + STEP: Listing all of the created validation webhooks 07/27/23 02:22:46.485 + STEP: Creating a configMap that should be mutated 07/27/23 02:22:46.525 + STEP: Deleting the collection of validation webhooks 07/27/23 02:22:46.641 + STEP: Creating a configMap that should not be mutated 07/27/23 02:22:46.785 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:53:57.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:22:46.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-7347" for this suite. 06/12/23 21:53:57.403 + STEP: Destroying namespace "webhook-6021" for this suite. 07/27/23 02:22:46.936 + STEP: Destroying namespace "webhook-6021-markers" for this suite. 
07/27/23 02:22:46.963 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSS +SSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:22:46.987 +Jul 27 02:22:46.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replication-controller 07/27/23 02:22:46.988 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:22:47.026 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:22:47.035 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 +STEP: Creating replication controller my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96 07/27/23 02:22:47.044 +W0727 02:22:47.066556 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:22:47.073: INFO: Pod name my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96: Found 0 pods out of 1 +Jul 27 02:22:52.086: INFO: Pod name my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96: Found 1 pods out of 1 +Jul 27 02:22:52.086: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96" are running +Jul 27 02:22:52.086: INFO: Waiting up to 5m0s for pod "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q" in namespace "replication-controller-7687" to be "running" +Jul 27 02:22:52.094: INFO: Pod "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.310128ms +Jul 27 02:22:52.095: INFO: Pod "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q" satisfied condition "running" +Jul 27 02:22:52.095: INFO: Pod "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:22:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:22:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:22:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:22:47 +0000 UTC Reason: Message:}]) +Jul 27 02:22:52.095: INFO: Trying to dial the pod +Jul 27 02:22:57.158: INFO: Controller my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96: Got expected result from replica 1 [my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q]: "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Jul 27 02:22:57.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-7687" for this suite. 07/27/23 02:22:57.17 +------------------------------ +• [SLOW TEST] [10.296 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:22:46.987 + Jul 27 02:22:46.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replication-controller 07/27/23 02:22:46.988 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:22:47.026 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:22:47.035 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 + STEP: Creating replication controller my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96 07/27/23 02:22:47.044 + W0727 02:22:47.066556 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96" must set 
securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:22:47.073: INFO: Pod name my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96: Found 0 pods out of 1 + Jul 27 02:22:52.086: INFO: Pod name my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96: Found 1 pods out of 1 + Jul 27 02:22:52.086: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96" are running + Jul 27 02:22:52.086: INFO: Waiting up to 5m0s for pod "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q" in namespace "replication-controller-7687" to be "running" + Jul 27 02:22:52.094: INFO: Pod "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q": Phase="Running", Reason="", readiness=true. Elapsed: 8.310128ms + Jul 27 02:22:52.095: INFO: Pod "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q" satisfied condition "running" + Jul 27 02:22:52.095: INFO: Pod "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:22:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:22:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:22:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-27 02:22:47 +0000 UTC Reason: Message:}]) + Jul 27 02:22:52.095: INFO: Trying to dial the pod + Jul 27 02:22:57.158: INFO: Controller my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96: Got expected result from replica 1 [my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q]: "my-hostname-basic-dcfac022-f6fe-4375-b5bc-36d89d607d96-thg2q", 1 of 1 required successes so far + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Jul 27 02:22:57.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-7687" for this suite. 
07/27/23 02:22:57.17 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] DisruptionController - should create a PodDisruptionBudget [Conformance] - test/e2e/apps/disruption.go:108 + should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 [BeforeEach] [sig-apps] DisruptionController set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:53:57.44 -Jun 12 21:53:57.440: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename disruption 06/12/23 21:53:57.443 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:53:57.599 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:53:57.642 +STEP: Creating a kubernetes client 07/27/23 02:22:57.284 +Jul 27 02:22:57.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename disruption 07/27/23 02:22:57.285 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:22:57.356 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:22:57.365 [BeforeEach] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] DisruptionController test/e2e/apps/disruption.go:72 -[It] should create a PodDisruptionBudget [Conformance] - test/e2e/apps/disruption.go:108 -STEP: creating the pdb 06/12/23 21:53:57.7 -STEP: Waiting for the pdb to be processed 06/12/23 21:53:57.72 -STEP: updating the pdb 06/12/23 21:53:59.742 -STEP: Waiting for the pdb to be processed 06/12/23 21:53:59.779 -STEP: patching the pdb 06/12/23 21:53:59.789 -STEP: Waiting for the pdb to be processed 06/12/23 21:53:59.841 -STEP: Waiting for the pdb to be deleted 06/12/23 21:54:01.877 +[It] should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 +STEP: Waiting for the pdb to be processed 07/27/23 02:22:57.393 +STEP: Waiting for all pods to be running 07/27/23 02:22:59.495 +Jul 27 02:22:59.506: INFO: running pods: 0 < 3 +Jul 27 02:23:01.516: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController test/e2e/framework/node/init/init.go:32 -Jun 12 21:54:01.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:23:03.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] DisruptionController dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] DisruptionController tear down framework | framework.go:193 -STEP: Destroying namespace "disruption-3969" for this suite. 06/12/23 21:54:01.908 +STEP: Destroying namespace "disruption-4565" for this suite. 
07/27/23 02:23:03.602 ------------------------------ -• [4.493 seconds] +• [SLOW TEST] [6.339 seconds] [sig-apps] DisruptionController test/e2e/apps/framework.go:23 - should create a PodDisruptionBudget [Conformance] - test/e2e/apps/disruption.go:108 + should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] DisruptionController set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:53:57.44 - Jun 12 21:53:57.440: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename disruption 06/12/23 21:53:57.443 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:53:57.599 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:53:57.642 + STEP: Creating a kubernetes client 07/27/23 02:22:57.284 + Jul 27 02:22:57.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename disruption 07/27/23 02:22:57.285 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:22:57.356 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:22:57.365 [BeforeEach] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] DisruptionController test/e2e/apps/disruption.go:72 - [It] should create a PodDisruptionBudget [Conformance] - test/e2e/apps/disruption.go:108 - STEP: creating the pdb 06/12/23 21:53:57.7 - STEP: Waiting for the pdb to be processed 06/12/23 21:53:57.72 - STEP: updating the pdb 06/12/23 21:53:59.742 - STEP: Waiting for the pdb to be processed 06/12/23 21:53:59.779 - STEP: patching the pdb 06/12/23 21:53:59.789 - STEP: Waiting for the pdb to be processed 06/12/23 21:53:59.841 - STEP: Waiting for the pdb to be deleted 06/12/23 21:54:01.877 + [It] should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 + STEP: Waiting for the pdb to be processed 07/27/23 02:22:57.393 + STEP: Waiting for all pods to be running 07/27/23 02:22:59.495 + Jul 27 02:22:59.506: INFO: running pods: 0 < 3 + Jul 27 02:23:01.516: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController test/e2e/framework/node/init/init.go:32 - Jun 12 21:54:01.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:23:03.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] DisruptionController dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] DisruptionController tear down framework | framework.go:193 - STEP: Destroying namespace "disruption-3969" for this suite. 06/12/23 21:54:01.908 + STEP: Destroying namespace "disruption-4565" for this suite. 07/27/23 02:23:03.602 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] ResourceQuota - should create a ResourceQuota and capture the life of a configMap. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:326 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 +[BeforeEach] [sig-network] EndpointSlice set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:54:01.938 -Jun 12 21:54:01.938: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 21:54:01.942 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:54:02.009 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:54:02.021 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 02:23:03.625 +Jul 27 02:23:03.625: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename endpointslice 07/27/23 02:23:03.625 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:03.665 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:03.674 +[BeforeEach] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:31 -[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] - test/e2e/apimachinery/resource_quota.go:326 -STEP: Counting existing ResourceQuota 06/12/23 21:54:19.05 -STEP: Creating a ResourceQuota 06/12/23 21:54:24.089 -STEP: Ensuring resource quota status is calculated 06/12/23 21:54:24.132 -STEP: Creating a ConfigMap 06/12/23 21:54:26.163 -STEP: Ensuring resource quota status captures configMap creation 06/12/23 21:54:26.197 -STEP: Deleting a ConfigMap 06/12/23 21:54:28.229 -STEP: Ensuring resource quota status released usage 06/12/23 21:54:28.249 -[AfterEach] [sig-api-machinery] ResourceQuota +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 +STEP: getting /apis 07/27/23 02:23:03.683 +STEP: getting /apis/discovery.k8s.io 07/27/23 02:23:03.693 +STEP: getting /apis/discovery.k8s.iov1 07/27/23 02:23:03.697 +STEP: creating 07/27/23 02:23:03.702 +STEP: getting 07/27/23 02:23:03.743 +STEP: listing 07/27/23 02:23:03.756 +STEP: watching 07/27/23 02:23:03.766 +Jul 27 02:23:03.766: INFO: starting watch +STEP: cluster-wide listing 07/27/23 02:23:03.77 +STEP: cluster-wide watching 07/27/23 02:23:03.785 +Jul 27 02:23:03.785: INFO: starting watch +STEP: patching 07/27/23 02:23:03.789 +STEP: updating 07/27/23 02:23:03.806 +Jul 27 02:23:03.830: INFO: waiting for watch events with expected annotations +Jul 27 02:23:03.830: INFO: saw patched and updated annotations +STEP: deleting 07/27/23 02:23:03.831 +STEP: deleting a collection 07/27/23 02:23:03.872 +[AfterEach] [sig-network] EndpointSlice test/e2e/framework/node/init/init.go:32 -Jun 12 21:54:30.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 02:23:03.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-network] EndpointSlice dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-network] EndpointSlice tear down 
framework | framework.go:193 -STEP: Destroying namespace "resourcequota-3649" for this suite. 06/12/23 21:54:30.28 +STEP: Destroying namespace "endpointslice-4933" for this suite. 07/27/23 02:23:03.932 ------------------------------ -• [SLOW TEST] [28.364 seconds] -[sig-api-machinery] ResourceQuota -test/e2e/apimachinery/framework.go:23 - should create a ResourceQuota and capture the life of a configMap. [Conformance] - test/e2e/apimachinery/resource_quota.go:326 +• [0.331 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-network] EndpointSlice set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:54:01.938 - Jun 12 21:54:01.938: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 21:54:01.942 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:54:02.009 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:54:02.021 - [BeforeEach] [sig-api-machinery] ResourceQuota + STEP: Creating a kubernetes client 07/27/23 02:23:03.625 + Jul 27 02:23:03.625: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename endpointslice 07/27/23 02:23:03.625 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:03.665 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:03.674 + [BeforeEach] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:31 - [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:326 - STEP: Counting existing ResourceQuota 06/12/23 21:54:19.05 - STEP: Creating a ResourceQuota 06/12/23 21:54:24.089 - STEP: Ensuring resource quota status is calculated 06/12/23 21:54:24.132 - STEP: Creating a ConfigMap 06/12/23 21:54:26.163 - STEP: Ensuring resource quota status captures configMap creation 06/12/23 21:54:26.197 - STEP: Deleting a ConfigMap 06/12/23 21:54:28.229 - STEP: Ensuring resource quota status released usage 06/12/23 21:54:28.249 - [AfterEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 + STEP: getting /apis 07/27/23 02:23:03.683 + STEP: getting /apis/discovery.k8s.io 07/27/23 02:23:03.693 + STEP: getting /apis/discovery.k8s.iov1 07/27/23 02:23:03.697 + STEP: creating 07/27/23 02:23:03.702 + STEP: getting 07/27/23 02:23:03.743 + STEP: listing 07/27/23 02:23:03.756 + STEP: watching 07/27/23 02:23:03.766 + Jul 27 02:23:03.766: INFO: starting watch + STEP: cluster-wide listing 07/27/23 02:23:03.77 + STEP: cluster-wide watching 07/27/23 02:23:03.785 + Jul 27 02:23:03.785: INFO: starting watch + STEP: patching 07/27/23 02:23:03.789 + STEP: updating 07/27/23 02:23:03.806 + Jul 27 02:23:03.830: INFO: waiting for watch events with expected annotations + Jul 27 02:23:03.830: INFO: saw patched and updated annotations + STEP: deleting 07/27/23 02:23:03.831 + STEP: deleting a collection 07/27/23 02:23:03.872 + [AfterEach] [sig-network] EndpointSlice test/e2e/framework/node/init/init.go:32 - Jun 12 21:54:30.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + Jul 27 02:23:03.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-network] EndpointSlice dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-network] EndpointSlice tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-3649" for this suite. 06/12/23 21:54:30.28 + STEP: Destroying namespace "endpointslice-4933" for this suite. 
07/27/23 02:23:03.932 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] ReplicationController - should test the lifecycle of a ReplicationController [Conformance] - test/e2e/apps/rc.go:110 -[BeforeEach] [sig-apps] ReplicationController +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:54:30.303 -Jun 12 21:54:30.303: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replication-controller 06/12/23 21:54:30.305 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:54:30.427 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:54:30.438 -[BeforeEach] [sig-apps] ReplicationController +STEP: Creating a kubernetes client 07/27/23 02:23:03.957 +Jul 27 02:23:03.957: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:23:03.958 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:04.008 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:04.018 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 -[It] should test the lifecycle of a ReplicationController [Conformance] - test/e2e/apps/rc.go:110 -STEP: creating a ReplicationController 06/12/23 21:54:30.494 -STEP: waiting for RC to be added 06/12/23 21:54:30.51 -STEP: waiting for available Replicas 06/12/23 21:54:30.511 -STEP: patching ReplicationController 06/12/23 21:54:32.823 -STEP: waiting for RC to be modified 06/12/23 21:54:32.845 -STEP: patching ReplicationController status 06/12/23 21:54:32.845 -STEP: waiting for RC to be modified 06/12/23 21:54:32.862 -STEP: waiting for available Replicas 06/12/23 21:54:32.862 -STEP: fetching ReplicationController status 06/12/23 21:54:32.875 -STEP: patching ReplicationController scale 06/12/23 21:54:32.888 -STEP: waiting for RC to be modified 06/12/23 21:54:32.905 -STEP: waiting for ReplicationController's scale to be the max amount 06/12/23 21:54:32.906 -STEP: fetching ReplicationController; ensuring that it's patched 06/12/23 21:54:36.532 -STEP: updating ReplicationController status 06/12/23 21:54:36.543 -STEP: waiting for RC to be modified 06/12/23 21:54:36.562 -STEP: listing all ReplicationControllers 06/12/23 21:54:36.562 -STEP: checking that ReplicationController has expected values 06/12/23 21:54:36.574 -STEP: deleting ReplicationControllers by collection 06/12/23 21:54:36.574 -STEP: waiting for ReplicationController to have a DELETED watchEvent 06/12/23 21:54:36.599 -[AfterEach] [sig-apps] ReplicationController +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 +STEP: Creating a pod to test emptydir 0777 on tmpfs 07/27/23 02:23:04.028 +Jul 27 02:23:05.057: INFO: Waiting up to 5m0s for pod "pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4" in namespace "emptydir-7827" to be "Succeeded or Failed" +Jul 27 02:23:05.071: INFO: Pod "pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.7824ms +Jul 27 02:23:07.083: INFO: Pod "pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025714157s +Jul 27 02:23:09.081: INFO: Pod "pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02360019s +STEP: Saw pod success 07/27/23 02:23:09.081 +Jul 27 02:23:09.081: INFO: Pod "pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4" satisfied condition "Succeeded or Failed" +Jul 27 02:23:09.094: INFO: Trying to get logs from node 10.245.128.19 pod pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4 container test-container: +STEP: delete the pod 07/27/23 02:23:09.116 +Jul 27 02:23:09.151: INFO: Waiting for pod pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4 to disappear +Jul 27 02:23:09.160: INFO: Pod pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:54:36.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicationController +Jul 27 02:23:09.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "replication-controller-1812" for this suite. 06/12/23 21:54:36.901 +STEP: Destroying namespace "emptydir-7827" for this suite. 07/27/23 02:23:09.174 ------------------------------ -• [SLOW TEST] [6.658 seconds] -[sig-apps] ReplicationController -test/e2e/apps/framework.go:23 - should test the lifecycle of a ReplicationController [Conformance] - test/e2e/apps/rc.go:110 +• [SLOW TEST] [5.239 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicationController + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:54:30.303 - Jun 12 21:54:30.303: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replication-controller 06/12/23 21:54:30.305 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:54:30.427 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:54:30.438 - [BeforeEach] [sig-apps] ReplicationController + STEP: Creating a kubernetes client 07/27/23 02:23:03.957 + Jul 27 02:23:03.957: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:23:03.958 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:04.008 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:04.018 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 - [It] should test the lifecycle of a ReplicationController [Conformance] - test/e2e/apps/rc.go:110 - STEP: creating a ReplicationController 06/12/23 21:54:30.494 - STEP: waiting for RC to be added 
06/12/23 21:54:30.51 - STEP: waiting for available Replicas 06/12/23 21:54:30.511 - STEP: patching ReplicationController 06/12/23 21:54:32.823 - STEP: waiting for RC to be modified 06/12/23 21:54:32.845 - STEP: patching ReplicationController status 06/12/23 21:54:32.845 - STEP: waiting for RC to be modified 06/12/23 21:54:32.862 - STEP: waiting for available Replicas 06/12/23 21:54:32.862 - STEP: fetching ReplicationController status 06/12/23 21:54:32.875 - STEP: patching ReplicationController scale 06/12/23 21:54:32.888 - STEP: waiting for RC to be modified 06/12/23 21:54:32.905 - STEP: waiting for ReplicationController's scale to be the max amount 06/12/23 21:54:32.906 - STEP: fetching ReplicationController; ensuring that it's patched 06/12/23 21:54:36.532 - STEP: updating ReplicationController status 06/12/23 21:54:36.543 - STEP: waiting for RC to be modified 06/12/23 21:54:36.562 - STEP: listing all ReplicationControllers 06/12/23 21:54:36.562 - STEP: checking that ReplicationController has expected values 06/12/23 21:54:36.574 - STEP: deleting ReplicationControllers by collection 06/12/23 21:54:36.574 - STEP: waiting for ReplicationController to have a DELETED watchEvent 06/12/23 21:54:36.599 - [AfterEach] [sig-apps] ReplicationController + [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 + STEP: Creating a pod to test emptydir 0777 on tmpfs 07/27/23 02:23:04.028 + Jul 27 02:23:05.057: INFO: Waiting up to 5m0s for pod "pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4" in namespace "emptydir-7827" to be "Succeeded or Failed" + Jul 27 02:23:05.071: INFO: Pod "pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.7824ms + Jul 27 02:23:07.083: INFO: Pod "pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025714157s + Jul 27 02:23:09.081: INFO: Pod "pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02360019s + STEP: Saw pod success 07/27/23 02:23:09.081 + Jul 27 02:23:09.081: INFO: Pod "pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4" satisfied condition "Succeeded or Failed" + Jul 27 02:23:09.094: INFO: Trying to get logs from node 10.245.128.19 pod pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4 container test-container: + STEP: delete the pod 07/27/23 02:23:09.116 + Jul 27 02:23:09.151: INFO: Waiting for pod pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4 to disappear + Jul 27 02:23:09.160: INFO: Pod pod-455d2ae0-dd28-4de1-ad76-f267c60e55c4 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:54:36.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicationController + Jul 27 02:23:09.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "replication-controller-1812" for this suite. 06/12/23 21:54:36.901 + STEP: Destroying namespace "emptydir-7827" for this suite. 
07/27/23 02:23:09.174 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Watchers - should observe an object deletion if it stops meeting the requirements of the selector [Conformance] - test/e2e/apimachinery/watch.go:257 -[BeforeEach] [sig-api-machinery] Watchers +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 +[BeforeEach] [sig-node] Kubelet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:54:36.964 -Jun 12 21:54:36.964: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename watch 06/12/23 21:54:36.969 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:54:37.097 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:54:37.115 -[BeforeEach] [sig-api-machinery] Watchers +STEP: Creating a kubernetes client 07/27/23 02:23:09.198 +Jul 27 02:23:09.198: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubelet-test 07/27/23 02:23:09.198 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:09.278 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:09.29 +[BeforeEach] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:31 -[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] - test/e2e/apimachinery/watch.go:257 -STEP: creating a watch on configmaps with a certain label 06/12/23 21:54:37.137 -STEP: creating a new configmap 06/12/23 21:54:37.144 -STEP: modifying the configmap once 06/12/23 21:54:37.239 -STEP: changing the label value of the configmap 06/12/23 21:54:37.296 -STEP: Expecting to observe a delete notification for the watched object 06/12/23 21:54:37.396 -Jun 12 21:54:37.396: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119218 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:54:37.397: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119223 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:54:37.397: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119225 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} -STEP: modifying the configmap a second time 06/12/23 
21:54:37.397 -STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements 06/12/23 21:54:37.454 -STEP: changing the label value of the configmap back 06/12/23 21:54:47.456 -STEP: modifying the configmap a third time 06/12/23 21:54:47.485 -STEP: deleting the configmap 06/12/23 21:54:47.515 -STEP: Expecting to observe an add notification for the watched object when the label value was restored 06/12/23 21:54:47.536 -Jun 12 21:54:47.536: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119326 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:54:47.536: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119327 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} -Jun 12 21:54:47.537: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119328 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} -[AfterEach] [sig-api-machinery] Watchers +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 +[It] should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 +[AfterEach] [sig-node] Kubelet test/e2e/framework/node/init/init.go:32 -Jun 12 21:54:47.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Watchers +Jul 27 02:23:09.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Watchers +[DeferCleanup (Each)] [sig-node] Kubelet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Watchers +[DeferCleanup (Each)] [sig-node] Kubelet tear down framework | framework.go:193 -STEP: Destroying namespace "watch-5210" for this suite. 06/12/23 21:54:47.555 +STEP: Destroying namespace "kubelet-test-9450" for this suite. 
07/27/23 02:23:09.402 ------------------------------ -• [SLOW TEST] [10.615 seconds] -[sig-api-machinery] Watchers -test/e2e/apimachinery/framework.go:23 - should observe an object deletion if it stops meeting the requirements of the selector [Conformance] - test/e2e/apimachinery/watch.go:257 +• [0.256 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:82 + should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Watchers + [BeforeEach] [sig-node] Kubelet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:54:36.964 - Jun 12 21:54:36.964: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename watch 06/12/23 21:54:36.969 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:54:37.097 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:54:37.115 - [BeforeEach] [sig-api-machinery] Watchers + STEP: Creating a kubernetes client 07/27/23 02:23:09.198 + Jul 27 02:23:09.198: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubelet-test 07/27/23 02:23:09.198 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:09.278 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:09.29 + [BeforeEach] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:31 - [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] - test/e2e/apimachinery/watch.go:257 - STEP: creating a watch on configmaps with a certain label 06/12/23 21:54:37.137 - STEP: creating a new configmap 06/12/23 21:54:37.144 - STEP: modifying the configmap once 06/12/23 21:54:37.239 - STEP: changing the label value of the configmap 06/12/23 21:54:37.296 - STEP: Expecting to observe a delete notification for the watched object 06/12/23 21:54:37.396 - Jun 12 21:54:37.396: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119218 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:54:37.397: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119223 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:54:37.397: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119225 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} - STEP: modifying the configmap a second time 06/12/23 21:54:37.397 - STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements 06/12/23 21:54:37.454 - STEP: changing the label value of the configmap back 06/12/23 21:54:47.456 - STEP: modifying the configmap a third time 06/12/23 21:54:47.485 - STEP: deleting the configmap 06/12/23 21:54:47.515 - STEP: Expecting to observe an add notification for the watched object when the label value was restored 06/12/23 21:54:47.536 - Jun 12 21:54:47.536: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119326 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:54:47.536: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119327 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} - Jun 12 21:54:47.537: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5210 6e1c3d7c-6b19-4bc8-8e25-c3588c120dcf 119328 0 2023-06-12 21:54:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-06-12 21:54:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} - [AfterEach] [sig-api-machinery] Watchers + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 + [It] should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 + [AfterEach] [sig-node] Kubelet test/e2e/framework/node/init/init.go:32 - Jun 12 21:54:47.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Watchers + Jul 27 02:23:09.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Watchers + [DeferCleanup (Each)] [sig-node] Kubelet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Watchers + [DeferCleanup (Each)] [sig-node] Kubelet tear down framework | framework.go:193 - STEP: Destroying namespace "watch-5210" for this suite. 06/12/23 21:54:47.555 + STEP: Destroying namespace "kubelet-test-9450" for this suite. 
07/27/23 02:23:09.402 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Downward API volume - should provide container's memory limit [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:207 -[BeforeEach] [sig-storage] Downward API volume +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 +[BeforeEach] [sig-instrumentation] Events API set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:54:47.581 -Jun 12 21:54:47.581: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 21:54:47.583 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:54:47.635 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:54:47.654 -[BeforeEach] [sig-storage] Downward API volume +STEP: Creating a kubernetes client 07/27/23 02:23:09.455 +Jul 27 02:23:09.455: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename events 07/27/23 02:23:09.455 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:09.497 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:09.506 +[BeforeEach] [sig-instrumentation] Events API test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 -[It] should provide container's memory limit [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:207 -STEP: Creating a pod to test downward API volume plugin 06/12/23 21:54:47.667 -Jun 12 21:54:47.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c" in namespace "downward-api-2441" to be "Succeeded or Failed" -Jun 12 21:54:47.714: INFO: Pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.243712ms -Jun 12 21:54:49.728: INFO: Pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028232014s -Jun 12 21:54:51.728: INFO: Pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02789815s -Jun 12 21:54:53.758: INFO: Pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.058694495s -STEP: Saw pod success 06/12/23 21:54:53.759 -Jun 12 21:54:53.759: INFO: Pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c" satisfied condition "Succeeded or Failed" -Jun 12 21:54:53.794: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c container client-container: -STEP: delete the pod 06/12/23 21:54:53.902 -Jun 12 21:54:53.953: INFO: Waiting for pod downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c to disappear -Jun 12 21:54:53.991: INFO: Pod downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c no longer exists -[AfterEach] [sig-storage] Downward API volume +[BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 +STEP: creating a test event 07/27/23 02:23:09.516 +STEP: listing events in all namespaces 07/27/23 02:23:09.547 +STEP: listing events in test namespace 07/27/23 02:23:09.71 +STEP: listing events with field selection filtering on source 07/27/23 02:23:09.737 +STEP: listing events with field selection filtering on reportingController 07/27/23 02:23:09.754 +STEP: getting the test event 07/27/23 02:23:09.769 +STEP: patching the test event 07/27/23 02:23:09.783 +STEP: getting the test event 07/27/23 02:23:09.825 +STEP: updating the test event 07/27/23 02:23:09.838 +STEP: getting the test event 07/27/23 02:23:09.865 +STEP: deleting the test event 07/27/23 02:23:09.88 +STEP: listing events in all namespaces 07/27/23 02:23:09.913 +STEP: listing events in test namespace 07/27/23 02:23:10.019 +[AfterEach] [sig-instrumentation] Events API test/e2e/framework/node/init/init.go:32 -Jun 12 21:54:53.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Downward API volume +Jul 27 02:23:10.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events API test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-instrumentation] Events API dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-instrumentation] Events API tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-2441" for this suite. 06/12/23 21:54:54.017 +STEP: Destroying namespace "events-4374" for this suite. 
07/27/23 02:23:10.046 ------------------------------ -• [SLOW TEST] [6.461 seconds] -[sig-storage] Downward API volume -test/e2e/common/storage/framework.go:23 - should provide container's memory limit [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:207 +• [0.616 seconds] +[sig-instrumentation] Events API +test/e2e/instrumentation/common/framework.go:23 + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Downward API volume + [BeforeEach] [sig-instrumentation] Events API set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:54:47.581 - Jun 12 21:54:47.581: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 21:54:47.583 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:54:47.635 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:54:47.654 - [BeforeEach] [sig-storage] Downward API volume + STEP: Creating a kubernetes client 07/27/23 02:23:09.455 + Jul 27 02:23:09.455: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename events 07/27/23 02:23:09.455 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:09.497 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:09.506 + [BeforeEach] [sig-instrumentation] Events API test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 - [It] should provide container's memory limit [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:207 - STEP: Creating a pod to test downward API volume plugin 06/12/23 21:54:47.667 - Jun 12 21:54:47.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c" in namespace "downward-api-2441" to be "Succeeded or Failed" - Jun 12 21:54:47.714: INFO: Pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.243712ms - Jun 12 21:54:49.728: INFO: Pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028232014s - Jun 12 21:54:51.728: INFO: Pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02789815s - Jun 12 21:54:53.758: INFO: Pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.058694495s - STEP: Saw pod success 06/12/23 21:54:53.759 - Jun 12 21:54:53.759: INFO: Pod "downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c" satisfied condition "Succeeded or Failed" - Jun 12 21:54:53.794: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c container client-container: - STEP: delete the pod 06/12/23 21:54:53.902 - Jun 12 21:54:53.953: INFO: Waiting for pod downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c to disappear - Jun 12 21:54:53.991: INFO: Pod downwardapi-volume-09d286af-4cc3-4733-960f-90c8b9f2091c no longer exists - [AfterEach] [sig-storage] Downward API volume + [BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 + [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 + STEP: creating a test event 07/27/23 02:23:09.516 + STEP: listing events in all namespaces 07/27/23 02:23:09.547 + STEP: listing events in test namespace 07/27/23 02:23:09.71 + STEP: listing events with field selection filtering on source 07/27/23 02:23:09.737 + STEP: listing events with field selection filtering on reportingController 07/27/23 02:23:09.754 + STEP: getting the test event 07/27/23 02:23:09.769 + STEP: patching the test event 07/27/23 02:23:09.783 + STEP: getting the test event 07/27/23 02:23:09.825 + STEP: updating the test event 07/27/23 02:23:09.838 + STEP: getting the test event 07/27/23 02:23:09.865 + STEP: deleting the test event 07/27/23 02:23:09.88 + STEP: listing events in all namespaces 07/27/23 02:23:09.913 + STEP: listing events in test namespace 07/27/23 02:23:10.019 + [AfterEach] [sig-instrumentation] Events API test/e2e/framework/node/init/init.go:32 - Jun 12 21:54:53.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Downward API volume + Jul 27 02:23:10.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events API test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-instrumentation] Events API dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-instrumentation] Events API tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-2441" for this suite. 06/12/23 21:54:54.017 + STEP: Destroying namespace "events-4374" for this suite. 
07/27/23 02:23:10.046 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] Job - should create pods for an Indexed job with completion indexes and specified hostname [Conformance] - test/e2e/apps/job.go:366 -[BeforeEach] [sig-apps] Job +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 +[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:54:54.051 -Jun 12 21:54:54.051: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename job 06/12/23 21:54:54.053 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:54:54.124 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:54:54.144 -[BeforeEach] [sig-apps] Job +STEP: Creating a kubernetes client 07/27/23 02:23:10.073 +Jul 27 02:23:10.073: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename svcaccounts 07/27/23 02:23:10.074 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:10.134 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:10.144 +[BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 -[It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] - test/e2e/apps/job.go:366 -STEP: Creating Indexed job 06/12/23 21:54:54.159 -STEP: Ensuring job reaches completions 06/12/23 21:54:54.181 -STEP: Ensuring pods with index for job exist 06/12/23 21:55:08.193 -[AfterEach] [sig-apps] Job +[It] should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 +Jul 27 02:23:10.203: INFO: Waiting up to 5m0s for pod "pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30" in namespace "svcaccounts-6042" to be "running" +Jul 27 02:23:10.212: INFO: Pod "pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.764001ms +Jul 27 02:23:12.233: INFO: Pod "pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.029276988s +Jul 27 02:23:12.233: INFO: Pod "pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30" satisfied condition "running" +STEP: reading a file in the container 07/27/23 02:23:12.233 +Jul 27 02:23:12.233: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6042 pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container 07/27/23 02:23:12.507 +Jul 27 02:23:12.507: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6042 pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container 07/27/23 02:23:12.794 +Jul 27 02:23:12.794: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6042 pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +Jul 27 02:23:13.071: INFO: Got root ca configmap in namespace "svcaccounts-6042" +[AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 -Jun 12 21:55:08.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Job +Jul 27 02:23:13.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Job +[DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Job +[DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 -STEP: Destroying namespace "job-8317" for this suite. 06/12/23 21:55:08.223 +STEP: Destroying namespace "svcaccounts-6042" for this suite. 
07/27/23 02:23:13.092 ------------------------------ -• [SLOW TEST] [14.195 seconds] -[sig-apps] Job -test/e2e/apps/framework.go:23 - should create pods for an Indexed job with completion indexes and specified hostname [Conformance] - test/e2e/apps/job.go:366 +• [3.044 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Job + [BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:54:54.051 - Jun 12 21:54:54.051: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename job 06/12/23 21:54:54.053 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:54:54.124 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:54:54.144 - [BeforeEach] [sig-apps] Job + STEP: Creating a kubernetes client 07/27/23 02:23:10.073 + Jul 27 02:23:10.073: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename svcaccounts 07/27/23 02:23:10.074 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:10.134 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:10.144 + [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 - [It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] - test/e2e/apps/job.go:366 - STEP: Creating Indexed job 06/12/23 21:54:54.159 - STEP: Ensuring job reaches completions 06/12/23 21:54:54.181 - STEP: Ensuring pods with index for job exist 06/12/23 21:55:08.193 - [AfterEach] [sig-apps] Job + [It] should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 + Jul 27 02:23:10.203: INFO: Waiting up to 5m0s for pod "pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30" in namespace "svcaccounts-6042" to be "running" + Jul 27 02:23:10.212: INFO: Pod "pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.764001ms + Jul 27 02:23:12.233: INFO: Pod "pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.029276988s + Jul 27 02:23:12.233: INFO: Pod "pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30" satisfied condition "running" + STEP: reading a file in the container 07/27/23 02:23:12.233 + Jul 27 02:23:12.233: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6042 pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' + STEP: reading a file in the container 07/27/23 02:23:12.507 + Jul 27 02:23:12.507: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6042 pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' + STEP: reading a file in the container 07/27/23 02:23:12.794 + Jul 27 02:23:12.794: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6042 pod-service-account-6c9228e7-9aa7-40a2-9ea2-957c9efb0b30 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' + Jul 27 02:23:13.071: INFO: Got root ca configmap in namespace "svcaccounts-6042" + [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 - Jun 12 21:55:08.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Job + Jul 27 02:23:13.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Job + [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Job + [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 - STEP: Destroying namespace "job-8317" for this suite. 06/12/23 21:55:08.223 + STEP: Destroying namespace "svcaccounts-6042" for this suite. 07/27/23 02:23:13.092 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +S ------------------------------ -[sig-api-machinery] ResourceQuota - should create a ResourceQuota and capture the life of a replica set. [Conformance] - test/e2e/apimachinery/resource_quota.go:448 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 +[BeforeEach] [sig-node] Pods set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:55:08.247 -Jun 12 21:55:08.247: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 21:55:08.248 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:55:08.298 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:55:08.312 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 02:23:13.116 +Jul 27 02:23:13.117: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 02:23:13.117 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:13.157 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:13.166 +[BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 -[It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:448 -STEP: Counting existing ResourceQuota 06/12/23 21:55:08.328 -STEP: Creating a ResourceQuota 06/12/23 21:55:13.353 -STEP: Ensuring resource quota status is calculated 06/12/23 21:55:13.405 -STEP: Creating a ReplicaSet 06/12/23 21:55:15.418 -STEP: Ensuring resource quota status captures replicaset creation 06/12/23 21:55:15.498 -STEP: Deleting a ReplicaSet 06/12/23 21:55:17.512 -STEP: Ensuring resource quota status released usage 06/12/23 21:55:17.528 -[AfterEach] [sig-api-machinery] ResourceQuota +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 +STEP: creating the pod 07/27/23 02:23:13.177 +STEP: submitting the pod to kubernetes 07/27/23 02:23:13.177 +Jul 27 02:23:13.207: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146" in namespace "pods-8771" to be "running and ready" +Jul 27 02:23:13.217: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Pending", Reason="", readiness=false. Elapsed: 9.415561ms +Jul 27 02:23:13.217: INFO: The phase of Pod pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:23:15.228: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Running", Reason="", readiness=true. Elapsed: 2.02117552s +Jul 27 02:23:15.228: INFO: The phase of Pod pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146 is Running (Ready = true) +Jul 27 02:23:15.228: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146" satisfied condition "running and ready" +STEP: verifying the pod is in kubernetes 07/27/23 02:23:15.239 +STEP: updating the pod 07/27/23 02:23:15.249 +Jul 27 02:23:15.804: INFO: Successfully updated pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146" +Jul 27 02:23:15.804: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146" in namespace "pods-8771" to be "terminated with reason DeadlineExceeded" +Jul 27 02:23:15.813: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Running", Reason="", readiness=true. Elapsed: 8.774175ms +Jul 27 02:23:17.848: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Running", Reason="", readiness=true. Elapsed: 2.043382077s +Jul 27 02:23:19.825: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Running", Reason="", readiness=false. Elapsed: 4.02072569s +Jul 27 02:23:21.824: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 6.019437796s +Jul 27 02:23:21.824: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146" satisfied condition "terminated with reason DeadlineExceeded" +[AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 -Jun 12 21:55:19.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 02:23:21.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-2666" for this suite. 06/12/23 21:55:19.559 +STEP: Destroying namespace "pods-8771" for this suite. 07/27/23 02:23:21.878 ------------------------------ -• [SLOW TEST] [11.336 seconds] -[sig-api-machinery] ResourceQuota -test/e2e/apimachinery/framework.go:23 - should create a ResourceQuota and capture the life of a replica set. [Conformance] - test/e2e/apimachinery/resource_quota.go:448 +• [SLOW TEST] [8.816 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-node] Pods set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:55:08.247 - Jun 12 21:55:08.247: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 21:55:08.248 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:55:08.298 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:55:08.312 - [BeforeEach] [sig-api-machinery] ResourceQuota + STEP: Creating a kubernetes client 07/27/23 02:23:13.116 + Jul 27 02:23:13.117: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 02:23:13.117 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:13.157 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:13.166 + [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 - [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:448 - STEP: Counting existing ResourceQuota 06/12/23 21:55:08.328 - STEP: Creating a ResourceQuota 06/12/23 21:55:13.353 - STEP: Ensuring resource quota status is calculated 06/12/23 21:55:13.405 - STEP: Creating a ReplicaSet 06/12/23 21:55:15.418 - STEP: Ensuring resource quota status captures replicaset creation 06/12/23 21:55:15.498 - STEP: Deleting a ReplicaSet 06/12/23 21:55:17.512 - STEP: Ensuring resource quota status released usage 06/12/23 21:55:17.528 - [AfterEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 + STEP: creating the pod 07/27/23 02:23:13.177 + STEP: submitting the pod to kubernetes 07/27/23 02:23:13.177 + Jul 27 02:23:13.207: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146" in namespace "pods-8771" to be "running and ready" + Jul 27 02:23:13.217: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Pending", Reason="", readiness=false. Elapsed: 9.415561ms + Jul 27 02:23:13.217: INFO: The phase of Pod pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:23:15.228: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Running", Reason="", readiness=true. Elapsed: 2.02117552s + Jul 27 02:23:15.228: INFO: The phase of Pod pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146 is Running (Ready = true) + Jul 27 02:23:15.228: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146" satisfied condition "running and ready" + STEP: verifying the pod is in kubernetes 07/27/23 02:23:15.239 + STEP: updating the pod 07/27/23 02:23:15.249 + Jul 27 02:23:15.804: INFO: Successfully updated pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146" + Jul 27 02:23:15.804: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146" in namespace "pods-8771" to be "terminated with reason DeadlineExceeded" + Jul 27 02:23:15.813: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Running", Reason="", readiness=true. Elapsed: 8.774175ms + Jul 27 02:23:17.848: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Running", Reason="", readiness=true. Elapsed: 2.043382077s + Jul 27 02:23:19.825: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Running", Reason="", readiness=false. Elapsed: 4.02072569s + Jul 27 02:23:21.824: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 6.019437796s + Jul 27 02:23:21.824: INFO: Pod "pod-update-activedeadlineseconds-38576161-0c83-4b47-bf73-6f7aa9da2146" satisfied condition "terminated with reason DeadlineExceeded" + [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 - Jun 12 21:55:19.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + Jul 27 02:23:21.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-2666" for this suite. 06/12/23 21:55:19.559 + STEP: Destroying namespace "pods-8771" for this suite. 07/27/23 02:23:21.878 << End Captured GinkgoWriter Output ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should be able to deny attaching pod [Conformance] - test/e2e/apimachinery/webhook.go:209 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +S +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 +[BeforeEach] [sig-node] Container Lifecycle Hook set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:55:19.585 -Jun 12 21:55:19.585: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 21:55:19.587 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:55:19.641 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:55:19.66 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:23:21.938 +Jul 27 02:23:21.939: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-lifecycle-hook 07/27/23 02:23:21.939 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:22.001 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:22.01 +[BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 21:55:19.727 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:55:21.083 -STEP: Deploying the webhook pod 06/12/23 21:55:21.146 -STEP: Wait for the deployment to be ready 06/12/23 21:55:21.182 -Jun 12 21:55:21.208: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 21:55:23.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 55, 21, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 55, 21, 0, time.Local), Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 55, 21, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 55, 21, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 21:55:25.279 -STEP: Verifying the service has paired with the endpoint 06/12/23 21:55:25.315 -Jun 12 21:55:26.316: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should be able to deny attaching pod [Conformance] - test/e2e/apimachinery/webhook.go:209 -STEP: Registering the webhook via the AdmissionRegistration API 06/12/23 21:55:26.331 -STEP: create a pod 06/12/23 21:55:26.392 -Jun 12 21:55:26.418: INFO: Waiting up to 5m0s for pod "to-be-attached-pod" in namespace "webhook-4823" to be "running" -Jun 12 21:55:26.429: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.912005ms -Jun 12 21:55:28.443: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024517183s -Jun 12 21:55:30.463: INFO: Pod "to-be-attached-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.04427302s -Jun 12 21:55:30.463: INFO: Pod "to-be-attached-pod" satisfied condition "running" -STEP: 'kubectl attach' the pod, should be denied by the webhook 06/12/23 21:55:30.463 -Jun 12 21:55:30.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=webhook-4823 attach --namespace=webhook-4823 to-be-attached-pod -i -c=container1' -Jun 12 21:55:30.784: INFO: rc: 1 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 07/27/23 02:23:22.049 +Jul 27 02:23:22.076: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-2077" to be "running and ready" +Jul 27 02:23:22.084: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127374ms +Jul 27 02:23:22.084: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:23:24.094: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.018063996s +Jul 27 02:23:24.094: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Jul 27 02:23:24.094: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 +STEP: create the pod with lifecycle hook 07/27/23 02:23:24.103 +Jul 27 02:23:24.118: INFO: Waiting up to 5m0s for pod "pod-with-prestop-exec-hook" in namespace "container-lifecycle-hook-2077" to be "running and ready" +Jul 27 02:23:24.125: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 7.74086ms +Jul 27 02:23:24.125: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:23:26.135: INFO: Pod "pod-with-prestop-exec-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.017772572s +Jul 27 02:23:26.135: INFO: The phase of Pod pod-with-prestop-exec-hook is Running (Ready = true) +Jul 27 02:23:26.135: INFO: Pod "pod-with-prestop-exec-hook" satisfied condition "running and ready" +STEP: delete the pod with lifecycle hook 07/27/23 02:23:26.144 +Jul 27 02:23:26.159: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jul 27 02:23:26.168: INFO: Pod pod-with-prestop-exec-hook still exists +Jul 27 02:23:28.170: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jul 27 02:23:28.179: INFO: Pod pod-with-prestop-exec-hook still exists +Jul 27 02:23:30.170: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jul 27 02:23:30.180: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook 07/27/23 02:23:30.18 +[AfterEach] [sig-node] Container Lifecycle Hook test/e2e/framework/node/init/init.go:32 -Jun 12 21:55:30.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 02:23:30.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-4823" for this suite. 06/12/23 21:55:31.035 -STEP: Destroying namespace "webhook-4823-markers" for this suite. 06/12/23 21:55:31.106 +STEP: Destroying namespace "container-lifecycle-hook-2077" for this suite. 
07/27/23 02:23:30.221 ------------------------------ -• [SLOW TEST] [11.577 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - should be able to deny attaching pod [Conformance] - test/e2e/apimachinery/webhook.go:209 +• [SLOW TEST] [8.305 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] Container Lifecycle Hook set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:55:19.585 - Jun 12 21:55:19.585: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 21:55:19.587 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:55:19.641 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:55:19.66 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:23:21.938 + Jul 27 02:23:21.939: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-lifecycle-hook 07/27/23 02:23:21.939 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:22.001 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:22.01 + [BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 21:55:19.727 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:55:21.083 - STEP: Deploying the webhook pod 06/12/23 21:55:21.146 - STEP: Wait for the deployment to be ready 06/12/23 21:55:21.182 - Jun 12 21:55:21.208: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 21:55:23.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 55, 21, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 55, 21, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 55, 21, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 55, 21, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 21:55:25.279 - STEP: Verifying the service has paired with the endpoint 06/12/23 21:55:25.315 - Jun 12 21:55:26.316: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should be able to deny attaching pod [Conformance] - test/e2e/apimachinery/webhook.go:209 - STEP: Registering the webhook via the AdmissionRegistration API 
06/12/23 21:55:26.331 - STEP: create a pod 06/12/23 21:55:26.392 - Jun 12 21:55:26.418: INFO: Waiting up to 5m0s for pod "to-be-attached-pod" in namespace "webhook-4823" to be "running" - Jun 12 21:55:26.429: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.912005ms - Jun 12 21:55:28.443: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024517183s - Jun 12 21:55:30.463: INFO: Pod "to-be-attached-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.04427302s - Jun 12 21:55:30.463: INFO: Pod "to-be-attached-pod" satisfied condition "running" - STEP: 'kubectl attach' the pod, should be denied by the webhook 06/12/23 21:55:30.463 - Jun 12 21:55:30.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=webhook-4823 attach --namespace=webhook-4823 to-be-attached-pod -i -c=container1' - Jun 12 21:55:30.784: INFO: rc: 1 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 07/27/23 02:23:22.049 + Jul 27 02:23:22.076: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-2077" to be "running and ready" + Jul 27 02:23:22.084: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127374ms + Jul 27 02:23:22.084: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:23:24.094: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.018063996s + Jul 27 02:23:24.094: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Jul 27 02:23:24.094: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 + STEP: create the pod with lifecycle hook 07/27/23 02:23:24.103 + Jul 27 02:23:24.118: INFO: Waiting up to 5m0s for pod "pod-with-prestop-exec-hook" in namespace "container-lifecycle-hook-2077" to be "running and ready" + Jul 27 02:23:24.125: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 7.74086ms + Jul 27 02:23:24.125: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:23:26.135: INFO: Pod "pod-with-prestop-exec-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.017772572s + Jul 27 02:23:26.135: INFO: The phase of Pod pod-with-prestop-exec-hook is Running (Ready = true) + Jul 27 02:23:26.135: INFO: Pod "pod-with-prestop-exec-hook" satisfied condition "running and ready" + STEP: delete the pod with lifecycle hook 07/27/23 02:23:26.144 + Jul 27 02:23:26.159: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Jul 27 02:23:26.168: INFO: Pod pod-with-prestop-exec-hook still exists + Jul 27 02:23:28.170: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Jul 27 02:23:28.179: INFO: Pod pod-with-prestop-exec-hook still exists + Jul 27 02:23:30.170: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Jul 27 02:23:30.180: INFO: Pod pod-with-prestop-exec-hook no longer exists + STEP: check prestop hook 07/27/23 02:23:30.18 + [AfterEach] [sig-node] Container Lifecycle Hook test/e2e/framework/node/init/init.go:32 - Jun 12 21:55:30.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 02:23:30.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-4823" for this suite. 06/12/23 21:55:31.035 - STEP: Destroying namespace "webhook-4823-markers" for this suite. 06/12/23 21:55:31.106 + STEP: Destroying namespace "container-lifecycle-hook-2077" for this suite. 07/27/23 02:23:30.221 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-node] Kubelet when scheduling a read only busybox container - should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:184 -[BeforeEach] [sig-node] Kubelet - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:55:31.167 -Jun 12 21:55:31.167: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubelet-test 06/12/23 21:55:31.171 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:55:31.303 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:55:31.385 -[BeforeEach] [sig-node] Kubelet - test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Kubelet - test/e2e/common/node/kubelet.go:41 -[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:184 -Jun 12 21:55:31.534: INFO: Waiting up to 5m0s for pod "busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4" in namespace "kubelet-test-7438" to be "running and ready" -Jun 12 21:55:31.571: INFO: Pod "busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 37.277213ms -Jun 12 21:55:31.571: INFO: The phase of Pod busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:55:33.596: INFO: Pod "busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062072909s -Jun 12 21:55:33.596: INFO: The phase of Pod busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:55:35.586: INFO: Pod "busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4": Phase="Running", Reason="", readiness=true. Elapsed: 4.051750419s -Jun 12 21:55:35.586: INFO: The phase of Pod busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4 is Running (Ready = true) -Jun 12 21:55:35.586: INFO: Pod "busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4" satisfied condition "running and ready" -[AfterEach] [sig-node] Kubelet +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:23:30.244 +Jul 27 02:23:30.244: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename custom-resource-definition 07/27/23 02:23:30.245 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:30.285 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:30.294 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 +Jul 27 02:23:30.304: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:55:35.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Kubelet +Jul 27 02:23:31.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Kubelet +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Kubelet +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "kubelet-test-7438" for this suite. 06/12/23 21:55:35.644 +STEP: Destroying namespace "custom-resource-definition-1340" for this suite. 
07/27/23 02:23:31.372 ------------------------------ -• [4.502 seconds] -[sig-node] Kubelet -test/e2e/common/node/framework.go:23 - when scheduling a read only busybox container - test/e2e/common/node/kubelet.go:175 - should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:184 +• [1.158 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Kubelet + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:55:31.167 - Jun 12 21:55:31.167: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubelet-test 06/12/23 21:55:31.171 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:55:31.303 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:55:31.385 - [BeforeEach] [sig-node] Kubelet + STEP: Creating a kubernetes client 07/27/23 02:23:30.244 + Jul 27 02:23:30.244: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename custom-resource-definition 07/27/23 02:23:30.245 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:30.285 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:30.294 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Kubelet - test/e2e/common/node/kubelet.go:41 - [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:184 - Jun 12 21:55:31.534: INFO: Waiting up to 5m0s for pod "busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4" in namespace "kubelet-test-7438" to be "running and ready" - Jun 12 21:55:31.571: INFO: Pod "busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.277213ms - Jun 12 21:55:31.571: INFO: The phase of Pod busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:55:33.596: INFO: Pod "busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062072909s - Jun 12 21:55:33.596: INFO: The phase of Pod busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:55:35.586: INFO: Pod "busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.051750419s - Jun 12 21:55:35.586: INFO: The phase of Pod busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4 is Running (Ready = true) - Jun 12 21:55:35.586: INFO: Pod "busybox-readonly-fs7158aa5c-d36d-439d-97b4-7778bf4f59a4" satisfied condition "running and ready" - [AfterEach] [sig-node] Kubelet + [It] creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 + Jul 27 02:23:30.304: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:55:35.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Kubelet + Jul 27 02:23:31.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Kubelet + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Kubelet + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "kubelet-test-7438" for this suite. 06/12/23 21:55:35.644 + STEP: Destroying namespace "custom-resource-definition-1340" for this suite. 07/27/23 02:23:31.372 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +S ------------------------------ -[sig-network] Proxy version v1 - should proxy through a service and a pod [Conformance] - test/e2e/network/proxy.go:101 -[BeforeEach] version v1 +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:649 +[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:55:35.679 -Jun 12 21:55:35.679: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename proxy 06/12/23 21:55:35.681 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:55:35.733 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:55:35.744 -[BeforeEach] version v1 +STEP: Creating a kubernetes client 07/27/23 02:23:31.402 +Jul 27 02:23:31.403: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename svcaccounts 07/27/23 02:23:31.404 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:31.464 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:31.473 +[BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 -[It] should proxy through a service and a pod [Conformance] - test/e2e/network/proxy.go:101 -STEP: starting an echo server on multiple ports 06/12/23 21:55:35.809 -STEP: creating replication controller proxy-service-d7pn2 in namespace proxy-416 06/12/23 21:55:35.81 -I0612 21:55:35.828148 23 runners.go:193] Created replication controller with name: proxy-service-d7pn2, namespace: proxy-416, replica count: 1 -I0612 21:55:36.890231 23 runners.go:193] proxy-service-d7pn2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady -I0612 21:55:37.891101 23 runners.go:193] proxy-service-d7pn2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:55:38.893023 23 runners.go:193] proxy-service-d7pn2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady -I0612 21:55:39.893781 23 runners.go:193] proxy-service-d7pn2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -Jun 12 21:55:39.906: INFO: setup took 4.145954998s, starting test cases -STEP: running 16 cases, 20 attempts per case, 320 total attempts 06/12/23 21:55:39.907 -Jun 12 21:55:39.945: INFO: (0) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg/proxy/: test (200; 34.805271ms) -Jun 12 21:55:39.950: INFO: (0) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 39.619133ms) -Jun 12 21:55:39.950: INFO: (0) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtestt... (200; 26.560692ms) -Jun 12 21:55:40.013: INFO: (1) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname2/proxy/: bar (200; 29.091017ms) -Jun 12 21:55:40.013: INFO: (1) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 26.89269ms) -Jun 12 21:55:40.013: INFO: (1) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 27.931143ms) -Jun 12 21:55:40.013: INFO: (1) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 27.198889ms) -Jun 12 21:55:40.013: INFO: (1) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 27.562509ms) -Jun 12 21:55:40.019: INFO: (1) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname2/proxy/: bar (200; 33.064774ms) -Jun 12 21:55:40.019: INFO: (1) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname1/proxy/: foo (200; 34.162318ms) -Jun 12 21:55:40.020: INFO: (1) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname1/proxy/: foo (200; 34.413869ms) -Jun 12 21:55:40.019: INFO: (1) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 33.581757ms) -Jun 12 21:55:40.046: INFO: (2) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 20.852061ms) -Jun 12 21:55:40.050: INFO: (2) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 27.396981ms) -Jun 12 21:55:40.053: INFO: (2) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 26.377015ms) -Jun 12 21:55:40.053: INFO: (2) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: t... 
(200; 16.663033ms) -Jun 12 21:55:40.082: INFO: (3) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 19.447881ms) -Jun 12 21:55:40.082: INFO: (3) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg/proxy/: test (200; 18.147273ms) -Jun 12 21:55:40.083: INFO: (3) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 17.779998ms) -Jun 12 21:55:40.083: INFO: (3) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 18.218866ms) -Jun 12 21:55:40.083: INFO: (3) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 20.565301ms) -Jun 12 21:55:40.140: INFO: (4) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 20.100716ms) -Jun 12 21:55:40.140: INFO: (4) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 20.556492ms) -Jun 12 21:55:40.140: INFO: (4) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: t... (200; 22.920957ms) -Jun 12 21:55:40.143: INFO: (4) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 23.250778ms) -Jun 12 21:55:40.143: INFO: (4) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 23.408769ms) -Jun 12 21:55:40.143: INFO: (4) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 28.448344ms) -Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... (200; 29.426537ms) -Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 29.616199ms) -Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 29.11551ms) -Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 29.036998ms) -Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 29.427242ms) -Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 27.191105ms) -Jun 12 21:55:40.217: INFO: (6) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... (200; 29.511145ms) -Jun 12 21:55:40.218: INFO: (6) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 29.64716ms) -Jun 12 21:55:40.225: INFO: (6) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 36.208233ms) -Jun 12 21:55:40.225: INFO: (6) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 21.432935ms) -Jun 12 21:55:40.249: INFO: (7) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.153146ms) -Jun 12 21:55:40.250: INFO: (7) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... 
(200; 23.686116ms) -Jun 12 21:55:40.251: INFO: (7) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 23.544912ms) -Jun 12 21:55:40.252: INFO: (7) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 20.32129ms) -Jun 12 21:55:40.280: INFO: (8) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 22.31965ms) -Jun 12 21:55:40.280: INFO: (8) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.27903ms) -Jun 12 21:55:40.280: INFO: (8) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... (200; 22.815041ms) -Jun 12 21:55:40.281: INFO: (8) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 23.231611ms) -Jun 12 21:55:40.281: INFO: (8) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 22.900545ms) -Jun 12 21:55:40.281: INFO: (8) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 23.004853ms) -Jun 12 21:55:40.283: INFO: (8) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname1/proxy/: foo (200; 25.984255ms) -Jun 12 21:55:40.284: INFO: (8) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname1/proxy/: foo (200; 26.454509ms) -Jun 12 21:55:40.288: INFO: (8) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 29.74284ms) -Jun 12 21:55:40.288: INFO: (8) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 30.077187ms) -Jun 12 21:55:40.288: INFO: (8) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname2/proxy/: bar (200; 30.334869ms) -Jun 12 21:55:40.287: INFO: (8) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname2/proxy/: bar (200; 28.901008ms) -Jun 12 21:55:40.305: INFO: (9) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 16.118587ms) -Jun 12 21:55:40.308: INFO: (9) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 16.603096ms) -Jun 12 21:55:40.309: INFO: (9) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 18.255764ms) -Jun 12 21:55:40.309: INFO: (9) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 17.386114ms) -Jun 12 21:55:40.309: INFO: (9) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 19.107382ms) -Jun 12 21:55:40.311: INFO: (9) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 19.569908ms) -Jun 12 21:55:40.311: INFO: (9) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: t... (200; 18.199306ms) -Jun 12 21:55:40.336: INFO: (10) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 18.791035ms) -Jun 12 21:55:40.336: INFO: (10) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 19.212032ms) -Jun 12 21:55:40.341: INFO: (10) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... 
(200; 21.169057ms) -Jun 12 21:55:40.370: INFO: (11) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 21.283668ms) -Jun 12 21:55:40.370: INFO: (11) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg/proxy/: test (200; 21.173818ms) -Jun 12 21:55:40.372: INFO: (11) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.900423ms) -Jun 12 21:55:40.372: INFO: (11) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... (200; 22.538292ms) -Jun 12 21:55:40.402: INFO: (12) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 23.152707ms) -Jun 12 21:55:40.402: INFO: (12) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 23.10695ms) -Jun 12 21:55:40.402: INFO: (12) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: testt... (200; 17.699949ms) -Jun 12 21:55:40.429: INFO: (13) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 18.97338ms) -Jun 12 21:55:40.430: INFO: (13) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 19.64686ms) -Jun 12 21:55:40.431: INFO: (13) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg/proxy/: test (200; 20.95501ms) -Jun 12 21:55:40.432: INFO: (13) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.038354ms) -Jun 12 21:55:40.432: INFO: (13) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 22.374102ms) -Jun 12 21:55:40.432: INFO: (13) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.18524ms) -Jun 12 21:55:40.434: INFO: (13) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: testtest (200; 21.274079ms) -Jun 12 21:55:40.473: INFO: (14) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... 
(200; 21.975175ms) -Jun 12 21:55:40.473: INFO: (14) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 22.557199ms) -Jun 12 21:55:40.475: INFO: (14) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname1/proxy/: foo (200; 25.319975ms) -Jun 12 21:55:40.489: INFO: (14) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname1/proxy/: foo (200; 37.79688ms) -Jun 12 21:55:40.489: INFO: (14) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 38.79994ms) -Jun 12 21:55:40.489: INFO: (14) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname2/proxy/: bar (200; 38.931639ms) -Jun 12 21:55:40.489: INFO: (14) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname2/proxy/: bar (200; 39.104631ms) -Jun 12 21:55:40.489: INFO: (14) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 38.387475ms) -Jun 12 21:55:40.515: INFO: (15) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg/proxy/: test (200; 25.130506ms) -Jun 12 21:55:40.515: INFO: (15) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 24.877546ms) -Jun 12 21:55:40.515: INFO: (15) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 25.553303ms) -Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 26.127703ms) -Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 26.315765ms) -Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 26.077582ms) -Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 26.573806ms) -Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 26.173275ms) -Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 21.031302ms) -Jun 12 21:55:40.545: INFO: (16) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 21.088173ms) -Jun 12 21:55:40.545: INFO: (16) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 21.71081ms) -Jun 12 21:55:40.546: INFO: (16) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.8739ms) -Jun 12 21:55:40.546: INFO: (16) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 22.880575ms) -Jun 12 21:55:40.546: INFO: (16) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 22.946849ms) -Jun 12 21:55:40.546: INFO: (16) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 22.643334ms) -Jun 12 21:55:40.546: INFO: (16) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 22.627715ms) -Jun 12 21:55:40.577: INFO: (17) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: t... 
(200; 22.623648ms) -Jun 12 21:55:40.577: INFO: (17) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtesttest (200; 20.620548ms) -Jun 12 21:55:40.636: INFO: (18) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 18.625627ms) -Jun 12 21:55:40.636: INFO: (18) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 18.646001ms) -Jun 12 21:55:40.636: INFO: (18) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 18.782393ms) -Jun 12 21:55:40.637: INFO: (18) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 20.403142ms) -Jun 12 21:55:40.638: INFO: (18) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname2/proxy/: bar (200; 24.914136ms) -Jun 12 21:55:40.642: INFO: (18) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname2/proxy/: bar (200; 27.548407ms) -Jun 12 21:55:40.642: INFO: (18) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 27.343772ms) -Jun 12 21:55:40.643: INFO: (18) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 25.516363ms) -Jun 12 21:55:40.644: INFO: (18) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname1/proxy/: foo (200; 27.160767ms) -Jun 12 21:55:40.644: INFO: (18) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname1/proxy/: foo (200; 26.783992ms) -Jun 12 21:55:40.664: INFO: (19) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 18.497503ms) -Jun 12 21:55:40.664: INFO: (19) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 22.706794ms) -Jun 12 21:55:40.668: INFO: (19) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 22.602585ms) -Jun 12 21:55:40.668: INFO: (19) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 21.844352ms) -Jun 12 21:55:40.668: INFO: (19) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 22.634011ms) -Jun 12 21:55:40.668: INFO: (19) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... 
(200; 23.554062ms) -Jun 12 21:55:40.669: INFO: (19) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 21.989388ms) -Jun 12 21:55:40.672: INFO: (19) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname2/proxy/: bar (200; 26.224904ms) -Jun 12 21:55:40.674: INFO: (19) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname1/proxy/: foo (200; 29.421737ms) -Jun 12 21:55:40.674: INFO: (19) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname2/proxy/: bar (200; 28.145234ms) -Jun 12 21:55:40.675: INFO: (19) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname1/proxy/: foo (200; 29.889401ms) -Jun 12 21:55:40.676: INFO: (19) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 30.87952ms) -Jun 12 21:55:40.676: INFO: (19) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 29.500324ms) -STEP: deleting ReplicationController proxy-service-d7pn2 in namespace proxy-416, will wait for the garbage collector to delete the pods 06/12/23 21:55:40.676 -Jun 12 21:55:40.752: INFO: Deleting ReplicationController proxy-service-d7pn2 took: 16.018283ms -Jun 12 21:55:40.853: INFO: Terminating ReplicationController proxy-service-d7pn2 pods took: 100.613096ms -[AfterEach] version v1 +[It] should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:649 +STEP: creating a ServiceAccount 07/27/23 02:23:31.482 +STEP: watching for the ServiceAccount to be added 07/27/23 02:23:31.516 +STEP: patching the ServiceAccount 07/27/23 02:23:31.52 +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) 07/27/23 02:23:31.543 +STEP: deleting the ServiceAccount 07/27/23 02:23:31.574 +[AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 -Jun 12 21:55:43.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] version v1 +Jul 27 02:23:31.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] version v1 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 -[DeferCleanup (Each)] version v1 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 -STEP: Destroying namespace "proxy-416" for this suite. 06/12/23 21:55:43.975 +STEP: Destroying namespace "svcaccounts-4609" for this suite. 
07/27/23 02:23:31.864 ------------------------------ -• [SLOW TEST] [8.321 seconds] -[sig-network] Proxy -test/e2e/network/common/framework.go:23 - version v1 - test/e2e/network/proxy.go:74 - should proxy through a service and a pod [Conformance] - test/e2e/network/proxy.go:101 +• [0.611 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:649 Begin Captured GinkgoWriter Output >> - [BeforeEach] version v1 + [BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:55:35.679 - Jun 12 21:55:35.679: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename proxy 06/12/23 21:55:35.681 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:55:35.733 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:55:35.744 - [BeforeEach] version v1 + STEP: Creating a kubernetes client 07/27/23 02:23:31.402 + Jul 27 02:23:31.403: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename svcaccounts 07/27/23 02:23:31.404 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:31.464 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:31.473 + [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 - [It] should proxy through a service and a pod [Conformance] - test/e2e/network/proxy.go:101 - STEP: starting an echo server on multiple ports 06/12/23 21:55:35.809 - STEP: creating replication controller proxy-service-d7pn2 in namespace proxy-416 06/12/23 21:55:35.81 - I0612 21:55:35.828148 23 runners.go:193] Created replication controller with name: proxy-service-d7pn2, namespace: proxy-416, replica count: 1 - I0612 21:55:36.890231 23 runners.go:193] proxy-service-d7pn2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:55:37.891101 23 runners.go:193] proxy-service-d7pn2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:55:38.893023 23 runners.go:193] proxy-service-d7pn2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady - I0612 21:55:39.893781 23 runners.go:193] proxy-service-d7pn2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - Jun 12 21:55:39.906: INFO: setup took 4.145954998s, starting test cases - STEP: running 16 cases, 20 attempts per case, 320 total attempts 06/12/23 21:55:39.907 - Jun 12 21:55:39.945: INFO: (0) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg/proxy/: test (200; 34.805271ms) - Jun 12 21:55:39.950: INFO: (0) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 39.619133ms) - Jun 12 21:55:39.950: INFO: (0) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtestt... 
(200; 26.560692ms) - Jun 12 21:55:40.013: INFO: (1) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname2/proxy/: bar (200; 29.091017ms) - Jun 12 21:55:40.013: INFO: (1) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 26.89269ms) - Jun 12 21:55:40.013: INFO: (1) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 27.931143ms) - Jun 12 21:55:40.013: INFO: (1) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 27.198889ms) - Jun 12 21:55:40.013: INFO: (1) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 27.562509ms) - Jun 12 21:55:40.019: INFO: (1) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname2/proxy/: bar (200; 33.064774ms) - Jun 12 21:55:40.019: INFO: (1) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname1/proxy/: foo (200; 34.162318ms) - Jun 12 21:55:40.020: INFO: (1) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname1/proxy/: foo (200; 34.413869ms) - Jun 12 21:55:40.019: INFO: (1) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 33.581757ms) - Jun 12 21:55:40.046: INFO: (2) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 20.852061ms) - Jun 12 21:55:40.050: INFO: (2) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 27.396981ms) - Jun 12 21:55:40.053: INFO: (2) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 26.377015ms) - Jun 12 21:55:40.053: INFO: (2) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: t... (200; 16.663033ms) - Jun 12 21:55:40.082: INFO: (3) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 19.447881ms) - Jun 12 21:55:40.082: INFO: (3) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg/proxy/: test (200; 18.147273ms) - Jun 12 21:55:40.083: INFO: (3) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 17.779998ms) - Jun 12 21:55:40.083: INFO: (3) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 18.218866ms) - Jun 12 21:55:40.083: INFO: (3) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 20.565301ms) - Jun 12 21:55:40.140: INFO: (4) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 20.100716ms) - Jun 12 21:55:40.140: INFO: (4) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 20.556492ms) - Jun 12 21:55:40.140: INFO: (4) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: t... (200; 22.920957ms) - Jun 12 21:55:40.143: INFO: (4) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 23.250778ms) - Jun 12 21:55:40.143: INFO: (4) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 23.408769ms) - Jun 12 21:55:40.143: INFO: (4) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 28.448344ms) - Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... 
(200; 29.426537ms) - Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 29.616199ms) - Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 29.11551ms) - Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 29.036998ms) - Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 29.427242ms) - Jun 12 21:55:40.180: INFO: (5) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 27.191105ms) - Jun 12 21:55:40.217: INFO: (6) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... (200; 29.511145ms) - Jun 12 21:55:40.218: INFO: (6) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 29.64716ms) - Jun 12 21:55:40.225: INFO: (6) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 36.208233ms) - Jun 12 21:55:40.225: INFO: (6) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 21.432935ms) - Jun 12 21:55:40.249: INFO: (7) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.153146ms) - Jun 12 21:55:40.250: INFO: (7) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... (200; 23.686116ms) - Jun 12 21:55:40.251: INFO: (7) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 23.544912ms) - Jun 12 21:55:40.252: INFO: (7) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 20.32129ms) - Jun 12 21:55:40.280: INFO: (8) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 22.31965ms) - Jun 12 21:55:40.280: INFO: (8) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.27903ms) - Jun 12 21:55:40.280: INFO: (8) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... 
(200; 22.815041ms) - Jun 12 21:55:40.281: INFO: (8) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 23.231611ms) - Jun 12 21:55:40.281: INFO: (8) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 22.900545ms) - Jun 12 21:55:40.281: INFO: (8) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 23.004853ms) - Jun 12 21:55:40.283: INFO: (8) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname1/proxy/: foo (200; 25.984255ms) - Jun 12 21:55:40.284: INFO: (8) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname1/proxy/: foo (200; 26.454509ms) - Jun 12 21:55:40.288: INFO: (8) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 29.74284ms) - Jun 12 21:55:40.288: INFO: (8) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 30.077187ms) - Jun 12 21:55:40.288: INFO: (8) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname2/proxy/: bar (200; 30.334869ms) - Jun 12 21:55:40.287: INFO: (8) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname2/proxy/: bar (200; 28.901008ms) - Jun 12 21:55:40.305: INFO: (9) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 16.118587ms) - Jun 12 21:55:40.308: INFO: (9) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 16.603096ms) - Jun 12 21:55:40.309: INFO: (9) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 18.255764ms) - Jun 12 21:55:40.309: INFO: (9) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 17.386114ms) - Jun 12 21:55:40.309: INFO: (9) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 19.107382ms) - Jun 12 21:55:40.311: INFO: (9) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 19.569908ms) - Jun 12 21:55:40.311: INFO: (9) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: t... (200; 18.199306ms) - Jun 12 21:55:40.336: INFO: (10) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 18.791035ms) - Jun 12 21:55:40.336: INFO: (10) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 19.212032ms) - Jun 12 21:55:40.341: INFO: (10) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... (200; 21.169057ms) - Jun 12 21:55:40.370: INFO: (11) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 21.283668ms) - Jun 12 21:55:40.370: INFO: (11) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg/proxy/: test (200; 21.173818ms) - Jun 12 21:55:40.372: INFO: (11) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.900423ms) - Jun 12 21:55:40.372: INFO: (11) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... (200; 22.538292ms) - Jun 12 21:55:40.402: INFO: (12) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 23.152707ms) - Jun 12 21:55:40.402: INFO: (12) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 23.10695ms) - Jun 12 21:55:40.402: INFO: (12) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: testt... 
(200; 17.699949ms) - Jun 12 21:55:40.429: INFO: (13) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 18.97338ms) - Jun 12 21:55:40.430: INFO: (13) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 19.64686ms) - Jun 12 21:55:40.431: INFO: (13) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg/proxy/: test (200; 20.95501ms) - Jun 12 21:55:40.432: INFO: (13) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.038354ms) - Jun 12 21:55:40.432: INFO: (13) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 22.374102ms) - Jun 12 21:55:40.432: INFO: (13) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.18524ms) - Jun 12 21:55:40.434: INFO: (13) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: testtest (200; 21.274079ms) - Jun 12 21:55:40.473: INFO: (14) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 21.975175ms) - Jun 12 21:55:40.473: INFO: (14) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 22.557199ms) - Jun 12 21:55:40.475: INFO: (14) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname1/proxy/: foo (200; 25.319975ms) - Jun 12 21:55:40.489: INFO: (14) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname1/proxy/: foo (200; 37.79688ms) - Jun 12 21:55:40.489: INFO: (14) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 38.79994ms) - Jun 12 21:55:40.489: INFO: (14) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname2/proxy/: bar (200; 38.931639ms) - Jun 12 21:55:40.489: INFO: (14) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname2/proxy/: bar (200; 39.104631ms) - Jun 12 21:55:40.489: INFO: (14) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 38.387475ms) - Jun 12 21:55:40.515: INFO: (15) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg/proxy/: test (200; 25.130506ms) - Jun 12 21:55:40.515: INFO: (15) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 24.877546ms) - Jun 12 21:55:40.515: INFO: (15) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 25.553303ms) - Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 26.127703ms) - Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... 
(200; 26.315765ms) - Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 26.077582ms) - Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 26.573806ms) - Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 26.173275ms) - Jun 12 21:55:40.516: INFO: (15) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 21.031302ms) - Jun 12 21:55:40.545: INFO: (16) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 21.088173ms) - Jun 12 21:55:40.545: INFO: (16) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 21.71081ms) - Jun 12 21:55:40.546: INFO: (16) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 22.8739ms) - Jun 12 21:55:40.546: INFO: (16) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... (200; 22.880575ms) - Jun 12 21:55:40.546: INFO: (16) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 22.946849ms) - Jun 12 21:55:40.546: INFO: (16) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 22.643334ms) - Jun 12 21:55:40.546: INFO: (16) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtest (200; 22.627715ms) - Jun 12 21:55:40.577: INFO: (17) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: t... (200; 22.623648ms) - Jun 12 21:55:40.577: INFO: (17) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testtesttest (200; 20.620548ms) - Jun 12 21:55:40.636: INFO: (18) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 18.625627ms) - Jun 12 21:55:40.636: INFO: (18) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:1080/proxy/: t... 
(200; 18.646001ms) - Jun 12 21:55:40.636: INFO: (18) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 18.782393ms) - Jun 12 21:55:40.637: INFO: (18) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 20.403142ms) - Jun 12 21:55:40.638: INFO: (18) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname2/proxy/: bar (200; 24.914136ms) - Jun 12 21:55:40.642: INFO: (18) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname2/proxy/: bar (200; 27.548407ms) - Jun 12 21:55:40.642: INFO: (18) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 27.343772ms) - Jun 12 21:55:40.643: INFO: (18) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 25.516363ms) - Jun 12 21:55:40.644: INFO: (18) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname1/proxy/: foo (200; 27.160767ms) - Jun 12 21:55:40.644: INFO: (18) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname1/proxy/: foo (200; 26.783992ms) - Jun 12 21:55:40.664: INFO: (19) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:462/proxy/: tls qux (200; 18.497503ms) - Jun 12 21:55:40.664: INFO: (19) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:443/proxy/: test (200; 22.706794ms) - Jun 12 21:55:40.668: INFO: (19) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 22.602585ms) - Jun 12 21:55:40.668: INFO: (19) /api/v1/namespaces/proxy-416/pods/http:proxy-service-d7pn2-h2vlg:160/proxy/: foo (200; 21.844352ms) - Jun 12 21:55:40.668: INFO: (19) /api/v1/namespaces/proxy-416/pods/https:proxy-service-d7pn2-h2vlg:460/proxy/: tls baz (200; 22.634011ms) - Jun 12 21:55:40.668: INFO: (19) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:1080/proxy/: testt... 
(200; 23.554062ms) - Jun 12 21:55:40.669: INFO: (19) /api/v1/namespaces/proxy-416/pods/proxy-service-d7pn2-h2vlg:162/proxy/: bar (200; 21.989388ms) - Jun 12 21:55:40.672: INFO: (19) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname2/proxy/: bar (200; 26.224904ms) - Jun 12 21:55:40.674: INFO: (19) /api/v1/namespaces/proxy-416/services/proxy-service-d7pn2:portname1/proxy/: foo (200; 29.421737ms) - Jun 12 21:55:40.674: INFO: (19) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname2/proxy/: bar (200; 28.145234ms) - Jun 12 21:55:40.675: INFO: (19) /api/v1/namespaces/proxy-416/services/http:proxy-service-d7pn2:portname1/proxy/: foo (200; 29.889401ms) - Jun 12 21:55:40.676: INFO: (19) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname2/proxy/: tls qux (200; 30.87952ms) - Jun 12 21:55:40.676: INFO: (19) /api/v1/namespaces/proxy-416/services/https:proxy-service-d7pn2:tlsportname1/proxy/: tls baz (200; 29.500324ms) - STEP: deleting ReplicationController proxy-service-d7pn2 in namespace proxy-416, will wait for the garbage collector to delete the pods 06/12/23 21:55:40.676 - Jun 12 21:55:40.752: INFO: Deleting ReplicationController proxy-service-d7pn2 took: 16.018283ms - Jun 12 21:55:40.853: INFO: Terminating ReplicationController proxy-service-d7pn2 pods took: 100.613096ms - [AfterEach] version v1 + [It] should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:649 + STEP: creating a ServiceAccount 07/27/23 02:23:31.482 + STEP: watching for the ServiceAccount to be added 07/27/23 02:23:31.516 + STEP: patching the ServiceAccount 07/27/23 02:23:31.52 + STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) 07/27/23 02:23:31.543 + STEP: deleting the ServiceAccount 07/27/23 02:23:31.574 + [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 - Jun 12 21:55:43.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] version v1 + Jul 27 02:23:31.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] version v1 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 - [DeferCleanup (Each)] version v1 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 - STEP: Destroying namespace "proxy-416" for this suite. 06/12/23 21:55:43.975 + STEP: Destroying namespace "svcaccounts-4609" for this suite. 
07/27/23 02:23:31.864 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSS +SSS ------------------------------ -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] - updates the published spec when one version gets renamed [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:391 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:55:44.003 -Jun 12 21:55:44.004: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 21:55:44.007 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:55:44.064 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:55:44.084 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:23:32.014 +Jul 27 02:23:32.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 02:23:32.016 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:32.141 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:32.154 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[It] updates the published spec when one version gets renamed [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:391 -STEP: set up a multi version CRD 06/12/23 21:55:44.125 -Jun 12 21:55:44.127: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: rename a version 06/12/23 21:56:00.097 -STEP: check the new version name is served 06/12/23 21:56:00.15 -STEP: check the old version name is removed 06/12/23 21:56:08.668 -STEP: check the other version is not changed 06/12/23 21:56:10.441 -[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 +STEP: creating service multi-endpoint-test in namespace services-3422 07/27/23 02:23:32.165 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3422 to expose endpoints map[] 07/27/23 02:23:32.245 +Jul 27 02:23:32.260: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found +Jul 27 02:23:33.285: INFO: successfully validated that service multi-endpoint-test in namespace services-3422 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-3422 07/27/23 02:23:33.285 +Jul 27 02:23:33.308: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-3422" to be "running and ready" +Jul 27 02:23:33.316: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.717931ms +Jul 27 02:23:33.316: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:23:35.331: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.022961777s +Jul 27 02:23:35.331: INFO: The phase of Pod pod1 is Running (Ready = true) +Jul 27 02:23:35.331: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3422 to expose endpoints map[pod1:[100]] 07/27/23 02:23:35.347 +Jul 27 02:23:35.384: INFO: successfully validated that service multi-endpoint-test in namespace services-3422 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-3422 07/27/23 02:23:35.384 +Jul 27 02:23:35.399: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-3422" to be "running and ready" +Jul 27 02:23:35.407: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.467368ms +Jul 27 02:23:35.407: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:23:37.416: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.017162345s +Jul 27 02:23:37.417: INFO: The phase of Pod pod2 is Running (Ready = true) +Jul 27 02:23:37.417: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3422 to expose endpoints map[pod1:[100] pod2:[101]] 07/27/23 02:23:37.425 +Jul 27 02:23:37.475: INFO: successfully validated that service multi-endpoint-test in namespace services-3422 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods 07/27/23 02:23:37.475 +Jul 27 02:23:37.475: INFO: Creating new exec pod +Jul 27 02:23:37.491: INFO: Waiting up to 5m0s for pod "execpodj9hzl" in namespace "services-3422" to be "running" +Jul 27 02:23:37.502: INFO: Pod "execpodj9hzl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.609662ms +Jul 27 02:23:39.512: INFO: Pod "execpodj9hzl": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.020428328s +Jul 27 02:23:39.512: INFO: Pod "execpodj9hzl" satisfied condition "running" +Jul 27 02:23:40.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3422 exec execpodj9hzl -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 80' +Jul 27 02:23:40.733: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Jul 27 02:23:40.733: INFO: stdout: "" +Jul 27 02:23:40.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3422 exec execpodj9hzl -- /bin/sh -x -c nc -v -z -w 2 172.21.92.220 80' +Jul 27 02:23:40.923: INFO: stderr: "+ nc -v -z -w 2 172.21.92.220 80\nConnection to 172.21.92.220 80 port [tcp/http] succeeded!\n" +Jul 27 02:23:40.923: INFO: stdout: "" +Jul 27 02:23:40.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3422 exec execpodj9hzl -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 81' +Jul 27 02:23:41.116: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Jul 27 02:23:41.116: INFO: stdout: "" +Jul 27 02:23:41.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3422 exec execpodj9hzl -- /bin/sh -x -c nc -v -z -w 2 172.21.92.220 81' +Jul 27 02:23:41.341: INFO: stderr: "+ nc -v -z -w 2 172.21.92.220 81\nConnection to 172.21.92.220 81 port [tcp/*] succeeded!\n" +Jul 27 02:23:41.341: INFO: stdout: "" +STEP: Deleting pod pod1 in namespace services-3422 07/27/23 02:23:41.341 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3422 to expose endpoints map[pod2:[101]] 07/27/23 02:23:41.367 +Jul 27 02:23:41.399: INFO: successfully validated that service multi-endpoint-test in namespace services-3422 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-3422 07/27/23 02:23:41.399 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3422 to expose endpoints map[] 07/27/23 02:23:41.429 +Jul 27 02:23:41.461: INFO: successfully validated that service multi-endpoint-test in namespace services-3422 exposes endpoints map[] +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 21:56:23.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +Jul 27 02:23:41.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "crd-publish-openapi-5873" for this suite. 06/12/23 21:56:23.462 +STEP: Destroying namespace "services-3422" for this suite. 
07/27/23 02:23:41.533 ------------------------------ -• [SLOW TEST] [39.485 seconds] -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - updates the published spec when one version gets renamed [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:391 +• [SLOW TEST] [9.541 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:55:44.003 - Jun 12 21:55:44.004: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 21:55:44.007 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:55:44.064 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:55:44.084 - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:23:32.014 + Jul 27 02:23:32.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 02:23:32.016 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:32.141 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:32.154 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [It] updates the published spec when one version gets renamed [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:391 - STEP: set up a multi version CRD 06/12/23 21:55:44.125 - Jun 12 21:55:44.127: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: rename a version 06/12/23 21:56:00.097 - STEP: check the new version name is served 06/12/23 21:56:00.15 - STEP: check the old version name is removed 06/12/23 21:56:08.668 - STEP: check the other version is not changed 06/12/23 21:56:10.441 - [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 + STEP: creating service multi-endpoint-test in namespace services-3422 07/27/23 02:23:32.165 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3422 to expose endpoints map[] 07/27/23 02:23:32.245 + Jul 27 02:23:32.260: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found + Jul 27 02:23:33.285: INFO: successfully validated that service multi-endpoint-test in namespace services-3422 exposes endpoints map[] + STEP: Creating pod pod1 in namespace services-3422 07/27/23 02:23:33.285 + Jul 27 02:23:33.308: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-3422" to be "running and ready" + Jul 27 02:23:33.316: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.717931ms + Jul 27 02:23:33.316: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:23:35.331: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.022961777s + Jul 27 02:23:35.331: INFO: The phase of Pod pod1 is Running (Ready = true) + Jul 27 02:23:35.331: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3422 to expose endpoints map[pod1:[100]] 07/27/23 02:23:35.347 + Jul 27 02:23:35.384: INFO: successfully validated that service multi-endpoint-test in namespace services-3422 exposes endpoints map[pod1:[100]] + STEP: Creating pod pod2 in namespace services-3422 07/27/23 02:23:35.384 + Jul 27 02:23:35.399: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-3422" to be "running and ready" + Jul 27 02:23:35.407: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.467368ms + Jul 27 02:23:35.407: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:23:37.416: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.017162345s + Jul 27 02:23:37.417: INFO: The phase of Pod pod2 is Running (Ready = true) + Jul 27 02:23:37.417: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3422 to expose endpoints map[pod1:[100] pod2:[101]] 07/27/23 02:23:37.425 + Jul 27 02:23:37.475: INFO: successfully validated that service multi-endpoint-test in namespace services-3422 exposes endpoints map[pod1:[100] pod2:[101]] + STEP: Checking if the Service forwards traffic to pods 07/27/23 02:23:37.475 + Jul 27 02:23:37.475: INFO: Creating new exec pod + Jul 27 02:23:37.491: INFO: Waiting up to 5m0s for pod "execpodj9hzl" in namespace "services-3422" to be "running" + Jul 27 02:23:37.502: INFO: Pod "execpodj9hzl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.609662ms + Jul 27 02:23:39.512: INFO: Pod "execpodj9hzl": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.020428328s + Jul 27 02:23:39.512: INFO: Pod "execpodj9hzl" satisfied condition "running" + Jul 27 02:23:40.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3422 exec execpodj9hzl -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 80' + Jul 27 02:23:40.733: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" + Jul 27 02:23:40.733: INFO: stdout: "" + Jul 27 02:23:40.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3422 exec execpodj9hzl -- /bin/sh -x -c nc -v -z -w 2 172.21.92.220 80' + Jul 27 02:23:40.923: INFO: stderr: "+ nc -v -z -w 2 172.21.92.220 80\nConnection to 172.21.92.220 80 port [tcp/http] succeeded!\n" + Jul 27 02:23:40.923: INFO: stdout: "" + Jul 27 02:23:40.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3422 exec execpodj9hzl -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 81' + Jul 27 02:23:41.116: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" + Jul 27 02:23:41.116: INFO: stdout: "" + Jul 27 02:23:41.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=services-3422 exec execpodj9hzl -- /bin/sh -x -c nc -v -z -w 2 172.21.92.220 81' + Jul 27 02:23:41.341: INFO: stderr: "+ nc -v -z -w 2 172.21.92.220 81\nConnection to 172.21.92.220 81 port [tcp/*] succeeded!\n" + Jul 27 02:23:41.341: INFO: stdout: "" + STEP: Deleting pod pod1 in namespace services-3422 07/27/23 02:23:41.341 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3422 to expose endpoints map[pod2:[101]] 07/27/23 02:23:41.367 + Jul 27 02:23:41.399: INFO: successfully validated that service multi-endpoint-test in namespace services-3422 exposes endpoints map[pod2:[101]] + STEP: Deleting pod pod2 in namespace services-3422 07/27/23 02:23:41.399 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3422 to expose endpoints map[] 07/27/23 02:23:41.429 + Jul 27 02:23:41.461: INFO: successfully validated that service multi-endpoint-test in namespace services-3422 exposes endpoints map[] + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 21:56:23.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + Jul 27 02:23:41.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "crd-publish-openapi-5873" for this suite. 06/12/23 21:56:23.462 + STEP: Destroying namespace "services-3422" for this suite. 07/27/23 02:23:41.533 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSS +S ------------------------------ -[sig-api-machinery] ResourceQuota - should verify ResourceQuota with best effort scope. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:803 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:306 +[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:56:23.499 -Jun 12 21:56:23.499: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 21:56:23.501 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:23.57 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:23.596 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 02:23:41.555 +Jul 27 02:23:41.555: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename statefulset 07/27/23 02:23:41.557 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:41.603 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:41.612 +[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 -[It] should verify ResourceQuota with best effort scope. [Conformance] - test/e2e/apimachinery/resource_quota.go:803 -STEP: Creating a ResourceQuota with best effort scope 06/12/23 21:56:23.63 -STEP: Ensuring ResourceQuota status is calculated 06/12/23 21:56:23.648 -STEP: Creating a ResourceQuota with not best effort scope 06/12/23 21:56:25.663 -STEP: Ensuring ResourceQuota status is calculated 06/12/23 21:56:25.678 -STEP: Creating a best-effort pod 06/12/23 21:56:27.691 -STEP: Ensuring resource quota with best effort scope captures the pod usage 06/12/23 21:56:27.724 -STEP: Ensuring resource quota with not best effort ignored the pod usage 06/12/23 21:56:29.76 -STEP: Deleting the pod 06/12/23 21:56:31.779 -STEP: Ensuring resource quota status released the pod usage 06/12/23 21:56:31.8 -STEP: Creating a not best-effort pod 06/12/23 21:56:33.818 -STEP: Ensuring resource quota with not best effort scope captures the pod usage 06/12/23 21:56:33.844 -STEP: Ensuring resource quota with best effort scope ignored the pod usage 06/12/23 21:56:35.859 -STEP: Deleting the pod 06/12/23 21:56:37.888 -STEP: Ensuring resource quota status released the pod usage 06/12/23 21:56:37.927 -[AfterEach] [sig-api-machinery] ResourceQuota +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-6002 07/27/23 02:23:41.625 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:306 +STEP: Creating a new StatefulSet 07/27/23 02:23:41.647 +Jul 27 02:23:41.707: INFO: Found 0 stateful pods, waiting for 3 +Jul 27 02:23:51.717: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 02:23:51.717: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 02:23:51.717: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 02:23:51.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-6002 exec ss2-1 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' +Jul 27 02:23:51.992: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jul 27 02:23:51.992: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jul 27 02:23:51.992: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 07/27/23 02:24:02.038 +Jul 27 02:24:02.086: INFO: Updating stateful set ss2 +STEP: Creating a new revision 07/27/23 02:24:02.086 +STEP: Updating Pods in reverse ordinal order 07/27/23 02:24:12.156 +Jul 27 02:24:12.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-6002 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jul 27 02:24:12.382: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jul 27 02:24:12.382: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jul 27 02:24:12.382: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jul 27 02:24:22.499: INFO: Waiting for StatefulSet statefulset-6002/ss2 to complete update +STEP: Rolling back to a previous revision 07/27/23 02:24:32.522 +Jul 27 02:24:32.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-6002 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jul 27 02:24:32.719: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jul 27 02:24:32.719: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jul 27 02:24:32.719: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jul 27 02:24:42.811: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order 07/27/23 02:24:52.861 +Jul 27 02:24:52.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-6002 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jul 27 02:24:53.105: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jul 27 02:24:53.105: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jul 27 02:24:53.105: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jul 27 02:25:03.171: INFO: Waiting for StatefulSet statefulset-6002/ss2 to complete update +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jul 27 02:25:13.193: INFO: Deleting all statefulset in ns statefulset-6002 +Jul 27 02:25:13.205: INFO: Scaling statefulset ss2 to 0 +Jul 27 02:25:23.254: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 02:25:23.279: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 -Jun 12 21:56:39.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 02:25:23.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] 
StatefulSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-5872" for this suite. 06/12/23 21:56:39.963 +STEP: Destroying namespace "statefulset-6002" for this suite. 07/27/23 02:25:23.339 ------------------------------ -• [SLOW TEST] [16.487 seconds] -[sig-api-machinery] ResourceQuota -test/e2e/apimachinery/framework.go:23 - should verify ResourceQuota with best effort scope. [Conformance] - test/e2e/apimachinery/resource_quota.go:803 +• [SLOW TEST] [101.807 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:306 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:56:23.499 - Jun 12 21:56:23.499: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 21:56:23.501 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:23.57 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:23.596 - [BeforeEach] [sig-api-machinery] ResourceQuota + STEP: Creating a kubernetes client 07/27/23 02:23:41.555 + Jul 27 02:23:41.555: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename statefulset 07/27/23 02:23:41.557 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:23:41.603 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:23:41.612 + [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 - [It] should verify ResourceQuota with best effort scope. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:803 - STEP: Creating a ResourceQuota with best effort scope 06/12/23 21:56:23.63 - STEP: Ensuring ResourceQuota status is calculated 06/12/23 21:56:23.648 - STEP: Creating a ResourceQuota with not best effort scope 06/12/23 21:56:25.663 - STEP: Ensuring ResourceQuota status is calculated 06/12/23 21:56:25.678 - STEP: Creating a best-effort pod 06/12/23 21:56:27.691 - STEP: Ensuring resource quota with best effort scope captures the pod usage 06/12/23 21:56:27.724 - STEP: Ensuring resource quota with not best effort ignored the pod usage 06/12/23 21:56:29.76 - STEP: Deleting the pod 06/12/23 21:56:31.779 - STEP: Ensuring resource quota status released the pod usage 06/12/23 21:56:31.8 - STEP: Creating a not best-effort pod 06/12/23 21:56:33.818 - STEP: Ensuring resource quota with not best effort scope captures the pod usage 06/12/23 21:56:33.844 - STEP: Ensuring resource quota with best effort scope ignored the pod usage 06/12/23 21:56:35.859 - STEP: Deleting the pod 06/12/23 21:56:37.888 - STEP: Ensuring resource quota status released the pod usage 06/12/23 21:56:37.927 - [AfterEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-6002 07/27/23 02:23:41.625 + [It] should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:306 + STEP: Creating a new StatefulSet 07/27/23 02:23:41.647 + Jul 27 02:23:41.707: INFO: Found 0 stateful pods, waiting for 3 + Jul 27 02:23:51.717: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 02:23:51.717: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 02:23:51.717: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 02:23:51.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-6002 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jul 27 02:23:51.992: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jul 27 02:23:51.992: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jul 27 02:23:51.992: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + STEP: Updating StatefulSet template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 07/27/23 02:24:02.038 + Jul 27 02:24:02.086: INFO: Updating stateful set ss2 + STEP: Creating a new revision 07/27/23 02:24:02.086 + STEP: Updating Pods in reverse ordinal order 07/27/23 02:24:12.156 + Jul 27 02:24:12.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-6002 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jul 27 02:24:12.382: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jul 27 02:24:12.382: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jul 27 02:24:12.382: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jul 27 
02:24:22.499: INFO: Waiting for StatefulSet statefulset-6002/ss2 to complete update + STEP: Rolling back to a previous revision 07/27/23 02:24:32.522 + Jul 27 02:24:32.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-6002 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jul 27 02:24:32.719: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jul 27 02:24:32.719: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jul 27 02:24:32.719: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jul 27 02:24:42.811: INFO: Updating stateful set ss2 + STEP: Rolling back update in reverse ordinal order 07/27/23 02:24:52.861 + Jul 27 02:24:52.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-6002 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jul 27 02:24:53.105: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jul 27 02:24:53.105: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jul 27 02:24:53.105: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jul 27 02:25:03.171: INFO: Waiting for StatefulSet statefulset-6002/ss2 to complete update + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jul 27 02:25:13.193: INFO: Deleting all statefulset in ns statefulset-6002 + Jul 27 02:25:13.205: INFO: Scaling statefulset ss2 to 0 + Jul 27 02:25:23.254: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 02:25:23.279: INFO: Deleting statefulset ss2 + [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 - Jun 12 21:56:39.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + Jul 27 02:25:23.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-5872" for this suite. 06/12/23 21:56:39.963 + STEP: Destroying namespace "statefulset-6002" for this suite. 
07/27/23 02:25:23.339 << End Captured GinkgoWriter Output ------------------------------ -S +SSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should include webhook resources in discovery documents [Conformance] - test/e2e/apimachinery/webhook.go:117 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-node] RuntimeClass + should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 +[BeforeEach] [sig-node] RuntimeClass set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:56:39.987 -Jun 12 21:56:39.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 21:56:39.989 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:40.046 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:40.06 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:25:23.362 +Jul 27 02:25:23.363: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename runtimeclass 07/27/23 02:25:23.364 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:25:23.405 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:25:23.414 +[BeforeEach] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 21:56:40.133 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:56:41.839 -STEP: Deploying the webhook pod 06/12/23 21:56:41.873 -STEP: Wait for the deployment to be ready 06/12/23 21:56:41.901 -Jun 12 21:56:41.917: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 21:56:43.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 56, 41, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 56, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 56, 41, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 56, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 21:56:45.954 -STEP: Verifying the service has paired with the endpoint 06/12/23 21:56:45.992 -Jun 12 21:56:46.994: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should include webhook resources in discovery documents [Conformance] - test/e2e/apimachinery/webhook.go:117 -STEP: fetching the /apis discovery document 06/12/23 21:56:47.006 -STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document 06/12/23 21:56:47.012 -STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery 
document 06/12/23 21:56:47.012 -STEP: fetching the /apis/admissionregistration.k8s.io discovery document 06/12/23 21:56:47.013 -STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document 06/12/23 21:56:47.018 -STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 06/12/23 21:56:47.018 -STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 06/12/23 21:56:47.023 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 +[AfterEach] [sig-node] RuntimeClass test/e2e/framework/node/init/init.go:32 -Jun 12 21:56:47.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 02:25:23.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] RuntimeClass dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] RuntimeClass tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-1989" for this suite. 06/12/23 21:56:47.165 -STEP: Destroying namespace "webhook-1989-markers" for this suite. 06/12/23 21:56:47.192 +STEP: Destroying namespace "runtimeclass-4304" for this suite. 
07/27/23 02:25:23.473 ------------------------------ -• [SLOW TEST] [7.233 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - should include webhook resources in discovery documents [Conformance] - test/e2e/apimachinery/webhook.go:117 +• [0.152 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] RuntimeClass set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:56:39.987 - Jun 12 21:56:39.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 21:56:39.989 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:40.046 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:40.06 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:25:23.362 + Jul 27 02:25:23.363: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename runtimeclass 07/27/23 02:25:23.364 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:25:23.405 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:25:23.414 + [BeforeEach] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 21:56:40.133 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:56:41.839 - STEP: Deploying the webhook pod 06/12/23 21:56:41.873 - STEP: Wait for the deployment to be ready 06/12/23 21:56:41.901 - Jun 12 21:56:41.917: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 21:56:43.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 56, 41, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 56, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 56, 41, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 56, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 21:56:45.954 - STEP: Verifying the service has paired with the endpoint 06/12/23 21:56:45.992 - Jun 12 21:56:46.994: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should include webhook resources in discovery documents [Conformance] - test/e2e/apimachinery/webhook.go:117 - STEP: fetching the /apis discovery document 06/12/23 21:56:47.006 - STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document 
06/12/23 21:56:47.012 - STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document 06/12/23 21:56:47.012 - STEP: fetching the /apis/admissionregistration.k8s.io discovery document 06/12/23 21:56:47.013 - STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document 06/12/23 21:56:47.018 - STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 06/12/23 21:56:47.018 - STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 06/12/23 21:56:47.023 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 + [AfterEach] [sig-node] RuntimeClass test/e2e/framework/node/init/init.go:32 - Jun 12 21:56:47.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 02:25:23.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] RuntimeClass dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] RuntimeClass tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-1989" for this suite. 06/12/23 21:56:47.165 - STEP: Destroying namespace "webhook-1989-markers" for this suite. 06/12/23 21:56:47.192 + STEP: Destroying namespace "runtimeclass-4304" for this suite. 
07/27/23 02:25:23.473 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSS ------------------------------ -[sig-storage] EmptyDir volumes - should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:197 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:56:47.227 -Jun 12 21:56:47.227: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 21:56:47.231 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:47.29 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:47.304 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 02:25:23.515 +Jul 27 02:25:23.515: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename tables 07/27/23 02:25:23.516 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:25:23.564 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:25:23.573 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation test/e2e/framework/metrics/init/init.go:31 -[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:197 -STEP: Creating a pod to test emptydir 0644 on node default medium 06/12/23 21:56:47.317 -Jun 12 21:56:47.341: INFO: Waiting up to 5m0s for pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca" in namespace "emptydir-6405" to be "Succeeded or Failed" -Jun 12 21:56:47.350: INFO: Pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.615989ms -Jun 12 21:56:49.365: INFO: Pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023759526s -Jun 12 21:56:51.361: INFO: Pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019531891s -Jun 12 21:56:53.365: INFO: Pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023601227s -STEP: Saw pod success 06/12/23 21:56:53.365 -Jun 12 21:56:53.366: INFO: Pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca" satisfied condition "Succeeded or Failed" -Jun 12 21:56:53.385: INFO: Trying to get logs from node 10.138.75.70 pod pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca container test-container: -STEP: delete the pod 06/12/23 21:56:53.539 -Jun 12 21:56:53.621: INFO: Waiting for pod pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca to disappear -Jun 12 21:56:53.645: INFO: Pod pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/apimachinery/table_conversion.go:49 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation test/e2e/framework/node/init/init.go:32 -Jun 12 21:56:53.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 02:25:23.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-6405" for this suite. 06/12/23 21:56:53.668 +STEP: Destroying namespace "tables-361" for this suite. 
07/27/23 02:25:23.602 ------------------------------ -• [SLOW TEST] [6.515 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:197 +• [0.151 seconds] +[sig-api-machinery] Servers with support for Table transformation +test/e2e/apimachinery/framework.go:23 + should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:56:47.227 - Jun 12 21:56:47.227: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 21:56:47.231 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:47.29 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:47.304 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 02:25:23.515 + Jul 27 02:25:23.515: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename tables 07/27/23 02:25:23.516 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:25:23.564 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:25:23.573 + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation test/e2e/framework/metrics/init/init.go:31 - [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:197 - STEP: Creating a pod to test emptydir 0644 on node default medium 06/12/23 21:56:47.317 - Jun 12 21:56:47.341: INFO: Waiting up to 5m0s for pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca" in namespace "emptydir-6405" to be "Succeeded or Failed" - Jun 12 21:56:47.350: INFO: Pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.615989ms - Jun 12 21:56:49.365: INFO: Pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023759526s - Jun 12 21:56:51.361: INFO: Pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019531891s - Jun 12 21:56:53.365: INFO: Pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023601227s - STEP: Saw pod success 06/12/23 21:56:53.365 - Jun 12 21:56:53.366: INFO: Pod "pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca" satisfied condition "Succeeded or Failed" - Jun 12 21:56:53.385: INFO: Trying to get logs from node 10.138.75.70 pod pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca container test-container: - STEP: delete the pod 06/12/23 21:56:53.539 - Jun 12 21:56:53.621: INFO: Waiting for pod pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca to disappear - Jun 12 21:56:53.645: INFO: Pod pod-c4b36b9d-b4a7-49e8-a3f4-8c39e08367ca no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/apimachinery/table_conversion.go:49 + [It] should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 + [AfterEach] [sig-api-machinery] Servers with support for Table transformation test/e2e/framework/node/init/init.go:32 - Jun 12 21:56:53.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 02:25:23.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-6405" for this suite. 06/12/23 21:56:53.668 + STEP: Destroying namespace "tables-361" for this suite. 
07/27/23 02:25:23.602 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces - should list and delete a collection of PodDisruptionBudgets [Conformance] - test/e2e/apps/disruption.go:87 -[BeforeEach] [sig-apps] DisruptionController - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:56:53.754 -Jun 12 21:56:53.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename disruption 06/12/23 21:56:53.759 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:53.888 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:53.923 -[BeforeEach] [sig-apps] DisruptionController - test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] DisruptionController - test/e2e/apps/disruption.go:72 -[BeforeEach] Listing PodDisruptionBudgets for all namespaces +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:56:53.936 -Jun 12 21:56:53.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename disruption-2 06/12/23 21:56:53.95 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:54.029 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:54.077 -[BeforeEach] Listing PodDisruptionBudgets for all namespaces +STEP: Creating a kubernetes client 07/27/23 02:25:23.667 +Jul 27 02:25:23.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:25:23.668 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:25:23.74 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:25:23.749 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[It] should list and delete a collection of PodDisruptionBudgets [Conformance] - test/e2e/apps/disruption.go:87 -STEP: Waiting for the pdb to be processed 06/12/23 21:56:54.196 -STEP: Waiting for the pdb to be processed 06/12/23 21:56:54.234 -STEP: Waiting for the pdb to be processed 06/12/23 21:56:54.383 -STEP: listing a collection of PDBs across all namespaces 06/12/23 21:56:54.447 -STEP: listing a collection of PDBs in namespace disruption-8689 06/12/23 21:56:54.472 -STEP: deleting a collection of PDBs 06/12/23 21:56:54.499 -STEP: Waiting for the PDB collection to be deleted 06/12/23 21:56:54.561 -[AfterEach] Listing PodDisruptionBudgets for all namespaces - test/e2e/framework/node/init/init.go:32 -Jun 12 21:56:54.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-apps] DisruptionController +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 +STEP: creating all guestbook components 07/27/23 02:25:23.758 +Jul 27 02:25:23.758: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + 
tier: backend + +Jul 27 02:25:23.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' +Jul 27 02:25:24.315: INFO: stderr: "" +Jul 27 02:25:24.315: INFO: stdout: "service/agnhost-replica created\n" +Jul 27 02:25:24.315: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Jul 27 02:25:24.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' +Jul 27 02:25:24.816: INFO: stderr: "" +Jul 27 02:25:24.816: INFO: stdout: "service/agnhost-primary created\n" +Jul 27 02:25:24.816: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. + # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Jul 27 02:25:24.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' +Jul 27 02:25:25.244: INFO: stderr: "" +Jul 27 02:25:25.244: INFO: stdout: "service/frontend created\n" +Jul 27 02:25:25.244: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Jul 27 02:25:25.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' +Jul 27 02:25:25.706: INFO: stderr: "" +Jul 27 02:25:25.706: INFO: stdout: "deployment.apps/frontend created\n" +Jul 27 02:25:25.706: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Jul 27 02:25:25.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' +Jul 27 02:25:26.113: INFO: stderr: "" +Jul 27 02:25:26.113: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Jul 27 02:25:26.113: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Jul 27 02:25:26.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 
create -f -' +Jul 27 02:25:26.653: INFO: stderr: "" +Jul 27 02:25:26.653: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app 07/27/23 02:25:26.653 +Jul 27 02:25:26.654: INFO: Waiting for all frontend pods to be Running. +Jul 27 02:25:31.708: INFO: Waiting for frontend to serve content. +Jul 27 02:25:31.742: INFO: Trying to add a new entry to the guestbook. +Jul 27 02:25:31.786: INFO: Verifying that added entry can be retrieved. +STEP: using delete to clean up resources 07/27/23 02:25:31.807 +Jul 27 02:25:31.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' +Jul 27 02:25:31.938: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jul 27 02:25:31.938: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources 07/27/23 02:25:31.938 +Jul 27 02:25:31.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' +Jul 27 02:25:32.062: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jul 27 02:25:32.063: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources 07/27/23 02:25:32.063 +Jul 27 02:25:32.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' +Jul 27 02:25:32.190: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jul 27 02:25:32.190: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources 07/27/23 02:25:32.191 +Jul 27 02:25:32.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' +Jul 27 02:25:32.292: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jul 27 02:25:32.292: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources 07/27/23 02:25:32.292 +Jul 27 02:25:32.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' +Jul 27 02:25:32.394: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jul 27 02:25:32.394: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources 07/27/23 02:25:32.394 +Jul 27 02:25:32.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' +Jul 27 02:25:32.481: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jul 27 02:25:32.481: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 21:56:54.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces - dump namespaces | framework.go:196 -[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces - tear down framework | framework.go:193 -STEP: Destroying namespace "disruption-2-2944" for this suite. 06/12/23 21:56:54.702 -[DeferCleanup (Each)] [sig-apps] DisruptionController +Jul 27 02:25:32.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] DisruptionController +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] DisruptionController +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "disruption-8689" for this suite. 06/12/23 21:56:54.737 +STEP: Destroying namespace "kubectl-7186" for this suite. 07/27/23 02:25:32.546 ------------------------------ -• [1.031 seconds] -[sig-apps] DisruptionController -test/e2e/apps/framework.go:23 - Listing PodDisruptionBudgets for all namespaces - test/e2e/apps/disruption.go:78 - should list and delete a collection of PodDisruptionBudgets [Conformance] - test/e2e/apps/disruption.go:87 +• [SLOW TEST] [8.903 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Guestbook application + test/e2e/kubectl/kubectl.go:369 + should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] DisruptionController + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:56:53.754 - Jun 12 21:56:53.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename disruption 06/12/23 21:56:53.759 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:53.888 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:53.923 - [BeforeEach] [sig-apps] DisruptionController + STEP: Creating a kubernetes client 07/27/23 02:25:23.667 + Jul 27 02:25:23.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:25:23.668 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:25:23.74 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:25:23.749 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] DisruptionController - test/e2e/apps/disruption.go:72 - [BeforeEach] Listing PodDisruptionBudgets for all namespaces - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:56:53.936 - Jun 12 21:56:53.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename disruption-2 06/12/23 21:56:53.95 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:54.029 - STEP: 
Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:54.077 - [BeforeEach] Listing PodDisruptionBudgets for all namespaces - test/e2e/framework/metrics/init/init.go:31 - [It] should list and delete a collection of PodDisruptionBudgets [Conformance] - test/e2e/apps/disruption.go:87 - STEP: Waiting for the pdb to be processed 06/12/23 21:56:54.196 - STEP: Waiting for the pdb to be processed 06/12/23 21:56:54.234 - STEP: Waiting for the pdb to be processed 06/12/23 21:56:54.383 - STEP: listing a collection of PDBs across all namespaces 06/12/23 21:56:54.447 - STEP: listing a collection of PDBs in namespace disruption-8689 06/12/23 21:56:54.472 - STEP: deleting a collection of PDBs 06/12/23 21:56:54.499 - STEP: Waiting for the PDB collection to be deleted 06/12/23 21:56:54.561 - [AfterEach] Listing PodDisruptionBudgets for all namespaces - test/e2e/framework/node/init/init.go:32 - Jun 12 21:56:54.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-apps] DisruptionController + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 + STEP: creating all guestbook components 07/27/23 02:25:23.758 + Jul 27 02:25:23.758: INFO: apiVersion: v1 + kind: Service + metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend + spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + + Jul 27 02:25:23.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' + Jul 27 02:25:24.315: INFO: stderr: "" + Jul 27 02:25:24.315: INFO: stdout: "service/agnhost-replica created\n" + Jul 27 02:25:24.315: INFO: apiVersion: v1 + kind: Service + metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend + spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + + Jul 27 02:25:24.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' + Jul 27 02:25:24.816: INFO: stderr: "" + Jul 27 02:25:24.816: INFO: stdout: "service/agnhost-primary created\n" + Jul 27 02:25:24.816: INFO: apiVersion: v1 + kind: Service + metadata: + name: frontend + labels: + app: guestbook + tier: frontend + spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + + Jul 27 02:25:24.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' + Jul 27 02:25:25.244: INFO: stderr: "" + Jul 27 02:25:25.244: INFO: stdout: "service/frontend created\n" + Jul 27 02:25:25.244: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: frontend + spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + + Jul 27 02:25:25.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' + Jul 27 02:25:25.706: INFO: stderr: "" + Jul 27 02:25:25.706: INFO: stdout: "deployment.apps/frontend created\n" + Jul 27 02:25:25.706: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: agnhost-primary + spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + + Jul 27 02:25:25.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' + Jul 27 02:25:26.113: INFO: stderr: "" + Jul 27 02:25:26.113: INFO: stdout: "deployment.apps/agnhost-primary created\n" + Jul 27 02:25:26.113: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: agnhost-replica + spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + + Jul 27 02:25:26.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 create -f -' + Jul 27 02:25:26.653: INFO: stderr: "" + Jul 27 02:25:26.653: INFO: stdout: "deployment.apps/agnhost-replica created\n" + STEP: validating guestbook app 07/27/23 02:25:26.653 + Jul 27 02:25:26.654: INFO: Waiting for all frontend pods to be Running. + Jul 27 02:25:31.708: INFO: Waiting for frontend to serve content. + Jul 27 02:25:31.742: INFO: Trying to add a new entry to the guestbook. + Jul 27 02:25:31.786: INFO: Verifying that added entry can be retrieved. + STEP: using delete to clean up resources 07/27/23 02:25:31.807 + Jul 27 02:25:31.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' + Jul 27 02:25:31.938: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" + Jul 27 02:25:31.938: INFO: stdout: "service \"agnhost-replica\" force deleted\n" + STEP: using delete to clean up resources 07/27/23 02:25:31.938 + Jul 27 02:25:31.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' + Jul 27 02:25:32.062: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jul 27 02:25:32.063: INFO: stdout: "service \"agnhost-primary\" force deleted\n" + STEP: using delete to clean up resources 07/27/23 02:25:32.063 + Jul 27 02:25:32.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' + Jul 27 02:25:32.190: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jul 27 02:25:32.190: INFO: stdout: "service \"frontend\" force deleted\n" + STEP: using delete to clean up resources 07/27/23 02:25:32.191 + Jul 27 02:25:32.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' + Jul 27 02:25:32.292: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jul 27 02:25:32.292: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" + STEP: using delete to clean up resources 07/27/23 02:25:32.292 + Jul 27 02:25:32.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' + Jul 27 02:25:32.394: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jul 27 02:25:32.394: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" + STEP: using delete to clean up resources 07/27/23 02:25:32.394 + Jul 27 02:25:32.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7186 delete --grace-period=0 --force -f -' + Jul 27 02:25:32.481: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jul 27 02:25:32.481: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 21:56:54.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces - dump namespaces | framework.go:196 - [DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces - tear down framework | framework.go:193 - STEP: Destroying namespace "disruption-2-2944" for this suite. 
06/12/23 21:56:54.702 - [DeferCleanup (Each)] [sig-apps] DisruptionController + Jul 27 02:25:32.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] DisruptionController + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] DisruptionController + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "disruption-8689" for this suite. 06/12/23 21:56:54.737 + STEP: Destroying namespace "kubectl-7186" for this suite. 07/27/23 02:25:32.546 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +S ------------------------------ -[sig-node] Container Runtime blackbox test on terminated container - should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:195 -[BeforeEach] [sig-node] Container Runtime +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 +[BeforeEach] [sig-node] Security Context set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:56:54.959 -Jun 12 21:56:54.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-runtime 06/12/23 21:56:54.962 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:55.103 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:55.147 -[BeforeEach] [sig-node] Container Runtime +STEP: Creating a kubernetes client 07/27/23 02:25:32.57 +Jul 27 02:25:32.571: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename security-context-test 07/27/23 02:25:32.571 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:25:32.623 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:25:32.668 +[BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 -[It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:195 -STEP: create the container 06/12/23 21:56:55.201 -STEP: wait for the container to reach Succeeded 06/12/23 21:56:55.234 -STEP: get the container status 06/12/23 21:57:01.363 -STEP: the container should be terminated 06/12/23 21:57:01.371 -STEP: the termination message should be set 06/12/23 21:57:01.371 -Jun 12 21:57:01.371: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- -STEP: delete the container 06/12/23 21:57:01.371 -[AfterEach] [sig-node] Container Runtime +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 +Jul 27 02:25:32.738: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-621167e9-4f6c-4e62-b0ab-6d8f32809114" in namespace "security-context-test-3276" to be "Succeeded or Failed" +Jul 27 02:25:32.760: INFO: Pod 
"busybox-readonly-false-621167e9-4f6c-4e62-b0ab-6d8f32809114": Phase="Pending", Reason="", readiness=false. Elapsed: 21.17749ms +Jul 27 02:25:34.769: INFO: Pod "busybox-readonly-false-621167e9-4f6c-4e62-b0ab-6d8f32809114": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030883556s +Jul 27 02:25:36.768: INFO: Pod "busybox-readonly-false-621167e9-4f6c-4e62-b0ab-6d8f32809114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029508198s +Jul 27 02:25:36.768: INFO: Pod "busybox-readonly-false-621167e9-4f6c-4e62-b0ab-6d8f32809114" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:01.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Container Runtime +Jul 27 02:25:36.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Container Runtime +[DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Container Runtime +[DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 -STEP: Destroying namespace "container-runtime-1086" for this suite. 06/12/23 21:57:01.422 +STEP: Destroying namespace "security-context-test-3276" for this suite. 07/27/23 02:25:36.781 ------------------------------ -• [SLOW TEST] [6.484 seconds] -[sig-node] Container Runtime +• [4.233 seconds] +[sig-node] Security Context test/e2e/common/node/framework.go:23 - blackbox test - test/e2e/common/node/runtime.go:44 - on terminated container - test/e2e/common/node/runtime.go:137 - should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:195 + When creating a pod with readOnlyRootFilesystem + test/e2e/common/node/security_context.go:430 + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Container Runtime + [BeforeEach] [sig-node] Security Context set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:56:54.959 - Jun 12 21:56:54.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-runtime 06/12/23 21:56:54.962 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:56:55.103 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:56:55.147 - [BeforeEach] [sig-node] Container Runtime + STEP: Creating a kubernetes client 07/27/23 02:25:32.57 + Jul 27 02:25:32.571: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename security-context-test 07/27/23 02:25:32.571 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:25:32.623 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:25:32.668 + [BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 - [It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:195 - STEP: create the container 06/12/23 21:56:55.201 - STEP: wait for the 
container to reach Succeeded 06/12/23 21:56:55.234 - STEP: get the container status 06/12/23 21:57:01.363 - STEP: the container should be terminated 06/12/23 21:57:01.371 - STEP: the termination message should be set 06/12/23 21:57:01.371 - Jun 12 21:57:01.371: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- - STEP: delete the container 06/12/23 21:57:01.371 - [AfterEach] [sig-node] Container Runtime + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 + Jul 27 02:25:32.738: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-621167e9-4f6c-4e62-b0ab-6d8f32809114" in namespace "security-context-test-3276" to be "Succeeded or Failed" + Jul 27 02:25:32.760: INFO: Pod "busybox-readonly-false-621167e9-4f6c-4e62-b0ab-6d8f32809114": Phase="Pending", Reason="", readiness=false. Elapsed: 21.17749ms + Jul 27 02:25:34.769: INFO: Pod "busybox-readonly-false-621167e9-4f6c-4e62-b0ab-6d8f32809114": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030883556s + Jul 27 02:25:36.768: INFO: Pod "busybox-readonly-false-621167e9-4f6c-4e62-b0ab-6d8f32809114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029508198s + Jul 27 02:25:36.768: INFO: Pod "busybox-readonly-false-621167e9-4f6c-4e62-b0ab-6d8f32809114" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:01.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Container Runtime + Jul 27 02:25:36.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Container Runtime + [DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Container Runtime + [DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 - STEP: Destroying namespace "container-runtime-1086" for this suite. 06/12/23 21:57:01.422 + STEP: Destroying namespace "security-context-test-3276" for this suite. 
07/27/23 02:25:36.781 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Secrets - should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:57 -[BeforeEach] [sig-storage] Secrets +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:199 +[BeforeEach] [sig-node] Probing container set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:01.448 -Jun 12 21:57:01.448: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 21:57:01.449 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:01.509 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:01.524 -[BeforeEach] [sig-storage] Secrets +STEP: Creating a kubernetes client 07/27/23 02:25:36.806 +Jul 27 02:25:36.806: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-probe 07/27/23 02:25:36.806 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:25:36.845 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:25:36.856 +[BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:57 -STEP: Creating secret with name secret-test-46d0066d-5f4c-4d4a-982d-f57d79b5304c 06/12/23 21:57:01.54 -STEP: Creating a pod to test consume secrets 06/12/23 21:57:01.561 -Jun 12 21:57:01.584: INFO: Waiting up to 5m0s for pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03" in namespace "secrets-6236" to be "Succeeded or Failed" -Jun 12 21:57:01.593: INFO: Pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03": Phase="Pending", Reason="", readiness=false. Elapsed: 9.309616ms -Jun 12 21:57:03.602: INFO: Pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018316407s -Jun 12 21:57:05.606: INFO: Pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022200747s -Jun 12 21:57:07.603: INFO: Pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019413427s -STEP: Saw pod success 06/12/23 21:57:07.603 -Jun 12 21:57:07.604: INFO: Pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03" satisfied condition "Succeeded or Failed" -Jun 12 21:57:07.612: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03 container secret-volume-test: -STEP: delete the pod 06/12/23 21:57:07.633 -Jun 12 21:57:07.652: INFO: Waiting for pod pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03 to disappear -Jun 12 21:57:07.660: INFO: Pod pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03 no longer exists -[AfterEach] [sig-storage] Secrets +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:199 +STEP: Creating pod liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 in namespace container-probe-6211 07/27/23 02:25:36.865 +W0727 02:25:36.902104 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:25:36.902: INFO: Waiting up to 5m0s for pod "liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7" in namespace "container-probe-6211" to be "not pending" +Jul 27 02:25:36.921: INFO: Pod "liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.190042ms +Jul 27 02:25:38.935: INFO: Pod "liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.033295915s +Jul 27 02:25:38.935: INFO: Pod "liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7" satisfied condition "not pending" +Jul 27 02:25:38.935: INFO: Started pod liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 in namespace container-probe-6211 +STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 02:25:38.935 +Jul 27 02:25:38.948: INFO: Initial restart count of pod liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is 0 +Jul 27 02:25:59.065: INFO: Restart count of pod container-probe-6211/liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is now 1 (20.1172994s elapsed) +Jul 27 02:26:19.261: INFO: Restart count of pod container-probe-6211/liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is now 2 (40.312545287s elapsed) +Jul 27 02:26:39.429: INFO: Restart count of pod container-probe-6211/liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is now 3 (1m0.481143186s elapsed) +Jul 27 02:26:59.563: INFO: Restart count of pod container-probe-6211/liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is now 4 (1m20.615268418s elapsed) +Jul 27 02:28:00.054: INFO: Restart count of pod container-probe-6211/liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is now 5 (2m21.106254423s elapsed) +STEP: deleting the pod 07/27/23 02:28:00.054 +[AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:07.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Secrets +Jul 27 02:28:00.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-6236" for this suite. 06/12/23 21:57:07.677 +STEP: Destroying namespace "container-probe-6211" for this suite. 
07/27/23 02:28:00.106 ------------------------------ -• [SLOW TEST] [6.251 seconds] -[sig-storage] Secrets -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:57 +• [SLOW TEST] [143.321 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:199 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Secrets + [BeforeEach] [sig-node] Probing container set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:01.448 - Jun 12 21:57:01.448: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 21:57:01.449 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:01.509 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:01.524 - [BeforeEach] [sig-storage] Secrets + STEP: Creating a kubernetes client 07/27/23 02:25:36.806 + Jul 27 02:25:36.806: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-probe 07/27/23 02:25:36.806 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:25:36.845 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:25:36.856 + [BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:57 - STEP: Creating secret with name secret-test-46d0066d-5f4c-4d4a-982d-f57d79b5304c 06/12/23 21:57:01.54 - STEP: Creating a pod to test consume secrets 06/12/23 21:57:01.561 - Jun 12 21:57:01.584: INFO: Waiting up to 5m0s for pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03" in namespace "secrets-6236" to be "Succeeded or Failed" - Jun 12 21:57:01.593: INFO: Pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03": Phase="Pending", Reason="", readiness=false. Elapsed: 9.309616ms - Jun 12 21:57:03.602: INFO: Pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018316407s - Jun 12 21:57:05.606: INFO: Pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022200747s - Jun 12 21:57:07.603: INFO: Pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019413427s - STEP: Saw pod success 06/12/23 21:57:07.603 - Jun 12 21:57:07.604: INFO: Pod "pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03" satisfied condition "Succeeded or Failed" - Jun 12 21:57:07.612: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03 container secret-volume-test: - STEP: delete the pod 06/12/23 21:57:07.633 - Jun 12 21:57:07.652: INFO: Waiting for pod pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03 to disappear - Jun 12 21:57:07.660: INFO: Pod pod-secrets-19c2e383-5102-4160-9955-a2f24dc20f03 no longer exists - [AfterEach] [sig-storage] Secrets + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:199 + STEP: Creating pod liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 in namespace container-probe-6211 07/27/23 02:25:36.865 + W0727 02:25:36.902104 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:25:36.902: INFO: Waiting up to 5m0s for pod "liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7" in namespace "container-probe-6211" to be "not pending" + Jul 27 02:25:36.921: INFO: Pod "liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.190042ms + Jul 27 02:25:38.935: INFO: Pod "liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.033295915s + Jul 27 02:25:38.935: INFO: Pod "liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7" satisfied condition "not pending" + Jul 27 02:25:38.935: INFO: Started pod liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 in namespace container-probe-6211 + STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 02:25:38.935 + Jul 27 02:25:38.948: INFO: Initial restart count of pod liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is 0 + Jul 27 02:25:59.065: INFO: Restart count of pod container-probe-6211/liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is now 1 (20.1172994s elapsed) + Jul 27 02:26:19.261: INFO: Restart count of pod container-probe-6211/liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is now 2 (40.312545287s elapsed) + Jul 27 02:26:39.429: INFO: Restart count of pod container-probe-6211/liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is now 3 (1m0.481143186s elapsed) + Jul 27 02:26:59.563: INFO: Restart count of pod container-probe-6211/liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is now 4 (1m20.615268418s elapsed) + Jul 27 02:28:00.054: INFO: Restart count of pod container-probe-6211/liveness-6344fcc9-d6d8-416d-9d69-41315a5e37c7 is now 5 (2m21.106254423s elapsed) + STEP: deleting the pod 07/27/23 02:28:00.054 + [AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:07.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Secrets + Jul 27 02:28:00.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-6236" for this suite. 06/12/23 21:57:07.677 + STEP: Destroying namespace "container-probe-6211" for this suite. 
07/27/23 02:28:00.106 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +SSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] Service endpoints latency - should not be very high [Conformance] - test/e2e/network/service_latency.go:59 -[BeforeEach] [sig-network] Service endpoints latency +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:07.7 -Jun 12 21:57:07.700: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename svc-latency 06/12/23 21:57:07.703 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:07.755 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:07.765 -[BeforeEach] [sig-network] Service endpoints latency +STEP: Creating a kubernetes client 07/27/23 02:28:00.128 +Jul 27 02:28:00.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:28:00.13 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:00.171 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:00.183 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[It] should not be very high [Conformance] - test/e2e/network/service_latency.go:59 -Jun 12 21:57:07.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: creating replication controller svc-latency-rc in namespace svc-latency-193 06/12/23 21:57:07.806 -I0612 21:57:07.823664 23 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-193, replica count: 1 -I0612 21:57:08.874389 23 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:57:09.882040 23 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:57:10.890035 23 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -Jun 12 21:57:11.073: INFO: Created: latency-svc-lxw4v -Jun 12 21:57:11.096: INFO: Got endpoints: latency-svc-lxw4v [72.242759ms] -Jun 12 21:57:11.141: INFO: Created: latency-svc-g44kc -Jun 12 21:57:11.150: INFO: Got endpoints: latency-svc-g44kc [47.23959ms] -Jun 12 21:57:11.168: INFO: Created: latency-svc-cv68s -Jun 12 21:57:11.207: INFO: Created: latency-svc-fqhrt -Jun 12 21:57:11.207: INFO: Got endpoints: latency-svc-fqhrt [103.158006ms] -Jun 12 21:57:11.207: INFO: Got endpoints: latency-svc-cv68s [104.722523ms] -Jun 12 21:57:11.210: INFO: Created: latency-svc-vxmr2 -Jun 12 21:57:11.210: INFO: Got endpoints: latency-svc-vxmr2 [104.198239ms] -Jun 12 21:57:11.219: INFO: Created: latency-svc-9dkqh -Jun 12 21:57:11.226: INFO: Got endpoints: latency-svc-9dkqh [118.680644ms] -Jun 12 21:57:11.239: INFO: Created: latency-svc-nlkct -Jun 12 21:57:11.270: INFO: Created: latency-svc-5zcdv -Jun 12 21:57:11.270: INFO: Created: latency-svc-lgsfh -Jun 12 21:57:11.270: INFO: Got endpoints: latency-svc-5zcdv [163.518362ms] -Jun 12 21:57:11.271: INFO: Got endpoints: latency-svc-nlkct [164.391951ms] -Jun 12 
21:57:11.316: INFO: Created: latency-svc-c2hkv -Jun 12 21:57:11.326: INFO: Created: latency-svc-wbp8l -Jun 12 21:57:11.326: INFO: Got endpoints: latency-svc-c2hkv [220.08255ms] -Jun 12 21:57:11.327: INFO: Got endpoints: latency-svc-lgsfh [222.461774ms] -Jun 12 21:57:11.328: INFO: Got endpoints: latency-svc-wbp8l [224.460963ms] -Jun 12 21:57:11.328: INFO: Created: latency-svc-fl8xk -Jun 12 21:57:11.330: INFO: Got endpoints: latency-svc-fl8xk [225.046093ms] -Jun 12 21:57:11.373: INFO: Created: latency-svc-mb2f8 -Jun 12 21:57:11.377: INFO: Created: latency-svc-gl8xb -Jun 12 21:57:11.377: INFO: Created: latency-svc-jc82p -Jun 12 21:57:11.377: INFO: Got endpoints: latency-svc-mb2f8 [271.726832ms] -Jun 12 21:57:11.378: INFO: Got endpoints: latency-svc-jc82p [272.156373ms] -Jun 12 21:57:11.493: INFO: Created: latency-svc-vgn6q -Jun 12 21:57:11.494: INFO: Created: latency-svc-qzhmg -Jun 12 21:57:11.494: INFO: Created: latency-svc-9q65g -Jun 12 21:57:11.495: INFO: Created: latency-svc-8xz5j -Jun 12 21:57:11.495: INFO: Created: latency-svc-6x5vf -Jun 12 21:57:11.511: INFO: Got endpoints: latency-svc-6x5vf [300.370125ms] -Jun 12 21:57:11.512: INFO: Got endpoints: latency-svc-gl8xb [407.214824ms] -Jun 12 21:57:11.513: INFO: Got endpoints: latency-svc-vgn6q [408.557736ms] -Jun 12 21:57:11.516: INFO: Got endpoints: latency-svc-qzhmg [365.692399ms] -Jun 12 21:57:11.517: INFO: Got endpoints: latency-svc-9q65g [309.251032ms] -Jun 12 21:57:11.517: INFO: Got endpoints: latency-svc-8xz5j [310.227938ms] -Jun 12 21:57:11.518: INFO: Created: latency-svc-bqwcw -Jun 12 21:57:11.525: INFO: Created: latency-svc-7qqs9 -Jun 12 21:57:11.525: INFO: Created: latency-svc-kqzwt -Jun 12 21:57:11.529: INFO: Got endpoints: latency-svc-bqwcw [303.390548ms] -Jun 12 21:57:11.533: INFO: Got endpoints: latency-svc-7qqs9 [263.036197ms] -Jun 12 21:57:11.540: INFO: Got endpoints: latency-svc-kqzwt [268.782726ms] -Jun 12 21:57:11.545: INFO: Created: latency-svc-76hlm -Jun 12 21:57:11.555: INFO: Got endpoints: latency-svc-76hlm [228.30534ms] -Jun 12 21:57:11.560: INFO: Created: latency-svc-8jptv -Jun 12 21:57:11.570: INFO: Got endpoints: latency-svc-8jptv [242.861167ms] -Jun 12 21:57:11.817: INFO: Created: latency-svc-rw4pd -Jun 12 21:57:11.833: INFO: Created: latency-svc-msb6d -Jun 12 21:57:11.833: INFO: Created: latency-svc-cw8c6 -Jun 12 21:57:11.834: INFO: Created: latency-svc-6rk8d -Jun 12 21:57:11.834: INFO: Created: latency-svc-9v5s6 -Jun 12 21:57:11.835: INFO: Created: latency-svc-kpvxc -Jun 12 21:57:11.835: INFO: Created: latency-svc-c4x5g -Jun 12 21:57:11.837: INFO: Created: latency-svc-98fz8 -Jun 12 21:57:11.837: INFO: Created: latency-svc-xgxrn -Jun 12 21:57:11.837: INFO: Created: latency-svc-zmgcc -Jun 12 21:57:11.838: INFO: Got endpoints: latency-svc-rw4pd [321.622939ms] -Jun 12 21:57:11.839: INFO: Created: latency-svc-p2tn6 -Jun 12 21:57:11.839: INFO: Created: latency-svc-6xww7 -Jun 12 21:57:11.840: INFO: Created: latency-svc-c6tbf -Jun 12 21:57:11.840: INFO: Created: latency-svc-w7prv -Jun 12 21:57:11.841: INFO: Created: latency-svc-fnn29 -Jun 12 21:57:11.841: INFO: Got endpoints: latency-svc-msb6d [328.169284ms] -Jun 12 21:57:11.844: INFO: Got endpoints: latency-svc-c4x5g [332.660728ms] -Jun 12 21:57:11.844: INFO: Got endpoints: latency-svc-kpvxc [326.927758ms] -Jun 12 21:57:11.846: INFO: Got endpoints: latency-svc-xgxrn [316.87002ms] -Jun 12 21:57:11.847: INFO: Got endpoints: latency-svc-cw8c6 [313.105637ms] -Jun 12 21:57:11.848: INFO: Got endpoints: latency-svc-6rk8d [308.152991ms] -Jun 12 21:57:11.849: INFO: Got 
endpoints: latency-svc-9v5s6 [278.427438ms] -Jun 12 21:57:11.851: INFO: Got endpoints: latency-svc-zmgcc [521.233822ms] -Jun 12 21:57:11.851: INFO: Got endpoints: latency-svc-98fz8 [338.820254ms] -Jun 12 21:57:11.852: INFO: Got endpoints: latency-svc-6xww7 [335.444125ms] -Jun 12 21:57:11.855: INFO: Got endpoints: latency-svc-fnn29 [525.445897ms] -Jun 12 21:57:11.859: INFO: Got endpoints: latency-svc-c6tbf [304.243316ms] -Jun 12 21:57:11.860: INFO: Got endpoints: latency-svc-w7prv [481.892906ms] -Jun 12 21:57:11.862: INFO: Got endpoints: latency-svc-p2tn6 [484.222499ms] -Jun 12 21:57:11.905: INFO: Created: latency-svc-2985p -Jun 12 21:57:11.907: INFO: Created: latency-svc-qbvg9 -Jun 12 21:57:11.907: INFO: Got endpoints: latency-svc-2985p [69.016876ms] -Jun 12 21:57:11.907: INFO: Got endpoints: latency-svc-qbvg9 [65.975818ms] -Jun 12 21:57:11.933: INFO: Created: latency-svc-rl2m6 -Jun 12 21:57:11.933: INFO: Got endpoints: latency-svc-rl2m6 [89.615977ms] -Jun 12 21:57:11.952: INFO: Created: latency-svc-8k577 -Jun 12 21:57:11.952: INFO: Got endpoints: latency-svc-8k577 [107.983533ms] -Jun 12 21:57:11.953: INFO: Created: latency-svc-sr7j2 -Jun 12 21:57:11.960: INFO: Got endpoints: latency-svc-sr7j2 [113.508528ms] -Jun 12 21:57:11.969: INFO: Created: latency-svc-s8qcb -Jun 12 21:57:11.980: INFO: Got endpoints: latency-svc-s8qcb [132.599494ms] -Jun 12 21:57:11.990: INFO: Created: latency-svc-mxwqz -Jun 12 21:57:11.998: INFO: Got endpoints: latency-svc-mxwqz [149.46293ms] -Jun 12 21:57:12.008: INFO: Created: latency-svc-c962q -Jun 12 21:57:12.015: INFO: Got endpoints: latency-svc-c962q [166.015222ms] -Jun 12 21:57:12.023: INFO: Created: latency-svc-p6xhv -Jun 12 21:57:12.033: INFO: Got endpoints: latency-svc-p6xhv [182.378756ms] -Jun 12 21:57:12.044: INFO: Created: latency-svc-rmf6f -Jun 12 21:57:12.053: INFO: Got endpoints: latency-svc-rmf6f [201.41833ms] -Jun 12 21:57:12.059: INFO: Created: latency-svc-xc6r8 -Jun 12 21:57:12.071: INFO: Got endpoints: latency-svc-xc6r8 [218.555903ms] -Jun 12 21:57:12.079: INFO: Created: latency-svc-lb56c -Jun 12 21:57:12.089: INFO: Got endpoints: latency-svc-lb56c [233.788065ms] -Jun 12 21:57:12.101: INFO: Created: latency-svc-4gl79 -Jun 12 21:57:12.119: INFO: Got endpoints: latency-svc-4gl79 [259.565549ms] -Jun 12 21:57:12.119: INFO: Created: latency-svc-dxfh6 -Jun 12 21:57:12.125: INFO: Got endpoints: latency-svc-dxfh6 [264.941643ms] -Jun 12 21:57:12.131: INFO: Created: latency-svc-f5sk8 -Jun 12 21:57:12.141: INFO: Got endpoints: latency-svc-f5sk8 [279.256257ms] -Jun 12 21:57:12.148: INFO: Created: latency-svc-ctdrw -Jun 12 21:57:12.160: INFO: Got endpoints: latency-svc-ctdrw [252.593403ms] -Jun 12 21:57:12.173: INFO: Created: latency-svc-h862x -Jun 12 21:57:12.195: INFO: Got endpoints: latency-svc-h862x [287.673602ms] -Jun 12 21:57:12.198: INFO: Created: latency-svc-jqt84 -Jun 12 21:57:12.215: INFO: Got endpoints: latency-svc-jqt84 [281.094011ms] -Jun 12 21:57:12.217: INFO: Created: latency-svc-t55wq -Jun 12 21:57:12.230: INFO: Got endpoints: latency-svc-t55wq [277.491416ms] -Jun 12 21:57:12.232: INFO: Created: latency-svc-kn9zr -Jun 12 21:57:12.241: INFO: Got endpoints: latency-svc-kn9zr [281.53342ms] -Jun 12 21:57:12.260: INFO: Created: latency-svc-tv7bb -Jun 12 21:57:12.261: INFO: Got endpoints: latency-svc-tv7bb [280.261511ms] -Jun 12 21:57:12.269: INFO: Created: latency-svc-8thht -Jun 12 21:57:12.278: INFO: Got endpoints: latency-svc-8thht [280.052834ms] -Jun 12 21:57:12.290: INFO: Created: latency-svc-r8g9c -Jun 12 21:57:12.312: INFO: Got endpoints: 
latency-svc-r8g9c [297.457341ms] -Jun 12 21:57:12.314: INFO: Created: latency-svc-frqt2 -Jun 12 21:57:12.354: INFO: Got endpoints: latency-svc-frqt2 [321.067731ms] -Jun 12 21:57:12.354: INFO: Created: latency-svc-bl6x5 -Jun 12 21:57:12.356: INFO: Got endpoints: latency-svc-bl6x5 [303.581238ms] -Jun 12 21:57:12.358: INFO: Created: latency-svc-g57cl -Jun 12 21:57:12.358: INFO: Got endpoints: latency-svc-g57cl [286.964642ms] -Jun 12 21:57:12.364: INFO: Created: latency-svc-b8nvw -Jun 12 21:57:12.377: INFO: Got endpoints: latency-svc-b8nvw [287.311471ms] -Jun 12 21:57:12.385: INFO: Created: latency-svc-l7z4d -Jun 12 21:57:12.393: INFO: Got endpoints: latency-svc-l7z4d [273.99189ms] -Jun 12 21:57:12.400: INFO: Created: latency-svc-c5bvn -Jun 12 21:57:12.444: INFO: Created: latency-svc-kwwgm -Jun 12 21:57:12.445: INFO: Created: latency-svc-sn22c -Jun 12 21:57:12.445: INFO: Got endpoints: latency-svc-kwwgm [303.459222ms] -Jun 12 21:57:12.447: INFO: Got endpoints: latency-svc-c5bvn [321.889612ms] -Jun 12 21:57:12.487: INFO: Created: latency-svc-26v9n -Jun 12 21:57:12.487: INFO: Got endpoints: latency-svc-26v9n [285.153293ms] -Jun 12 21:57:12.488: INFO: Got endpoints: latency-svc-sn22c [327.944242ms] -Jun 12 21:57:12.488: INFO: Created: latency-svc-fgvgp -Jun 12 21:57:12.490: INFO: Got endpoints: latency-svc-fgvgp [275.359729ms] -Jun 12 21:57:12.496: INFO: Created: latency-svc-tgz6h -Jun 12 21:57:12.515: INFO: Got endpoints: latency-svc-tgz6h [284.642186ms] -Jun 12 21:57:12.516: INFO: Created: latency-svc-gj7c6 -Jun 12 21:57:12.525: INFO: Got endpoints: latency-svc-gj7c6 [282.645079ms] -Jun 12 21:57:12.532: INFO: Created: latency-svc-kmhhs -Jun 12 21:57:12.546: INFO: Got endpoints: latency-svc-kmhhs [284.768252ms] -Jun 12 21:57:12.553: INFO: Created: latency-svc-gvr7v -Jun 12 21:57:12.568: INFO: Created: latency-svc-9z5nd -Jun 12 21:57:12.569: INFO: Got endpoints: latency-svc-gvr7v [290.921251ms] -Jun 12 21:57:12.580: INFO: Got endpoints: latency-svc-9z5nd [267.259748ms] -Jun 12 21:57:12.587: INFO: Created: latency-svc-6s62f -Jun 12 21:57:12.611: INFO: Got endpoints: latency-svc-6s62f [254.150161ms] -Jun 12 21:57:12.613: INFO: Created: latency-svc-m29gq -Jun 12 21:57:12.616: INFO: Got endpoints: latency-svc-m29gq [261.096577ms] -Jun 12 21:57:12.653: INFO: Created: latency-svc-qp862 -Jun 12 21:57:12.672: INFO: Got endpoints: latency-svc-qp862 [313.409795ms] -Jun 12 21:57:12.679: INFO: Created: latency-svc-twnzc -Jun 12 21:57:12.681: INFO: Got endpoints: latency-svc-twnzc [303.556822ms] -Jun 12 21:57:12.691: INFO: Created: latency-svc-rvf5h -Jun 12 21:57:12.700: INFO: Got endpoints: latency-svc-rvf5h [307.724221ms] -Jun 12 21:57:12.707: INFO: Created: latency-svc-v486p -Jun 12 21:57:12.718: INFO: Got endpoints: latency-svc-v486p [271.390961ms] -Jun 12 21:57:12.726: INFO: Created: latency-svc-k7h9h -Jun 12 21:57:12.744: INFO: Got endpoints: latency-svc-k7h9h [299.074941ms] -Jun 12 21:57:12.748: INFO: Created: latency-svc-6js77 -Jun 12 21:57:12.759: INFO: Got endpoints: latency-svc-6js77 [271.983481ms] -Jun 12 21:57:12.777: INFO: Created: latency-svc-7bfwh -Jun 12 21:57:12.782: INFO: Got endpoints: latency-svc-7bfwh [293.851872ms] -Jun 12 21:57:12.785: INFO: Created: latency-svc-56r52 -Jun 12 21:57:12.796: INFO: Got endpoints: latency-svc-56r52 [305.55353ms] -Jun 12 21:57:12.806: INFO: Created: latency-svc-wwqs9 -Jun 12 21:57:12.816: INFO: Got endpoints: latency-svc-wwqs9 [300.729282ms] -Jun 12 21:57:12.825: INFO: Created: latency-svc-x48x6 -Jun 12 21:57:12.832: INFO: Got endpoints: latency-svc-x48x6 
[307.401453ms] -Jun 12 21:57:12.847: INFO: Created: latency-svc-j7dnl -Jun 12 21:57:12.858: INFO: Got endpoints: latency-svc-j7dnl [312.175512ms] -Jun 12 21:57:12.886: INFO: Created: latency-svc-j8xlf -Jun 12 21:57:12.886: INFO: Got endpoints: latency-svc-j8xlf [317.299333ms] -Jun 12 21:57:12.887: INFO: Created: latency-svc-pvzhd -Jun 12 21:57:12.900: INFO: Got endpoints: latency-svc-pvzhd [320.412703ms] -Jun 12 21:57:12.902: INFO: Created: latency-svc-f989z -Jun 12 21:57:12.913: INFO: Got endpoints: latency-svc-f989z [301.822988ms] -Jun 12 21:57:12.946: INFO: Created: latency-svc-ppvgf -Jun 12 21:57:12.960: INFO: Got endpoints: latency-svc-ppvgf [344.094649ms] -Jun 12 21:57:12.967: INFO: Created: latency-svc-6nzbj -Jun 12 21:57:12.995: INFO: Got endpoints: latency-svc-6nzbj [323.759593ms] -Jun 12 21:57:13.012: INFO: Created: latency-svc-nkslj -Jun 12 21:57:13.012: INFO: Created: latency-svc-ptxd2 -Jun 12 21:57:13.018: INFO: Got endpoints: latency-svc-ptxd2 [337.429388ms] -Jun 12 21:57:13.023: INFO: Got endpoints: latency-svc-nkslj [322.86918ms] -Jun 12 21:57:13.040: INFO: Created: latency-svc-hmlrc -Jun 12 21:57:13.051: INFO: Got endpoints: latency-svc-hmlrc [331.651485ms] -Jun 12 21:57:13.052: INFO: Created: latency-svc-cns86 -Jun 12 21:57:13.060: INFO: Got endpoints: latency-svc-cns86 [315.787584ms] -Jun 12 21:57:13.093: INFO: Created: latency-svc-k98td -Jun 12 21:57:13.104: INFO: Got endpoints: latency-svc-k98td [344.436632ms] -Jun 12 21:57:13.116: INFO: Created: latency-svc-7n6z8 -Jun 12 21:57:13.124: INFO: Got endpoints: latency-svc-7n6z8 [342.124515ms] -Jun 12 21:57:13.146: INFO: Created: latency-svc-x9npt -Jun 12 21:57:13.165: INFO: Created: latency-svc-brm2d -Jun 12 21:57:13.165: INFO: Got endpoints: latency-svc-brm2d [349.525177ms] -Jun 12 21:57:13.166: INFO: Got endpoints: latency-svc-x9npt [369.957265ms] -Jun 12 21:57:13.234: INFO: Created: latency-svc-6q2m6 -Jun 12 21:57:13.237: INFO: Created: latency-svc-h65z5 -Jun 12 21:57:13.237: INFO: Got endpoints: latency-svc-h65z5 [404.738924ms] -Jun 12 21:57:13.239: INFO: Created: latency-svc-59whm -Jun 12 21:57:13.239: INFO: Got endpoints: latency-svc-59whm [381.086594ms] -Jun 12 21:57:13.247: INFO: Got endpoints: latency-svc-6q2m6 [360.846691ms] -Jun 12 21:57:13.262: INFO: Created: latency-svc-nbcsb -Jun 12 21:57:13.267: INFO: Got endpoints: latency-svc-nbcsb [366.750281ms] -Jun 12 21:57:13.274: INFO: Created: latency-svc-6lckq -Jun 12 21:57:13.285: INFO: Got endpoints: latency-svc-6lckq [371.083158ms] -Jun 12 21:57:13.374: INFO: Created: latency-svc-7gptt -Jun 12 21:57:13.374: INFO: Got endpoints: latency-svc-7gptt [378.498748ms] -Jun 12 21:57:13.383: INFO: Created: latency-svc-827t8 -Jun 12 21:57:13.383: INFO: Got endpoints: latency-svc-827t8 [423.39384ms] -Jun 12 21:57:13.409: INFO: Created: latency-svc-q2j2m -Jun 12 21:57:13.426: INFO: Created: latency-svc-cjhq8 -Jun 12 21:57:13.428: INFO: Got endpoints: latency-svc-q2j2m [410.083776ms] -Jun 12 21:57:13.431: INFO: Got endpoints: latency-svc-cjhq8 [407.655592ms] -Jun 12 21:57:13.530: INFO: Created: latency-svc-dx6jm -Jun 12 21:57:13.531: INFO: Created: latency-svc-c552v -Jun 12 21:57:13.532: INFO: Got endpoints: latency-svc-dx6jm [480.877171ms] -Jun 12 21:57:13.532: INFO: Got endpoints: latency-svc-c552v [427.337207ms] -Jun 12 21:57:13.543: INFO: Created: latency-svc-dkswq -Jun 12 21:57:13.549: INFO: Got endpoints: latency-svc-dkswq [489.312788ms] -Jun 12 21:57:13.556: INFO: Created: latency-svc-8x2lk -Jun 12 21:57:13.567: INFO: Got endpoints: latency-svc-8x2lk [443.338056ms] -Jun 
12 21:57:13.648: INFO: Created: latency-svc-vwg88 -Jun 12 21:57:13.648: INFO: Got endpoints: latency-svc-vwg88 [481.584937ms] -Jun 12 21:57:13.728: INFO: Created: latency-svc-rchqp -Jun 12 21:57:13.729: INFO: Got endpoints: latency-svc-rchqp [563.499618ms] -Jun 12 21:57:13.730: INFO: Created: latency-svc-tt4v7 -Jun 12 21:57:13.756: INFO: Got endpoints: latency-svc-tt4v7 [518.870665ms] -Jun 12 21:57:13.760: INFO: Created: latency-svc-sqg99 -Jun 12 21:57:13.774: INFO: Got endpoints: latency-svc-sqg99 [534.766667ms] -Jun 12 21:57:13.780: INFO: Created: latency-svc-lcnbp -Jun 12 21:57:13.807: INFO: Got endpoints: latency-svc-lcnbp [558.44543ms] -Jun 12 21:57:13.807: INFO: Created: latency-svc-8rwpb -Jun 12 21:57:13.822: INFO: Got endpoints: latency-svc-8rwpb [554.18169ms] -Jun 12 21:57:13.834: INFO: Created: latency-svc-7gvrj -Jun 12 21:57:13.840: INFO: Created: latency-svc-khg8j -Jun 12 21:57:13.841: INFO: Got endpoints: latency-svc-7gvrj [537.166755ms] -Jun 12 21:57:13.856: INFO: Got endpoints: latency-svc-khg8j [480.922971ms] -Jun 12 21:57:13.860: INFO: Created: latency-svc-vdx5h -Jun 12 21:57:13.876: INFO: Got endpoints: latency-svc-vdx5h [492.137927ms] -Jun 12 21:57:13.893: INFO: Created: latency-svc-qnwkm -Jun 12 21:57:13.924: INFO: Created: latency-svc-q68pk -Jun 12 21:57:13.924: INFO: Got endpoints: latency-svc-q68pk [493.258396ms] -Jun 12 21:57:13.925: INFO: Got endpoints: latency-svc-qnwkm [496.26894ms] -Jun 12 21:57:13.933: INFO: Created: latency-svc-bgzp7 -Jun 12 21:57:13.967: INFO: Got endpoints: latency-svc-bgzp7 [434.729896ms] -Jun 12 21:57:13.968: INFO: Created: latency-svc-fx5c5 -Jun 12 21:57:13.978: INFO: Created: latency-svc-74rqd -Jun 12 21:57:13.983: INFO: Got endpoints: latency-svc-74rqd [427.840892ms] -Jun 12 21:57:13.983: INFO: Got endpoints: latency-svc-fx5c5 [451.011385ms] -Jun 12 21:57:13.989: INFO: Created: latency-svc-sxxzs -Jun 12 21:57:14.017: INFO: Created: latency-svc-mvckz -Jun 12 21:57:14.021: INFO: Got endpoints: latency-svc-mvckz [373.193313ms] -Jun 12 21:57:14.022: INFO: Got endpoints: latency-svc-sxxzs [454.356835ms] -Jun 12 21:57:14.023: INFO: Created: latency-svc-mknrl -Jun 12 21:57:14.034: INFO: Got endpoints: latency-svc-mknrl [304.792183ms] -Jun 12 21:57:14.041: INFO: Created: latency-svc-6vn5h -Jun 12 21:57:14.052: INFO: Got endpoints: latency-svc-6vn5h [295.700335ms] -Jun 12 21:57:14.068: INFO: Created: latency-svc-mplgk -Jun 12 21:57:14.079: INFO: Got endpoints: latency-svc-mplgk [304.585974ms] -Jun 12 21:57:14.080: INFO: Created: latency-svc-822vg -Jun 12 21:57:14.097: INFO: Got endpoints: latency-svc-822vg [290.196366ms] -Jun 12 21:57:14.103: INFO: Created: latency-svc-d2p99 -Jun 12 21:57:14.109: INFO: Got endpoints: latency-svc-d2p99 [286.885055ms] -Jun 12 21:57:14.119: INFO: Created: latency-svc-tbpk7 -Jun 12 21:57:14.128: INFO: Got endpoints: latency-svc-tbpk7 [287.320723ms] -Jun 12 21:57:14.139: INFO: Created: latency-svc-rxk4j -Jun 12 21:57:14.152: INFO: Got endpoints: latency-svc-rxk4j [296.508855ms] -Jun 12 21:57:14.164: INFO: Created: latency-svc-srhxb -Jun 12 21:57:14.182: INFO: Created: latency-svc-flv26 -Jun 12 21:57:14.185: INFO: Got endpoints: latency-svc-srhxb [308.994448ms] -Jun 12 21:57:14.194: INFO: Got endpoints: latency-svc-flv26 [269.143354ms] -Jun 12 21:57:14.198: INFO: Created: latency-svc-fkhr6 -Jun 12 21:57:14.211: INFO: Got endpoints: latency-svc-fkhr6 [285.777766ms] -Jun 12 21:57:14.218: INFO: Created: latency-svc-b2679 -Jun 12 21:57:14.237: INFO: Got endpoints: latency-svc-b2679 [270.247117ms] -Jun 12 21:57:14.254: 
INFO: Created: latency-svc-vglwv -Jun 12 21:57:14.264: INFO: Got endpoints: latency-svc-vglwv [281.405171ms] -Jun 12 21:57:14.265: INFO: Created: latency-svc-d4pcz -Jun 12 21:57:14.277: INFO: Got endpoints: latency-svc-d4pcz [293.316261ms] -Jun 12 21:57:14.285: INFO: Created: latency-svc-zbg49 -Jun 12 21:57:14.293: INFO: Got endpoints: latency-svc-zbg49 [270.897874ms] -Jun 12 21:57:14.303: INFO: Created: latency-svc-85xst -Jun 12 21:57:14.315: INFO: Got endpoints: latency-svc-85xst [293.998969ms] -Jun 12 21:57:14.321: INFO: Created: latency-svc-dj4v5 -Jun 12 21:57:14.332: INFO: Got endpoints: latency-svc-dj4v5 [297.870152ms] -Jun 12 21:57:14.342: INFO: Created: latency-svc-jdxx8 -Jun 12 21:57:14.368: INFO: Got endpoints: latency-svc-jdxx8 [315.558981ms] -Jun 12 21:57:14.412: INFO: Created: latency-svc-qb5pj -Jun 12 21:57:14.412: INFO: Got endpoints: latency-svc-qb5pj [311.0456ms] -Jun 12 21:57:14.413: INFO: Created: latency-svc-9ncdh -Jun 12 21:57:14.413: INFO: Got endpoints: latency-svc-9ncdh [334.651029ms] -Jun 12 21:57:14.442: INFO: Created: latency-svc-vwzjb -Jun 12 21:57:14.442: INFO: Got endpoints: latency-svc-vwzjb [332.134655ms] -Jun 12 21:57:14.442: INFO: Created: latency-svc-mv5jh -Jun 12 21:57:14.452: INFO: Got endpoints: latency-svc-mv5jh [323.553306ms] -Jun 12 21:57:14.494: INFO: Created: latency-svc-5b2zq -Jun 12 21:57:14.495: INFO: Got endpoints: latency-svc-5b2zq [342.450161ms] -Jun 12 21:57:14.495: INFO: Created: latency-svc-96sgh -Jun 12 21:57:14.526: INFO: Got endpoints: latency-svc-96sgh [340.518367ms] -Jun 12 21:57:14.528: INFO: Created: latency-svc-jr6xp -Jun 12 21:57:14.528: INFO: Got endpoints: latency-svc-jr6xp [333.906881ms] -Jun 12 21:57:14.556: INFO: Created: latency-svc-v8zg2 -Jun 12 21:57:14.557: INFO: Got endpoints: latency-svc-v8zg2 [345.535353ms] -Jun 12 21:57:14.558: INFO: Created: latency-svc-4c9ln -Jun 12 21:57:14.562: INFO: Got endpoints: latency-svc-4c9ln [324.414718ms] -Jun 12 21:57:14.580: INFO: Created: latency-svc-r8grg -Jun 12 21:57:14.590: INFO: Got endpoints: latency-svc-r8grg [326.014601ms] -Jun 12 21:57:14.599: INFO: Created: latency-svc-thv7r -Jun 12 21:57:14.613: INFO: Got endpoints: latency-svc-thv7r [335.606077ms] -Jun 12 21:57:14.618: INFO: Created: latency-svc-d69z9 -Jun 12 21:57:14.632: INFO: Got endpoints: latency-svc-d69z9 [339.160202ms] -Jun 12 21:57:14.640: INFO: Created: latency-svc-clbt5 -Jun 12 21:57:14.652: INFO: Got endpoints: latency-svc-clbt5 [337.287826ms] -Jun 12 21:57:14.660: INFO: Created: latency-svc-jbmlb -Jun 12 21:57:14.674: INFO: Got endpoints: latency-svc-jbmlb [342.063887ms] -Jun 12 21:57:14.690: INFO: Created: latency-svc-2d2pf -Jun 12 21:57:14.691: INFO: Got endpoints: latency-svc-2d2pf [319.303541ms] -Jun 12 21:57:14.713: INFO: Created: latency-svc-mdjh6 -Jun 12 21:57:14.714: INFO: Got endpoints: latency-svc-mdjh6 [301.760276ms] -Jun 12 21:57:14.720: INFO: Created: latency-svc-bh7kr -Jun 12 21:57:14.737: INFO: Got endpoints: latency-svc-bh7kr [323.188708ms] -Jun 12 21:57:14.737: INFO: Created: latency-svc-9cfsm -Jun 12 21:57:14.749: INFO: Got endpoints: latency-svc-9cfsm [306.971117ms] -Jun 12 21:57:14.756: INFO: Created: latency-svc-qb8rr -Jun 12 21:57:14.770: INFO: Got endpoints: latency-svc-qb8rr [318.264793ms] -Jun 12 21:57:14.775: INFO: Created: latency-svc-qlpfc -Jun 12 21:57:14.787: INFO: Got endpoints: latency-svc-qlpfc [292.758503ms] -Jun 12 21:57:14.796: INFO: Created: latency-svc-tpw7m -Jun 12 21:57:14.807: INFO: Got endpoints: latency-svc-tpw7m [280.640147ms] -Jun 12 21:57:14.815: INFO: Created: 
latency-svc-25xnh -Jun 12 21:57:14.827: INFO: Got endpoints: latency-svc-25xnh [298.664697ms] -Jun 12 21:57:14.833: INFO: Created: latency-svc-tqh2h -Jun 12 21:57:14.848: INFO: Got endpoints: latency-svc-tqh2h [291.067453ms] -Jun 12 21:57:14.859: INFO: Created: latency-svc-dxk2l -Jun 12 21:57:14.868: INFO: Got endpoints: latency-svc-dxk2l [305.687124ms] -Jun 12 21:57:14.875: INFO: Created: latency-svc-mxbdn -Jun 12 21:57:14.887: INFO: Got endpoints: latency-svc-mxbdn [295.378956ms] -Jun 12 21:57:14.897: INFO: Created: latency-svc-cc5br -Jun 12 21:57:14.904: INFO: Got endpoints: latency-svc-cc5br [290.913736ms] -Jun 12 21:57:14.914: INFO: Created: latency-svc-xhq7l -Jun 12 21:57:14.926: INFO: Got endpoints: latency-svc-xhq7l [293.953472ms] -Jun 12 21:57:14.931: INFO: Created: latency-svc-jc9b8 -Jun 12 21:57:14.942: INFO: Got endpoints: latency-svc-jc9b8 [289.049518ms] -Jun 12 21:57:14.948: INFO: Created: latency-svc-6c2xz -Jun 12 21:57:14.959: INFO: Got endpoints: latency-svc-6c2xz [284.360846ms] -Jun 12 21:57:14.970: INFO: Created: latency-svc-9j57v -Jun 12 21:57:14.978: INFO: Got endpoints: latency-svc-9j57v [286.778183ms] -Jun 12 21:57:14.986: INFO: Created: latency-svc-ksnt7 -Jun 12 21:57:14.995: INFO: Got endpoints: latency-svc-ksnt7 [280.446568ms] -Jun 12 21:57:15.010: INFO: Created: latency-svc-mfxhk -Jun 12 21:57:15.012: INFO: Got endpoints: latency-svc-mfxhk [275.657056ms] -Jun 12 21:57:15.022: INFO: Created: latency-svc-jgsxt -Jun 12 21:57:15.033: INFO: Got endpoints: latency-svc-jgsxt [284.019219ms] -Jun 12 21:57:15.042: INFO: Created: latency-svc-jqxp7 -Jun 12 21:57:15.052: INFO: Got endpoints: latency-svc-jqxp7 [281.610523ms] -Jun 12 21:57:15.059: INFO: Created: latency-svc-pf5fn -Jun 12 21:57:15.069: INFO: Got endpoints: latency-svc-pf5fn [281.431959ms] -Jun 12 21:57:15.084: INFO: Created: latency-svc-c4cdc -Jun 12 21:57:15.090: INFO: Got endpoints: latency-svc-c4cdc [283.095448ms] -Jun 12 21:57:15.101: INFO: Created: latency-svc-2v576 -Jun 12 21:57:15.111: INFO: Got endpoints: latency-svc-2v576 [283.72439ms] -Jun 12 21:57:15.120: INFO: Created: latency-svc-tr54k -Jun 12 21:57:15.132: INFO: Got endpoints: latency-svc-tr54k [284.002686ms] -Jun 12 21:57:15.138: INFO: Created: latency-svc-ml4sb -Jun 12 21:57:15.152: INFO: Got endpoints: latency-svc-ml4sb [284.222001ms] -Jun 12 21:57:15.163: INFO: Created: latency-svc-q44qr -Jun 12 21:57:15.169: INFO: Got endpoints: latency-svc-q44qr [282.017992ms] -Jun 12 21:57:15.178: INFO: Created: latency-svc-479pk -Jun 12 21:57:15.193: INFO: Got endpoints: latency-svc-479pk [289.018987ms] -Jun 12 21:57:15.198: INFO: Created: latency-svc-tb895 -Jun 12 21:57:15.213: INFO: Got endpoints: latency-svc-tb895 [287.320077ms] -Jun 12 21:57:15.221: INFO: Created: latency-svc-x6xq6 -Jun 12 21:57:15.234: INFO: Got endpoints: latency-svc-x6xq6 [292.177946ms] -Jun 12 21:57:15.244: INFO: Created: latency-svc-kzgnj -Jun 12 21:57:15.250: INFO: Got endpoints: latency-svc-kzgnj [291.761559ms] -Jun 12 21:57:15.288: INFO: Created: latency-svc-lpdds -Jun 12 21:57:15.290: INFO: Got endpoints: latency-svc-lpdds [311.491668ms] -Jun 12 21:57:15.289: INFO: Created: latency-svc-jwbsh -Jun 12 21:57:15.293: INFO: Got endpoints: latency-svc-jwbsh [298.030291ms] -Jun 12 21:57:15.303: INFO: Created: latency-svc-jxc9t -Jun 12 21:57:15.313: INFO: Got endpoints: latency-svc-jxc9t [300.429736ms] -Jun 12 21:57:15.333: INFO: Created: latency-svc-6d22j -Jun 12 21:57:15.337: INFO: Got endpoints: latency-svc-6d22j [304.122658ms] -Jun 12 21:57:15.338: INFO: Created: latency-svc-sbd98 
-Jun 12 21:57:15.352: INFO: Got endpoints: latency-svc-sbd98 [299.613548ms] -Jun 12 21:57:15.366: INFO: Created: latency-svc-t76tp -Jun 12 21:57:15.366: INFO: Got endpoints: latency-svc-t76tp [296.880283ms] -Jun 12 21:57:15.375: INFO: Created: latency-svc-zpg8x -Jun 12 21:57:15.419: INFO: Got endpoints: latency-svc-zpg8x [329.292815ms] -Jun 12 21:57:15.419: INFO: Latencies: [47.23959ms 65.975818ms 69.016876ms 89.615977ms 103.158006ms 104.198239ms 104.722523ms 107.983533ms 113.508528ms 118.680644ms 132.599494ms 149.46293ms 163.518362ms 164.391951ms 166.015222ms 182.378756ms 201.41833ms 218.555903ms 220.08255ms 222.461774ms 224.460963ms 225.046093ms 228.30534ms 233.788065ms 242.861167ms 252.593403ms 254.150161ms 259.565549ms 261.096577ms 263.036197ms 264.941643ms 267.259748ms 268.782726ms 269.143354ms 270.247117ms 270.897874ms 271.390961ms 271.726832ms 271.983481ms 272.156373ms 273.99189ms 275.359729ms 275.657056ms 277.491416ms 278.427438ms 279.256257ms 280.052834ms 280.261511ms 280.446568ms 280.640147ms 281.094011ms 281.405171ms 281.431959ms 281.53342ms 281.610523ms 282.017992ms 282.645079ms 283.095448ms 283.72439ms 284.002686ms 284.019219ms 284.222001ms 284.360846ms 284.642186ms 284.768252ms 285.153293ms 285.777766ms 286.778183ms 286.885055ms 286.964642ms 287.311471ms 287.320077ms 287.320723ms 287.673602ms 289.018987ms 289.049518ms 290.196366ms 290.913736ms 290.921251ms 291.067453ms 291.761559ms 292.177946ms 292.758503ms 293.316261ms 293.851872ms 293.953472ms 293.998969ms 295.378956ms 295.700335ms 296.508855ms 296.880283ms 297.457341ms 297.870152ms 298.030291ms 298.664697ms 299.074941ms 299.613548ms 300.370125ms 300.429736ms 300.729282ms 301.760276ms 301.822988ms 303.390548ms 303.459222ms 303.556822ms 303.581238ms 304.122658ms 304.243316ms 304.585974ms 304.792183ms 305.55353ms 305.687124ms 306.971117ms 307.401453ms 307.724221ms 308.152991ms 308.994448ms 309.251032ms 310.227938ms 311.0456ms 311.491668ms 312.175512ms 313.105637ms 313.409795ms 315.558981ms 315.787584ms 316.87002ms 317.299333ms 318.264793ms 319.303541ms 320.412703ms 321.067731ms 321.622939ms 321.889612ms 322.86918ms 323.188708ms 323.553306ms 323.759593ms 324.414718ms 326.014601ms 326.927758ms 327.944242ms 328.169284ms 329.292815ms 331.651485ms 332.134655ms 332.660728ms 333.906881ms 334.651029ms 335.444125ms 335.606077ms 337.287826ms 337.429388ms 338.820254ms 339.160202ms 340.518367ms 342.063887ms 342.124515ms 342.450161ms 344.094649ms 344.436632ms 345.535353ms 349.525177ms 360.846691ms 365.692399ms 366.750281ms 369.957265ms 371.083158ms 373.193313ms 378.498748ms 381.086594ms 404.738924ms 407.214824ms 407.655592ms 408.557736ms 410.083776ms 423.39384ms 427.337207ms 427.840892ms 434.729896ms 443.338056ms 451.011385ms 454.356835ms 480.877171ms 480.922971ms 481.584937ms 481.892906ms 484.222499ms 489.312788ms 492.137927ms 493.258396ms 496.26894ms 518.870665ms 521.233822ms 525.445897ms 534.766667ms 537.166755ms 554.18169ms 558.44543ms 563.499618ms] -Jun 12 21:57:15.420: INFO: 50 %ile: 301.760276ms -Jun 12 21:57:15.420: INFO: 90 %ile: 443.338056ms -Jun 12 21:57:15.420: INFO: 99 %ile: 558.44543ms -Jun 12 21:57:15.420: INFO: Total sample count: 200 -[AfterEach] [sig-network] Service endpoints latency +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 +STEP: Creating a pod to test emptydir 0777 on tmpfs 07/27/23 02:28:00.191 +Jul 27 02:28:00.222: INFO: Waiting up to 5m0s for pod "pod-60cdf801-fc31-489e-8359-a29b9c99a7d5" in namespace "emptydir-6463" to be 
"Succeeded or Failed" +Jul 27 02:28:00.230: INFO: Pod "pod-60cdf801-fc31-489e-8359-a29b9c99a7d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135841ms +Jul 27 02:28:02.240: INFO: Pod "pod-60cdf801-fc31-489e-8359-a29b9c99a7d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017891076s +Jul 27 02:28:04.240: INFO: Pod "pod-60cdf801-fc31-489e-8359-a29b9c99a7d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017854137s +STEP: Saw pod success 07/27/23 02:28:04.24 +Jul 27 02:28:04.240: INFO: Pod "pod-60cdf801-fc31-489e-8359-a29b9c99a7d5" satisfied condition "Succeeded or Failed" +Jul 27 02:28:04.248: INFO: Trying to get logs from node 10.245.128.19 pod pod-60cdf801-fc31-489e-8359-a29b9c99a7d5 container test-container: +STEP: delete the pod 07/27/23 02:28:04.292 +Jul 27 02:28:04.311: INFO: Waiting for pod pod-60cdf801-fc31-489e-8359-a29b9c99a7d5 to disappear +Jul 27 02:28:04.319: INFO: Pod pod-60cdf801-fc31-489e-8359-a29b9c99a7d5 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:15.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Service endpoints latency +Jul 27 02:28:04.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Service endpoints latency +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Service endpoints latency +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "svc-latency-193" for this suite. 06/12/23 21:57:15.442 +STEP: Destroying namespace "emptydir-6463" for this suite. 
07/27/23 02:28:04.332 ------------------------------ -• [SLOW TEST] [7.767 seconds] -[sig-network] Service endpoints latency -test/e2e/network/common/framework.go:23 - should not be very high [Conformance] - test/e2e/network/service_latency.go:59 +• [4.225 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Service endpoints latency + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:07.7 - Jun 12 21:57:07.700: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename svc-latency 06/12/23 21:57:07.703 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:07.755 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:07.765 - [BeforeEach] [sig-network] Service endpoints latency + STEP: Creating a kubernetes client 07/27/23 02:28:00.128 + Jul 27 02:28:00.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:28:00.13 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:00.171 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:00.183 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [It] should not be very high [Conformance] - test/e2e/network/service_latency.go:59 - Jun 12 21:57:07.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: creating replication controller svc-latency-rc in namespace svc-latency-193 06/12/23 21:57:07.806 - I0612 21:57:07.823664 23 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-193, replica count: 1 - I0612 21:57:08.874389 23 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:57:09.882040 23 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:57:10.890035 23 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - Jun 12 21:57:11.073: INFO: Created: latency-svc-lxw4v - Jun 12 21:57:11.096: INFO: Got endpoints: latency-svc-lxw4v [72.242759ms] - Jun 12 21:57:11.141: INFO: Created: latency-svc-g44kc - Jun 12 21:57:11.150: INFO: Got endpoints: latency-svc-g44kc [47.23959ms] - Jun 12 21:57:11.168: INFO: Created: latency-svc-cv68s - Jun 12 21:57:11.207: INFO: Created: latency-svc-fqhrt - Jun 12 21:57:11.207: INFO: Got endpoints: latency-svc-fqhrt [103.158006ms] - Jun 12 21:57:11.207: INFO: Got endpoints: latency-svc-cv68s [104.722523ms] - Jun 12 21:57:11.210: INFO: Created: latency-svc-vxmr2 - Jun 12 21:57:11.210: INFO: Got endpoints: latency-svc-vxmr2 [104.198239ms] - Jun 12 21:57:11.219: INFO: Created: latency-svc-9dkqh - Jun 12 21:57:11.226: INFO: Got endpoints: latency-svc-9dkqh [118.680644ms] - Jun 12 21:57:11.239: INFO: Created: latency-svc-nlkct - Jun 12 21:57:11.270: INFO: Created: latency-svc-5zcdv - Jun 12 21:57:11.270: INFO: Created: latency-svc-lgsfh - Jun 12 21:57:11.270: INFO: Got endpoints: latency-svc-5zcdv 
[163.518362ms] - Jun 12 21:57:11.271: INFO: Got endpoints: latency-svc-nlkct [164.391951ms] - Jun 12 21:57:11.316: INFO: Created: latency-svc-c2hkv - Jun 12 21:57:11.326: INFO: Created: latency-svc-wbp8l - Jun 12 21:57:11.326: INFO: Got endpoints: latency-svc-c2hkv [220.08255ms] - Jun 12 21:57:11.327: INFO: Got endpoints: latency-svc-lgsfh [222.461774ms] - Jun 12 21:57:11.328: INFO: Got endpoints: latency-svc-wbp8l [224.460963ms] - Jun 12 21:57:11.328: INFO: Created: latency-svc-fl8xk - Jun 12 21:57:11.330: INFO: Got endpoints: latency-svc-fl8xk [225.046093ms] - Jun 12 21:57:11.373: INFO: Created: latency-svc-mb2f8 - Jun 12 21:57:11.377: INFO: Created: latency-svc-gl8xb - Jun 12 21:57:11.377: INFO: Created: latency-svc-jc82p - Jun 12 21:57:11.377: INFO: Got endpoints: latency-svc-mb2f8 [271.726832ms] - Jun 12 21:57:11.378: INFO: Got endpoints: latency-svc-jc82p [272.156373ms] - Jun 12 21:57:11.493: INFO: Created: latency-svc-vgn6q - Jun 12 21:57:11.494: INFO: Created: latency-svc-qzhmg - Jun 12 21:57:11.494: INFO: Created: latency-svc-9q65g - Jun 12 21:57:11.495: INFO: Created: latency-svc-8xz5j - Jun 12 21:57:11.495: INFO: Created: latency-svc-6x5vf - Jun 12 21:57:11.511: INFO: Got endpoints: latency-svc-6x5vf [300.370125ms] - Jun 12 21:57:11.512: INFO: Got endpoints: latency-svc-gl8xb [407.214824ms] - Jun 12 21:57:11.513: INFO: Got endpoints: latency-svc-vgn6q [408.557736ms] - Jun 12 21:57:11.516: INFO: Got endpoints: latency-svc-qzhmg [365.692399ms] - Jun 12 21:57:11.517: INFO: Got endpoints: latency-svc-9q65g [309.251032ms] - Jun 12 21:57:11.517: INFO: Got endpoints: latency-svc-8xz5j [310.227938ms] - Jun 12 21:57:11.518: INFO: Created: latency-svc-bqwcw - Jun 12 21:57:11.525: INFO: Created: latency-svc-7qqs9 - Jun 12 21:57:11.525: INFO: Created: latency-svc-kqzwt - Jun 12 21:57:11.529: INFO: Got endpoints: latency-svc-bqwcw [303.390548ms] - Jun 12 21:57:11.533: INFO: Got endpoints: latency-svc-7qqs9 [263.036197ms] - Jun 12 21:57:11.540: INFO: Got endpoints: latency-svc-kqzwt [268.782726ms] - Jun 12 21:57:11.545: INFO: Created: latency-svc-76hlm - Jun 12 21:57:11.555: INFO: Got endpoints: latency-svc-76hlm [228.30534ms] - Jun 12 21:57:11.560: INFO: Created: latency-svc-8jptv - Jun 12 21:57:11.570: INFO: Got endpoints: latency-svc-8jptv [242.861167ms] - Jun 12 21:57:11.817: INFO: Created: latency-svc-rw4pd - Jun 12 21:57:11.833: INFO: Created: latency-svc-msb6d - Jun 12 21:57:11.833: INFO: Created: latency-svc-cw8c6 - Jun 12 21:57:11.834: INFO: Created: latency-svc-6rk8d - Jun 12 21:57:11.834: INFO: Created: latency-svc-9v5s6 - Jun 12 21:57:11.835: INFO: Created: latency-svc-kpvxc - Jun 12 21:57:11.835: INFO: Created: latency-svc-c4x5g - Jun 12 21:57:11.837: INFO: Created: latency-svc-98fz8 - Jun 12 21:57:11.837: INFO: Created: latency-svc-xgxrn - Jun 12 21:57:11.837: INFO: Created: latency-svc-zmgcc - Jun 12 21:57:11.838: INFO: Got endpoints: latency-svc-rw4pd [321.622939ms] - Jun 12 21:57:11.839: INFO: Created: latency-svc-p2tn6 - Jun 12 21:57:11.839: INFO: Created: latency-svc-6xww7 - Jun 12 21:57:11.840: INFO: Created: latency-svc-c6tbf - Jun 12 21:57:11.840: INFO: Created: latency-svc-w7prv - Jun 12 21:57:11.841: INFO: Created: latency-svc-fnn29 - Jun 12 21:57:11.841: INFO: Got endpoints: latency-svc-msb6d [328.169284ms] - Jun 12 21:57:11.844: INFO: Got endpoints: latency-svc-c4x5g [332.660728ms] - Jun 12 21:57:11.844: INFO: Got endpoints: latency-svc-kpvxc [326.927758ms] - Jun 12 21:57:11.846: INFO: Got endpoints: latency-svc-xgxrn [316.87002ms] - Jun 12 21:57:11.847: INFO: Got 
endpoints: latency-svc-cw8c6 [313.105637ms] - Jun 12 21:57:11.848: INFO: Got endpoints: latency-svc-6rk8d [308.152991ms] - Jun 12 21:57:11.849: INFO: Got endpoints: latency-svc-9v5s6 [278.427438ms] - Jun 12 21:57:11.851: INFO: Got endpoints: latency-svc-zmgcc [521.233822ms] - Jun 12 21:57:11.851: INFO: Got endpoints: latency-svc-98fz8 [338.820254ms] - Jun 12 21:57:11.852: INFO: Got endpoints: latency-svc-6xww7 [335.444125ms] - Jun 12 21:57:11.855: INFO: Got endpoints: latency-svc-fnn29 [525.445897ms] - Jun 12 21:57:11.859: INFO: Got endpoints: latency-svc-c6tbf [304.243316ms] - Jun 12 21:57:11.860: INFO: Got endpoints: latency-svc-w7prv [481.892906ms] - Jun 12 21:57:11.862: INFO: Got endpoints: latency-svc-p2tn6 [484.222499ms] - Jun 12 21:57:11.905: INFO: Created: latency-svc-2985p - Jun 12 21:57:11.907: INFO: Created: latency-svc-qbvg9 - Jun 12 21:57:11.907: INFO: Got endpoints: latency-svc-2985p [69.016876ms] - Jun 12 21:57:11.907: INFO: Got endpoints: latency-svc-qbvg9 [65.975818ms] - Jun 12 21:57:11.933: INFO: Created: latency-svc-rl2m6 - Jun 12 21:57:11.933: INFO: Got endpoints: latency-svc-rl2m6 [89.615977ms] - Jun 12 21:57:11.952: INFO: Created: latency-svc-8k577 - Jun 12 21:57:11.952: INFO: Got endpoints: latency-svc-8k577 [107.983533ms] - Jun 12 21:57:11.953: INFO: Created: latency-svc-sr7j2 - Jun 12 21:57:11.960: INFO: Got endpoints: latency-svc-sr7j2 [113.508528ms] - Jun 12 21:57:11.969: INFO: Created: latency-svc-s8qcb - Jun 12 21:57:11.980: INFO: Got endpoints: latency-svc-s8qcb [132.599494ms] - Jun 12 21:57:11.990: INFO: Created: latency-svc-mxwqz - Jun 12 21:57:11.998: INFO: Got endpoints: latency-svc-mxwqz [149.46293ms] - Jun 12 21:57:12.008: INFO: Created: latency-svc-c962q - Jun 12 21:57:12.015: INFO: Got endpoints: latency-svc-c962q [166.015222ms] - Jun 12 21:57:12.023: INFO: Created: latency-svc-p6xhv - Jun 12 21:57:12.033: INFO: Got endpoints: latency-svc-p6xhv [182.378756ms] - Jun 12 21:57:12.044: INFO: Created: latency-svc-rmf6f - Jun 12 21:57:12.053: INFO: Got endpoints: latency-svc-rmf6f [201.41833ms] - Jun 12 21:57:12.059: INFO: Created: latency-svc-xc6r8 - Jun 12 21:57:12.071: INFO: Got endpoints: latency-svc-xc6r8 [218.555903ms] - Jun 12 21:57:12.079: INFO: Created: latency-svc-lb56c - Jun 12 21:57:12.089: INFO: Got endpoints: latency-svc-lb56c [233.788065ms] - Jun 12 21:57:12.101: INFO: Created: latency-svc-4gl79 - Jun 12 21:57:12.119: INFO: Got endpoints: latency-svc-4gl79 [259.565549ms] - Jun 12 21:57:12.119: INFO: Created: latency-svc-dxfh6 - Jun 12 21:57:12.125: INFO: Got endpoints: latency-svc-dxfh6 [264.941643ms] - Jun 12 21:57:12.131: INFO: Created: latency-svc-f5sk8 - Jun 12 21:57:12.141: INFO: Got endpoints: latency-svc-f5sk8 [279.256257ms] - Jun 12 21:57:12.148: INFO: Created: latency-svc-ctdrw - Jun 12 21:57:12.160: INFO: Got endpoints: latency-svc-ctdrw [252.593403ms] - Jun 12 21:57:12.173: INFO: Created: latency-svc-h862x - Jun 12 21:57:12.195: INFO: Got endpoints: latency-svc-h862x [287.673602ms] - Jun 12 21:57:12.198: INFO: Created: latency-svc-jqt84 - Jun 12 21:57:12.215: INFO: Got endpoints: latency-svc-jqt84 [281.094011ms] - Jun 12 21:57:12.217: INFO: Created: latency-svc-t55wq - Jun 12 21:57:12.230: INFO: Got endpoints: latency-svc-t55wq [277.491416ms] - Jun 12 21:57:12.232: INFO: Created: latency-svc-kn9zr - Jun 12 21:57:12.241: INFO: Got endpoints: latency-svc-kn9zr [281.53342ms] - Jun 12 21:57:12.260: INFO: Created: latency-svc-tv7bb - Jun 12 21:57:12.261: INFO: Got endpoints: latency-svc-tv7bb [280.261511ms] - Jun 12 21:57:12.269: INFO: 
Created: latency-svc-8thht - Jun 12 21:57:12.278: INFO: Got endpoints: latency-svc-8thht [280.052834ms] - Jun 12 21:57:12.290: INFO: Created: latency-svc-r8g9c - Jun 12 21:57:12.312: INFO: Got endpoints: latency-svc-r8g9c [297.457341ms] - Jun 12 21:57:12.314: INFO: Created: latency-svc-frqt2 - Jun 12 21:57:12.354: INFO: Got endpoints: latency-svc-frqt2 [321.067731ms] - Jun 12 21:57:12.354: INFO: Created: latency-svc-bl6x5 - Jun 12 21:57:12.356: INFO: Got endpoints: latency-svc-bl6x5 [303.581238ms] - Jun 12 21:57:12.358: INFO: Created: latency-svc-g57cl - Jun 12 21:57:12.358: INFO: Got endpoints: latency-svc-g57cl [286.964642ms] - Jun 12 21:57:12.364: INFO: Created: latency-svc-b8nvw - Jun 12 21:57:12.377: INFO: Got endpoints: latency-svc-b8nvw [287.311471ms] - Jun 12 21:57:12.385: INFO: Created: latency-svc-l7z4d - Jun 12 21:57:12.393: INFO: Got endpoints: latency-svc-l7z4d [273.99189ms] - Jun 12 21:57:12.400: INFO: Created: latency-svc-c5bvn - Jun 12 21:57:12.444: INFO: Created: latency-svc-kwwgm - Jun 12 21:57:12.445: INFO: Created: latency-svc-sn22c - Jun 12 21:57:12.445: INFO: Got endpoints: latency-svc-kwwgm [303.459222ms] - Jun 12 21:57:12.447: INFO: Got endpoints: latency-svc-c5bvn [321.889612ms] - Jun 12 21:57:12.487: INFO: Created: latency-svc-26v9n - Jun 12 21:57:12.487: INFO: Got endpoints: latency-svc-26v9n [285.153293ms] - Jun 12 21:57:12.488: INFO: Got endpoints: latency-svc-sn22c [327.944242ms] - Jun 12 21:57:12.488: INFO: Created: latency-svc-fgvgp - Jun 12 21:57:12.490: INFO: Got endpoints: latency-svc-fgvgp [275.359729ms] - Jun 12 21:57:12.496: INFO: Created: latency-svc-tgz6h - Jun 12 21:57:12.515: INFO: Got endpoints: latency-svc-tgz6h [284.642186ms] - Jun 12 21:57:12.516: INFO: Created: latency-svc-gj7c6 - Jun 12 21:57:12.525: INFO: Got endpoints: latency-svc-gj7c6 [282.645079ms] - Jun 12 21:57:12.532: INFO: Created: latency-svc-kmhhs - Jun 12 21:57:12.546: INFO: Got endpoints: latency-svc-kmhhs [284.768252ms] - Jun 12 21:57:12.553: INFO: Created: latency-svc-gvr7v - Jun 12 21:57:12.568: INFO: Created: latency-svc-9z5nd - Jun 12 21:57:12.569: INFO: Got endpoints: latency-svc-gvr7v [290.921251ms] - Jun 12 21:57:12.580: INFO: Got endpoints: latency-svc-9z5nd [267.259748ms] - Jun 12 21:57:12.587: INFO: Created: latency-svc-6s62f - Jun 12 21:57:12.611: INFO: Got endpoints: latency-svc-6s62f [254.150161ms] - Jun 12 21:57:12.613: INFO: Created: latency-svc-m29gq - Jun 12 21:57:12.616: INFO: Got endpoints: latency-svc-m29gq [261.096577ms] - Jun 12 21:57:12.653: INFO: Created: latency-svc-qp862 - Jun 12 21:57:12.672: INFO: Got endpoints: latency-svc-qp862 [313.409795ms] - Jun 12 21:57:12.679: INFO: Created: latency-svc-twnzc - Jun 12 21:57:12.681: INFO: Got endpoints: latency-svc-twnzc [303.556822ms] - Jun 12 21:57:12.691: INFO: Created: latency-svc-rvf5h - Jun 12 21:57:12.700: INFO: Got endpoints: latency-svc-rvf5h [307.724221ms] - Jun 12 21:57:12.707: INFO: Created: latency-svc-v486p - Jun 12 21:57:12.718: INFO: Got endpoints: latency-svc-v486p [271.390961ms] - Jun 12 21:57:12.726: INFO: Created: latency-svc-k7h9h - Jun 12 21:57:12.744: INFO: Got endpoints: latency-svc-k7h9h [299.074941ms] - Jun 12 21:57:12.748: INFO: Created: latency-svc-6js77 - Jun 12 21:57:12.759: INFO: Got endpoints: latency-svc-6js77 [271.983481ms] - Jun 12 21:57:12.777: INFO: Created: latency-svc-7bfwh - Jun 12 21:57:12.782: INFO: Got endpoints: latency-svc-7bfwh [293.851872ms] - Jun 12 21:57:12.785: INFO: Created: latency-svc-56r52 - Jun 12 21:57:12.796: INFO: Got endpoints: latency-svc-56r52 
[305.55353ms] - Jun 12 21:57:12.806: INFO: Created: latency-svc-wwqs9 - Jun 12 21:57:12.816: INFO: Got endpoints: latency-svc-wwqs9 [300.729282ms] - Jun 12 21:57:12.825: INFO: Created: latency-svc-x48x6 - Jun 12 21:57:12.832: INFO: Got endpoints: latency-svc-x48x6 [307.401453ms] - Jun 12 21:57:12.847: INFO: Created: latency-svc-j7dnl - Jun 12 21:57:12.858: INFO: Got endpoints: latency-svc-j7dnl [312.175512ms] - Jun 12 21:57:12.886: INFO: Created: latency-svc-j8xlf - Jun 12 21:57:12.886: INFO: Got endpoints: latency-svc-j8xlf [317.299333ms] - Jun 12 21:57:12.887: INFO: Created: latency-svc-pvzhd - Jun 12 21:57:12.900: INFO: Got endpoints: latency-svc-pvzhd [320.412703ms] - Jun 12 21:57:12.902: INFO: Created: latency-svc-f989z - Jun 12 21:57:12.913: INFO: Got endpoints: latency-svc-f989z [301.822988ms] - Jun 12 21:57:12.946: INFO: Created: latency-svc-ppvgf - Jun 12 21:57:12.960: INFO: Got endpoints: latency-svc-ppvgf [344.094649ms] - Jun 12 21:57:12.967: INFO: Created: latency-svc-6nzbj - Jun 12 21:57:12.995: INFO: Got endpoints: latency-svc-6nzbj [323.759593ms] - Jun 12 21:57:13.012: INFO: Created: latency-svc-nkslj - Jun 12 21:57:13.012: INFO: Created: latency-svc-ptxd2 - Jun 12 21:57:13.018: INFO: Got endpoints: latency-svc-ptxd2 [337.429388ms] - Jun 12 21:57:13.023: INFO: Got endpoints: latency-svc-nkslj [322.86918ms] - Jun 12 21:57:13.040: INFO: Created: latency-svc-hmlrc - Jun 12 21:57:13.051: INFO: Got endpoints: latency-svc-hmlrc [331.651485ms] - Jun 12 21:57:13.052: INFO: Created: latency-svc-cns86 - Jun 12 21:57:13.060: INFO: Got endpoints: latency-svc-cns86 [315.787584ms] - Jun 12 21:57:13.093: INFO: Created: latency-svc-k98td - Jun 12 21:57:13.104: INFO: Got endpoints: latency-svc-k98td [344.436632ms] - Jun 12 21:57:13.116: INFO: Created: latency-svc-7n6z8 - Jun 12 21:57:13.124: INFO: Got endpoints: latency-svc-7n6z8 [342.124515ms] - Jun 12 21:57:13.146: INFO: Created: latency-svc-x9npt - Jun 12 21:57:13.165: INFO: Created: latency-svc-brm2d - Jun 12 21:57:13.165: INFO: Got endpoints: latency-svc-brm2d [349.525177ms] - Jun 12 21:57:13.166: INFO: Got endpoints: latency-svc-x9npt [369.957265ms] - Jun 12 21:57:13.234: INFO: Created: latency-svc-6q2m6 - Jun 12 21:57:13.237: INFO: Created: latency-svc-h65z5 - Jun 12 21:57:13.237: INFO: Got endpoints: latency-svc-h65z5 [404.738924ms] - Jun 12 21:57:13.239: INFO: Created: latency-svc-59whm - Jun 12 21:57:13.239: INFO: Got endpoints: latency-svc-59whm [381.086594ms] - Jun 12 21:57:13.247: INFO: Got endpoints: latency-svc-6q2m6 [360.846691ms] - Jun 12 21:57:13.262: INFO: Created: latency-svc-nbcsb - Jun 12 21:57:13.267: INFO: Got endpoints: latency-svc-nbcsb [366.750281ms] - Jun 12 21:57:13.274: INFO: Created: latency-svc-6lckq - Jun 12 21:57:13.285: INFO: Got endpoints: latency-svc-6lckq [371.083158ms] - Jun 12 21:57:13.374: INFO: Created: latency-svc-7gptt - Jun 12 21:57:13.374: INFO: Got endpoints: latency-svc-7gptt [378.498748ms] - Jun 12 21:57:13.383: INFO: Created: latency-svc-827t8 - Jun 12 21:57:13.383: INFO: Got endpoints: latency-svc-827t8 [423.39384ms] - Jun 12 21:57:13.409: INFO: Created: latency-svc-q2j2m - Jun 12 21:57:13.426: INFO: Created: latency-svc-cjhq8 - Jun 12 21:57:13.428: INFO: Got endpoints: latency-svc-q2j2m [410.083776ms] - Jun 12 21:57:13.431: INFO: Got endpoints: latency-svc-cjhq8 [407.655592ms] - Jun 12 21:57:13.530: INFO: Created: latency-svc-dx6jm - Jun 12 21:57:13.531: INFO: Created: latency-svc-c552v - Jun 12 21:57:13.532: INFO: Got endpoints: latency-svc-dx6jm [480.877171ms] - Jun 12 21:57:13.532: INFO: 
Got endpoints: latency-svc-c552v [427.337207ms] - Jun 12 21:57:13.543: INFO: Created: latency-svc-dkswq - Jun 12 21:57:13.549: INFO: Got endpoints: latency-svc-dkswq [489.312788ms] - Jun 12 21:57:13.556: INFO: Created: latency-svc-8x2lk - Jun 12 21:57:13.567: INFO: Got endpoints: latency-svc-8x2lk [443.338056ms] - Jun 12 21:57:13.648: INFO: Created: latency-svc-vwg88 - Jun 12 21:57:13.648: INFO: Got endpoints: latency-svc-vwg88 [481.584937ms] - Jun 12 21:57:13.728: INFO: Created: latency-svc-rchqp - Jun 12 21:57:13.729: INFO: Got endpoints: latency-svc-rchqp [563.499618ms] - Jun 12 21:57:13.730: INFO: Created: latency-svc-tt4v7 - Jun 12 21:57:13.756: INFO: Got endpoints: latency-svc-tt4v7 [518.870665ms] - Jun 12 21:57:13.760: INFO: Created: latency-svc-sqg99 - Jun 12 21:57:13.774: INFO: Got endpoints: latency-svc-sqg99 [534.766667ms] - Jun 12 21:57:13.780: INFO: Created: latency-svc-lcnbp - Jun 12 21:57:13.807: INFO: Got endpoints: latency-svc-lcnbp [558.44543ms] - Jun 12 21:57:13.807: INFO: Created: latency-svc-8rwpb - Jun 12 21:57:13.822: INFO: Got endpoints: latency-svc-8rwpb [554.18169ms] - Jun 12 21:57:13.834: INFO: Created: latency-svc-7gvrj - Jun 12 21:57:13.840: INFO: Created: latency-svc-khg8j - Jun 12 21:57:13.841: INFO: Got endpoints: latency-svc-7gvrj [537.166755ms] - Jun 12 21:57:13.856: INFO: Got endpoints: latency-svc-khg8j [480.922971ms] - Jun 12 21:57:13.860: INFO: Created: latency-svc-vdx5h - Jun 12 21:57:13.876: INFO: Got endpoints: latency-svc-vdx5h [492.137927ms] - Jun 12 21:57:13.893: INFO: Created: latency-svc-qnwkm - Jun 12 21:57:13.924: INFO: Created: latency-svc-q68pk - Jun 12 21:57:13.924: INFO: Got endpoints: latency-svc-q68pk [493.258396ms] - Jun 12 21:57:13.925: INFO: Got endpoints: latency-svc-qnwkm [496.26894ms] - Jun 12 21:57:13.933: INFO: Created: latency-svc-bgzp7 - Jun 12 21:57:13.967: INFO: Got endpoints: latency-svc-bgzp7 [434.729896ms] - Jun 12 21:57:13.968: INFO: Created: latency-svc-fx5c5 - Jun 12 21:57:13.978: INFO: Created: latency-svc-74rqd - Jun 12 21:57:13.983: INFO: Got endpoints: latency-svc-74rqd [427.840892ms] - Jun 12 21:57:13.983: INFO: Got endpoints: latency-svc-fx5c5 [451.011385ms] - Jun 12 21:57:13.989: INFO: Created: latency-svc-sxxzs - Jun 12 21:57:14.017: INFO: Created: latency-svc-mvckz - Jun 12 21:57:14.021: INFO: Got endpoints: latency-svc-mvckz [373.193313ms] - Jun 12 21:57:14.022: INFO: Got endpoints: latency-svc-sxxzs [454.356835ms] - Jun 12 21:57:14.023: INFO: Created: latency-svc-mknrl - Jun 12 21:57:14.034: INFO: Got endpoints: latency-svc-mknrl [304.792183ms] - Jun 12 21:57:14.041: INFO: Created: latency-svc-6vn5h - Jun 12 21:57:14.052: INFO: Got endpoints: latency-svc-6vn5h [295.700335ms] - Jun 12 21:57:14.068: INFO: Created: latency-svc-mplgk - Jun 12 21:57:14.079: INFO: Got endpoints: latency-svc-mplgk [304.585974ms] - Jun 12 21:57:14.080: INFO: Created: latency-svc-822vg - Jun 12 21:57:14.097: INFO: Got endpoints: latency-svc-822vg [290.196366ms] - Jun 12 21:57:14.103: INFO: Created: latency-svc-d2p99 - Jun 12 21:57:14.109: INFO: Got endpoints: latency-svc-d2p99 [286.885055ms] - Jun 12 21:57:14.119: INFO: Created: latency-svc-tbpk7 - Jun 12 21:57:14.128: INFO: Got endpoints: latency-svc-tbpk7 [287.320723ms] - Jun 12 21:57:14.139: INFO: Created: latency-svc-rxk4j - Jun 12 21:57:14.152: INFO: Got endpoints: latency-svc-rxk4j [296.508855ms] - Jun 12 21:57:14.164: INFO: Created: latency-svc-srhxb - Jun 12 21:57:14.182: INFO: Created: latency-svc-flv26 - Jun 12 21:57:14.185: INFO: Got endpoints: latency-svc-srhxb 
[308.994448ms] - Jun 12 21:57:14.194: INFO: Got endpoints: latency-svc-flv26 [269.143354ms] - Jun 12 21:57:14.198: INFO: Created: latency-svc-fkhr6 - Jun 12 21:57:14.211: INFO: Got endpoints: latency-svc-fkhr6 [285.777766ms] - Jun 12 21:57:14.218: INFO: Created: latency-svc-b2679 - Jun 12 21:57:14.237: INFO: Got endpoints: latency-svc-b2679 [270.247117ms] - Jun 12 21:57:14.254: INFO: Created: latency-svc-vglwv - Jun 12 21:57:14.264: INFO: Got endpoints: latency-svc-vglwv [281.405171ms] - Jun 12 21:57:14.265: INFO: Created: latency-svc-d4pcz - Jun 12 21:57:14.277: INFO: Got endpoints: latency-svc-d4pcz [293.316261ms] - Jun 12 21:57:14.285: INFO: Created: latency-svc-zbg49 - Jun 12 21:57:14.293: INFO: Got endpoints: latency-svc-zbg49 [270.897874ms] - Jun 12 21:57:14.303: INFO: Created: latency-svc-85xst - Jun 12 21:57:14.315: INFO: Got endpoints: latency-svc-85xst [293.998969ms] - Jun 12 21:57:14.321: INFO: Created: latency-svc-dj4v5 - Jun 12 21:57:14.332: INFO: Got endpoints: latency-svc-dj4v5 [297.870152ms] - Jun 12 21:57:14.342: INFO: Created: latency-svc-jdxx8 - Jun 12 21:57:14.368: INFO: Got endpoints: latency-svc-jdxx8 [315.558981ms] - Jun 12 21:57:14.412: INFO: Created: latency-svc-qb5pj - Jun 12 21:57:14.412: INFO: Got endpoints: latency-svc-qb5pj [311.0456ms] - Jun 12 21:57:14.413: INFO: Created: latency-svc-9ncdh - Jun 12 21:57:14.413: INFO: Got endpoints: latency-svc-9ncdh [334.651029ms] - Jun 12 21:57:14.442: INFO: Created: latency-svc-vwzjb - Jun 12 21:57:14.442: INFO: Got endpoints: latency-svc-vwzjb [332.134655ms] - Jun 12 21:57:14.442: INFO: Created: latency-svc-mv5jh - Jun 12 21:57:14.452: INFO: Got endpoints: latency-svc-mv5jh [323.553306ms] - Jun 12 21:57:14.494: INFO: Created: latency-svc-5b2zq - Jun 12 21:57:14.495: INFO: Got endpoints: latency-svc-5b2zq [342.450161ms] - Jun 12 21:57:14.495: INFO: Created: latency-svc-96sgh - Jun 12 21:57:14.526: INFO: Got endpoints: latency-svc-96sgh [340.518367ms] - Jun 12 21:57:14.528: INFO: Created: latency-svc-jr6xp - Jun 12 21:57:14.528: INFO: Got endpoints: latency-svc-jr6xp [333.906881ms] - Jun 12 21:57:14.556: INFO: Created: latency-svc-v8zg2 - Jun 12 21:57:14.557: INFO: Got endpoints: latency-svc-v8zg2 [345.535353ms] - Jun 12 21:57:14.558: INFO: Created: latency-svc-4c9ln - Jun 12 21:57:14.562: INFO: Got endpoints: latency-svc-4c9ln [324.414718ms] - Jun 12 21:57:14.580: INFO: Created: latency-svc-r8grg - Jun 12 21:57:14.590: INFO: Got endpoints: latency-svc-r8grg [326.014601ms] - Jun 12 21:57:14.599: INFO: Created: latency-svc-thv7r - Jun 12 21:57:14.613: INFO: Got endpoints: latency-svc-thv7r [335.606077ms] - Jun 12 21:57:14.618: INFO: Created: latency-svc-d69z9 - Jun 12 21:57:14.632: INFO: Got endpoints: latency-svc-d69z9 [339.160202ms] - Jun 12 21:57:14.640: INFO: Created: latency-svc-clbt5 - Jun 12 21:57:14.652: INFO: Got endpoints: latency-svc-clbt5 [337.287826ms] - Jun 12 21:57:14.660: INFO: Created: latency-svc-jbmlb - Jun 12 21:57:14.674: INFO: Got endpoints: latency-svc-jbmlb [342.063887ms] - Jun 12 21:57:14.690: INFO: Created: latency-svc-2d2pf - Jun 12 21:57:14.691: INFO: Got endpoints: latency-svc-2d2pf [319.303541ms] - Jun 12 21:57:14.713: INFO: Created: latency-svc-mdjh6 - Jun 12 21:57:14.714: INFO: Got endpoints: latency-svc-mdjh6 [301.760276ms] - Jun 12 21:57:14.720: INFO: Created: latency-svc-bh7kr - Jun 12 21:57:14.737: INFO: Got endpoints: latency-svc-bh7kr [323.188708ms] - Jun 12 21:57:14.737: INFO: Created: latency-svc-9cfsm - Jun 12 21:57:14.749: INFO: Got endpoints: latency-svc-9cfsm [306.971117ms] - Jun 
12 21:57:14.756: INFO: Created: latency-svc-qb8rr - Jun 12 21:57:14.770: INFO: Got endpoints: latency-svc-qb8rr [318.264793ms] - Jun 12 21:57:14.775: INFO: Created: latency-svc-qlpfc - Jun 12 21:57:14.787: INFO: Got endpoints: latency-svc-qlpfc [292.758503ms] - Jun 12 21:57:14.796: INFO: Created: latency-svc-tpw7m - Jun 12 21:57:14.807: INFO: Got endpoints: latency-svc-tpw7m [280.640147ms] - Jun 12 21:57:14.815: INFO: Created: latency-svc-25xnh - Jun 12 21:57:14.827: INFO: Got endpoints: latency-svc-25xnh [298.664697ms] - Jun 12 21:57:14.833: INFO: Created: latency-svc-tqh2h - Jun 12 21:57:14.848: INFO: Got endpoints: latency-svc-tqh2h [291.067453ms] - Jun 12 21:57:14.859: INFO: Created: latency-svc-dxk2l - Jun 12 21:57:14.868: INFO: Got endpoints: latency-svc-dxk2l [305.687124ms] - Jun 12 21:57:14.875: INFO: Created: latency-svc-mxbdn - Jun 12 21:57:14.887: INFO: Got endpoints: latency-svc-mxbdn [295.378956ms] - Jun 12 21:57:14.897: INFO: Created: latency-svc-cc5br - Jun 12 21:57:14.904: INFO: Got endpoints: latency-svc-cc5br [290.913736ms] - Jun 12 21:57:14.914: INFO: Created: latency-svc-xhq7l - Jun 12 21:57:14.926: INFO: Got endpoints: latency-svc-xhq7l [293.953472ms] - Jun 12 21:57:14.931: INFO: Created: latency-svc-jc9b8 - Jun 12 21:57:14.942: INFO: Got endpoints: latency-svc-jc9b8 [289.049518ms] - Jun 12 21:57:14.948: INFO: Created: latency-svc-6c2xz - Jun 12 21:57:14.959: INFO: Got endpoints: latency-svc-6c2xz [284.360846ms] - Jun 12 21:57:14.970: INFO: Created: latency-svc-9j57v - Jun 12 21:57:14.978: INFO: Got endpoints: latency-svc-9j57v [286.778183ms] - Jun 12 21:57:14.986: INFO: Created: latency-svc-ksnt7 - Jun 12 21:57:14.995: INFO: Got endpoints: latency-svc-ksnt7 [280.446568ms] - Jun 12 21:57:15.010: INFO: Created: latency-svc-mfxhk - Jun 12 21:57:15.012: INFO: Got endpoints: latency-svc-mfxhk [275.657056ms] - Jun 12 21:57:15.022: INFO: Created: latency-svc-jgsxt - Jun 12 21:57:15.033: INFO: Got endpoints: latency-svc-jgsxt [284.019219ms] - Jun 12 21:57:15.042: INFO: Created: latency-svc-jqxp7 - Jun 12 21:57:15.052: INFO: Got endpoints: latency-svc-jqxp7 [281.610523ms] - Jun 12 21:57:15.059: INFO: Created: latency-svc-pf5fn - Jun 12 21:57:15.069: INFO: Got endpoints: latency-svc-pf5fn [281.431959ms] - Jun 12 21:57:15.084: INFO: Created: latency-svc-c4cdc - Jun 12 21:57:15.090: INFO: Got endpoints: latency-svc-c4cdc [283.095448ms] - Jun 12 21:57:15.101: INFO: Created: latency-svc-2v576 - Jun 12 21:57:15.111: INFO: Got endpoints: latency-svc-2v576 [283.72439ms] - Jun 12 21:57:15.120: INFO: Created: latency-svc-tr54k - Jun 12 21:57:15.132: INFO: Got endpoints: latency-svc-tr54k [284.002686ms] - Jun 12 21:57:15.138: INFO: Created: latency-svc-ml4sb - Jun 12 21:57:15.152: INFO: Got endpoints: latency-svc-ml4sb [284.222001ms] - Jun 12 21:57:15.163: INFO: Created: latency-svc-q44qr - Jun 12 21:57:15.169: INFO: Got endpoints: latency-svc-q44qr [282.017992ms] - Jun 12 21:57:15.178: INFO: Created: latency-svc-479pk - Jun 12 21:57:15.193: INFO: Got endpoints: latency-svc-479pk [289.018987ms] - Jun 12 21:57:15.198: INFO: Created: latency-svc-tb895 - Jun 12 21:57:15.213: INFO: Got endpoints: latency-svc-tb895 [287.320077ms] - Jun 12 21:57:15.221: INFO: Created: latency-svc-x6xq6 - Jun 12 21:57:15.234: INFO: Got endpoints: latency-svc-x6xq6 [292.177946ms] - Jun 12 21:57:15.244: INFO: Created: latency-svc-kzgnj - Jun 12 21:57:15.250: INFO: Got endpoints: latency-svc-kzgnj [291.761559ms] - Jun 12 21:57:15.288: INFO: Created: latency-svc-lpdds - Jun 12 21:57:15.290: INFO: Got endpoints: 
latency-svc-lpdds [311.491668ms] - Jun 12 21:57:15.289: INFO: Created: latency-svc-jwbsh - Jun 12 21:57:15.293: INFO: Got endpoints: latency-svc-jwbsh [298.030291ms] - Jun 12 21:57:15.303: INFO: Created: latency-svc-jxc9t - Jun 12 21:57:15.313: INFO: Got endpoints: latency-svc-jxc9t [300.429736ms] - Jun 12 21:57:15.333: INFO: Created: latency-svc-6d22j - Jun 12 21:57:15.337: INFO: Got endpoints: latency-svc-6d22j [304.122658ms] - Jun 12 21:57:15.338: INFO: Created: latency-svc-sbd98 - Jun 12 21:57:15.352: INFO: Got endpoints: latency-svc-sbd98 [299.613548ms] - Jun 12 21:57:15.366: INFO: Created: latency-svc-t76tp - Jun 12 21:57:15.366: INFO: Got endpoints: latency-svc-t76tp [296.880283ms] - Jun 12 21:57:15.375: INFO: Created: latency-svc-zpg8x - Jun 12 21:57:15.419: INFO: Got endpoints: latency-svc-zpg8x [329.292815ms] - Jun 12 21:57:15.419: INFO: Latencies: [47.23959ms 65.975818ms 69.016876ms 89.615977ms 103.158006ms 104.198239ms 104.722523ms 107.983533ms 113.508528ms 118.680644ms 132.599494ms 149.46293ms 163.518362ms 164.391951ms 166.015222ms 182.378756ms 201.41833ms 218.555903ms 220.08255ms 222.461774ms 224.460963ms 225.046093ms 228.30534ms 233.788065ms 242.861167ms 252.593403ms 254.150161ms 259.565549ms 261.096577ms 263.036197ms 264.941643ms 267.259748ms 268.782726ms 269.143354ms 270.247117ms 270.897874ms 271.390961ms 271.726832ms 271.983481ms 272.156373ms 273.99189ms 275.359729ms 275.657056ms 277.491416ms 278.427438ms 279.256257ms 280.052834ms 280.261511ms 280.446568ms 280.640147ms 281.094011ms 281.405171ms 281.431959ms 281.53342ms 281.610523ms 282.017992ms 282.645079ms 283.095448ms 283.72439ms 284.002686ms 284.019219ms 284.222001ms 284.360846ms 284.642186ms 284.768252ms 285.153293ms 285.777766ms 286.778183ms 286.885055ms 286.964642ms 287.311471ms 287.320077ms 287.320723ms 287.673602ms 289.018987ms 289.049518ms 290.196366ms 290.913736ms 290.921251ms 291.067453ms 291.761559ms 292.177946ms 292.758503ms 293.316261ms 293.851872ms 293.953472ms 293.998969ms 295.378956ms 295.700335ms 296.508855ms 296.880283ms 297.457341ms 297.870152ms 298.030291ms 298.664697ms 299.074941ms 299.613548ms 300.370125ms 300.429736ms 300.729282ms 301.760276ms 301.822988ms 303.390548ms 303.459222ms 303.556822ms 303.581238ms 304.122658ms 304.243316ms 304.585974ms 304.792183ms 305.55353ms 305.687124ms 306.971117ms 307.401453ms 307.724221ms 308.152991ms 308.994448ms 309.251032ms 310.227938ms 311.0456ms 311.491668ms 312.175512ms 313.105637ms 313.409795ms 315.558981ms 315.787584ms 316.87002ms 317.299333ms 318.264793ms 319.303541ms 320.412703ms 321.067731ms 321.622939ms 321.889612ms 322.86918ms 323.188708ms 323.553306ms 323.759593ms 324.414718ms 326.014601ms 326.927758ms 327.944242ms 328.169284ms 329.292815ms 331.651485ms 332.134655ms 332.660728ms 333.906881ms 334.651029ms 335.444125ms 335.606077ms 337.287826ms 337.429388ms 338.820254ms 339.160202ms 340.518367ms 342.063887ms 342.124515ms 342.450161ms 344.094649ms 344.436632ms 345.535353ms 349.525177ms 360.846691ms 365.692399ms 366.750281ms 369.957265ms 371.083158ms 373.193313ms 378.498748ms 381.086594ms 404.738924ms 407.214824ms 407.655592ms 408.557736ms 410.083776ms 423.39384ms 427.337207ms 427.840892ms 434.729896ms 443.338056ms 451.011385ms 454.356835ms 480.877171ms 480.922971ms 481.584937ms 481.892906ms 484.222499ms 489.312788ms 492.137927ms 493.258396ms 496.26894ms 518.870665ms 521.233822ms 525.445897ms 534.766667ms 537.166755ms 554.18169ms 558.44543ms 563.499618ms] - Jun 12 21:57:15.420: INFO: 50 %ile: 301.760276ms - Jun 12 21:57:15.420: INFO: 90 %ile: 443.338056ms 
- Jun 12 21:57:15.420: INFO: 99 %ile: 558.44543ms - Jun 12 21:57:15.420: INFO: Total sample count: 200 - [AfterEach] [sig-network] Service endpoints latency + [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 + STEP: Creating a pod to test emptydir 0777 on tmpfs 07/27/23 02:28:00.191 + Jul 27 02:28:00.222: INFO: Waiting up to 5m0s for pod "pod-60cdf801-fc31-489e-8359-a29b9c99a7d5" in namespace "emptydir-6463" to be "Succeeded or Failed" + Jul 27 02:28:00.230: INFO: Pod "pod-60cdf801-fc31-489e-8359-a29b9c99a7d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135841ms + Jul 27 02:28:02.240: INFO: Pod "pod-60cdf801-fc31-489e-8359-a29b9c99a7d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017891076s + Jul 27 02:28:04.240: INFO: Pod "pod-60cdf801-fc31-489e-8359-a29b9c99a7d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017854137s + STEP: Saw pod success 07/27/23 02:28:04.24 + Jul 27 02:28:04.240: INFO: Pod "pod-60cdf801-fc31-489e-8359-a29b9c99a7d5" satisfied condition "Succeeded or Failed" + Jul 27 02:28:04.248: INFO: Trying to get logs from node 10.245.128.19 pod pod-60cdf801-fc31-489e-8359-a29b9c99a7d5 container test-container: + STEP: delete the pod 07/27/23 02:28:04.292 + Jul 27 02:28:04.311: INFO: Waiting for pod pod-60cdf801-fc31-489e-8359-a29b9c99a7d5 to disappear + Jul 27 02:28:04.319: INFO: Pod pod-60cdf801-fc31-489e-8359-a29b9c99a7d5 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:15.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Service endpoints latency + Jul 27 02:28:04.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Service endpoints latency + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Service endpoints latency + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "svc-latency-193" for this suite. 06/12/23 21:57:15.442 + STEP: Destroying namespace "emptydir-6463" for this suite. 
07/27/23 02:28:04.332 << End Captured GinkgoWriter Output ------------------------------ -[sig-node] PodTemplates - should delete a collection of pod templates [Conformance] - test/e2e/common/node/podtemplates.go:122 -[BeforeEach] [sig-node] PodTemplates +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:15.467 -Jun 12 21:57:15.468: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename podtemplate 06/12/23 21:57:15.471 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:15.521 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:15.533 -[BeforeEach] [sig-node] PodTemplates +STEP: Creating a kubernetes client 07/27/23 02:28:04.353 +Jul 27 02:28:04.353: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 02:28:04.354 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:04.393 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:04.401 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should delete a collection of pod templates [Conformance] - test/e2e/common/node/podtemplates.go:122 -STEP: Create set of pod templates 06/12/23 21:57:15.551 -Jun 12 21:57:15.563: INFO: created test-podtemplate-1 -Jun 12 21:57:15.576: INFO: created test-podtemplate-2 -Jun 12 21:57:15.588: INFO: created test-podtemplate-3 -STEP: get a list of pod templates with a label in the current namespace 06/12/23 21:57:15.588 -STEP: delete collection of pod templates 06/12/23 21:57:15.597 -Jun 12 21:57:15.597: INFO: requesting DeleteCollection of pod templates -STEP: check that the list of pod templates matches the requested quantity 06/12/23 21:57:15.626 -Jun 12 21:57:15.626: INFO: requesting list of pod templates to confirm quantity -[AfterEach] [sig-node] PodTemplates +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 02:28:04.474 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:28:04.651 +STEP: Deploying the webhook pod 07/27/23 02:28:04.683 +STEP: Wait for the deployment to be ready 07/27/23 02:28:04.708 +Jul 27 02:28:04.725: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Jul 27 02:28:06.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 2, 28, 4, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 28, 4, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 28, 4, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 28, 4, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service 07/27/23 02:28:08.814 +STEP: Verifying the service has paired with the endpoint 07/27/23 02:28:08.885 +Jul 27 02:28:09.886: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 +STEP: Registering the crd webhook via the AdmissionRegistration API 07/27/23 02:28:09.896 +STEP: Creating a custom resource definition that should be denied by the webhook 07/27/23 02:28:09.941 +Jul 27 02:28:09.941: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:15.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] PodTemplates +Jul 27 02:28:10.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] PodTemplates +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] PodTemplates +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "podtemplate-1073" for this suite. 06/12/23 21:57:15.66 +STEP: Destroying namespace "webhook-3871" for this suite. 07/27/23 02:28:10.145 +STEP: Destroying namespace "webhook-3871-markers" for this suite. 
07/27/23 02:28:10.171 ------------------------------ -• [0.218 seconds] -[sig-node] PodTemplates -test/e2e/common/node/framework.go:23 - should delete a collection of pod templates [Conformance] - test/e2e/common/node/podtemplates.go:122 +• [SLOW TEST] [5.844 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] PodTemplates + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:15.467 - Jun 12 21:57:15.468: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename podtemplate 06/12/23 21:57:15.471 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:15.521 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:15.533 - [BeforeEach] [sig-node] PodTemplates + STEP: Creating a kubernetes client 07/27/23 02:28:04.353 + Jul 27 02:28:04.353: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 02:28:04.354 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:04.393 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:04.401 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should delete a collection of pod templates [Conformance] - test/e2e/common/node/podtemplates.go:122 - STEP: Create set of pod templates 06/12/23 21:57:15.551 - Jun 12 21:57:15.563: INFO: created test-podtemplate-1 - Jun 12 21:57:15.576: INFO: created test-podtemplate-2 - Jun 12 21:57:15.588: INFO: created test-podtemplate-3 - STEP: get a list of pod templates with a label in the current namespace 06/12/23 21:57:15.588 - STEP: delete collection of pod templates 06/12/23 21:57:15.597 - Jun 12 21:57:15.597: INFO: requesting DeleteCollection of pod templates - STEP: check that the list of pod templates matches the requested quantity 06/12/23 21:57:15.626 - Jun 12 21:57:15.626: INFO: requesting list of pod templates to confirm quantity - [AfterEach] [sig-node] PodTemplates + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 02:28:04.474 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:28:04.651 + STEP: Deploying the webhook pod 07/27/23 02:28:04.683 + STEP: Wait for the deployment to be ready 07/27/23 02:28:04.708 + Jul 27 02:28:04.725: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + Jul 27 02:28:06.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 2, 28, 4, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 28, 4, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 28, 4, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.July, 27, 2, 28, 4, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} + STEP: Deploying the webhook service 07/27/23 02:28:08.814 + STEP: Verifying the service has paired with the endpoint 07/27/23 02:28:08.885 + Jul 27 02:28:09.886: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 + STEP: Registering the crd webhook via the AdmissionRegistration API 07/27/23 02:28:09.896 + STEP: Creating a custom resource definition that should be denied by the webhook 07/27/23 02:28:09.941 + Jul 27 02:28:09.941: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:15.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] PodTemplates + Jul 27 02:28:10.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] PodTemplates + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] PodTemplates + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "podtemplate-1073" for this suite. 06/12/23 21:57:15.66 + STEP: Destroying namespace "webhook-3871" for this suite. 07/27/23 02:28:10.145 + STEP: Destroying namespace "webhook-3871-markers" for this suite. 
07/27/23 02:28:10.171 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] ReplicationController - should adopt matching pods on creation [Conformance] - test/e2e/apps/rc.go:92 -[BeforeEach] [sig-apps] ReplicationController +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:15.699 -Jun 12 21:57:15.699: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replication-controller 06/12/23 21:57:15.702 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:15.754 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:15.767 -[BeforeEach] [sig-apps] ReplicationController +STEP: Creating a kubernetes client 07/27/23 02:28:10.203 +Jul 27 02:28:10.203: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:28:10.204 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:10.248 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:10.257 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 -[It] should adopt matching pods on creation [Conformance] - test/e2e/apps/rc.go:92 -STEP: Given a Pod with a 'name' label pod-adoption is created 06/12/23 21:57:15.782 -Jun 12 21:57:15.800: INFO: Waiting up to 5m0s for pod "pod-adoption" in namespace "replication-controller-5664" to be "running and ready" -Jun 12 21:57:15.810: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 9.72396ms -Jun 12 21:57:15.810: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:57:17.820: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019526931s -Jun 12 21:57:17.820: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:57:19.820: INFO: Pod "pod-adoption": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.0201166s -Jun 12 21:57:19.820: INFO: The phase of Pod pod-adoption is Running (Ready = true) -Jun 12 21:57:19.820: INFO: Pod "pod-adoption" satisfied condition "running and ready" -STEP: When a replication controller with a matching selector is created 06/12/23 21:57:19.828 -STEP: Then the orphan pod is adopted 06/12/23 21:57:19.846 -[AfterEach] [sig-apps] ReplicationController +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 +STEP: Creating a pod to test emptydir 0644 on node default medium 07/27/23 02:28:10.266 +W0727 02:28:10.301221 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:28:10.301: INFO: Waiting up to 5m0s for pod "pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972" in namespace "emptydir-3131" to be "Succeeded or Failed" +Jul 27 02:28:10.321: INFO: Pod "pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972": Phase="Pending", Reason="", readiness=false. Elapsed: 19.716454ms +Jul 27 02:28:12.330: INFO: Pod "pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029334817s +Jul 27 02:28:14.330: INFO: Pod "pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028853869s +STEP: Saw pod success 07/27/23 02:28:14.33 +Jul 27 02:28:14.330: INFO: Pod "pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972" satisfied condition "Succeeded or Failed" +Jul 27 02:28:14.338: INFO: Trying to get logs from node 10.245.128.19 pod pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972 container test-container: +STEP: delete the pod 07/27/23 02:28:14.356 +Jul 27 02:28:14.382: INFO: Waiting for pod pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972 to disappear +Jul 27 02:28:14.390: INFO: Pod pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:20.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicationController +Jul 27 02:28:14.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "replication-controller-5664" for this suite. 06/12/23 21:57:20.886 +STEP: Destroying namespace "emptydir-3131" for this suite. 
07/27/23 02:28:14.412 ------------------------------ -• [SLOW TEST] [5.213 seconds] -[sig-apps] ReplicationController -test/e2e/apps/framework.go:23 - should adopt matching pods on creation [Conformance] - test/e2e/apps/rc.go:92 +• [4.231 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicationController + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:15.699 - Jun 12 21:57:15.699: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replication-controller 06/12/23 21:57:15.702 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:15.754 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:15.767 - [BeforeEach] [sig-apps] ReplicationController + STEP: Creating a kubernetes client 07/27/23 02:28:10.203 + Jul 27 02:28:10.203: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:28:10.204 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:10.248 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:10.257 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 - [It] should adopt matching pods on creation [Conformance] - test/e2e/apps/rc.go:92 - STEP: Given a Pod with a 'name' label pod-adoption is created 06/12/23 21:57:15.782 - Jun 12 21:57:15.800: INFO: Waiting up to 5m0s for pod "pod-adoption" in namespace "replication-controller-5664" to be "running and ready" - Jun 12 21:57:15.810: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 9.72396ms - Jun 12 21:57:15.810: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:57:17.820: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019526931s - Jun 12 21:57:17.820: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:57:19.820: INFO: Pod "pod-adoption": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.0201166s - Jun 12 21:57:19.820: INFO: The phase of Pod pod-adoption is Running (Ready = true) - Jun 12 21:57:19.820: INFO: Pod "pod-adoption" satisfied condition "running and ready" - STEP: When a replication controller with a matching selector is created 06/12/23 21:57:19.828 - STEP: Then the orphan pod is adopted 06/12/23 21:57:19.846 - [AfterEach] [sig-apps] ReplicationController + [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 + STEP: Creating a pod to test emptydir 0644 on node default medium 07/27/23 02:28:10.266 + W0727 02:28:10.301221 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:28:10.301: INFO: Waiting up to 5m0s for pod "pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972" in namespace "emptydir-3131" to be "Succeeded or Failed" + Jul 27 02:28:10.321: INFO: Pod "pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972": Phase="Pending", Reason="", readiness=false. Elapsed: 19.716454ms + Jul 27 02:28:12.330: INFO: Pod "pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029334817s + Jul 27 02:28:14.330: INFO: Pod "pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028853869s + STEP: Saw pod success 07/27/23 02:28:14.33 + Jul 27 02:28:14.330: INFO: Pod "pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972" satisfied condition "Succeeded or Failed" + Jul 27 02:28:14.338: INFO: Trying to get logs from node 10.245.128.19 pod pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972 container test-container: + STEP: delete the pod 07/27/23 02:28:14.356 + Jul 27 02:28:14.382: INFO: Waiting for pod pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972 to disappear + Jul 27 02:28:14.390: INFO: Pod pod-a82de4a6-6eb6-416d-97a4-3649b5c1f972 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:20.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicationController + Jul 27 02:28:14.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "replication-controller-5664" for this suite. 06/12/23 21:57:20.886 + STEP: Destroying namespace "emptydir-3131" for this suite. 
07/27/23 02:28:14.412 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should mutate pod and apply defaults after mutation [Conformance] - test/e2e/apimachinery/webhook.go:264 + should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:20.917 -Jun 12 21:57:20.918: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 21:57:20.935 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:21.012 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:21.055 +STEP: Creating a kubernetes client 07/27/23 02:28:14.436 +Jul 27 02:28:14.436: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 02:28:14.436 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:14.486 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:14.494 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 21:57:21.18 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:57:22.266 -STEP: Deploying the webhook pod 06/12/23 21:57:22.303 -STEP: Wait for the deployment to be ready 06/12/23 21:57:22.345 -Jun 12 21:57:22.368: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 21:57:24.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 57, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 57, 22, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 57, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 57, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 21:57:26.524 -STEP: Verifying the service has paired with the endpoint 06/12/23 21:57:26.559 -Jun 12 21:57:27.562: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should mutate pod and apply defaults after mutation [Conformance] - test/e2e/apimachinery/webhook.go:264 -STEP: Registering the mutating pod webhook via the AdmissionRegistration API 06/12/23 21:57:27.607 -STEP: create a pod that should be updated by the webhook 06/12/23 21:57:27.826 +STEP: Setting up server cert 07/27/23 02:28:14.604 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:28:14.981 +STEP: Deploying 
the webhook pod 07/27/23 02:28:15.015 +STEP: Wait for the deployment to be ready 07/27/23 02:28:15.041 +Jul 27 02:28:15.057: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Jul 27 02:28:17.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 2, 28, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 28, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 28, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 28, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service 07/27/23 02:28:19.101 +STEP: Verifying the service has paired with the endpoint 07/27/23 02:28:19.198 +Jul 27 02:28:20.199: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 +STEP: Registering the webhook via the AdmissionRegistration API 07/27/23 02:28:20.209 +STEP: create a pod 07/27/23 02:28:20.254 +Jul 27 02:28:20.277: INFO: Waiting up to 5m0s for pod "to-be-attached-pod" in namespace "webhook-3332" to be "running" +Jul 27 02:28:20.285: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07921ms +Jul 27 02:28:22.297: INFO: Pod "to-be-attached-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.019552763s +Jul 27 02:28:22.297: INFO: Pod "to-be-attached-pod" satisfied condition "running" +STEP: 'kubectl attach' the pod, should be denied by the webhook 07/27/23 02:28:22.297 +Jul 27 02:28:22.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=webhook-3332 attach --namespace=webhook-3332 to-be-attached-pod -i -c=container1' +Jul 27 02:28:22.497: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:28.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:28:22.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:105 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] @@ -28085,43 +25726,50 @@ Jun 12 21:57:28.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-2913" for this suite. 06/12/23 21:57:28.503 -STEP: Destroying namespace "webhook-2913-markers" for this suite. 06/12/23 21:57:28.583 +STEP: Destroying namespace "webhook-3332" for this suite. 07/27/23 02:28:22.696 +STEP: Destroying namespace "webhook-3332-markers" for this suite. 
07/27/23 02:28:22.719 ------------------------------ -• [SLOW TEST] [7.711 seconds] +• [SLOW TEST] [8.313 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/framework.go:23 - should mutate pod and apply defaults after mutation [Conformance] - test/e2e/apimachinery/webhook.go:264 + should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:20.917 - Jun 12 21:57:20.918: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 21:57:20.935 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:21.012 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:21.055 + STEP: Creating a kubernetes client 07/27/23 02:28:14.436 + Jul 27 02:28:14.436: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 02:28:14.436 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:14.486 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:14.494 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 21:57:21.18 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 21:57:22.266 - STEP: Deploying the webhook pod 06/12/23 21:57:22.303 - STEP: Wait for the deployment to be ready 06/12/23 21:57:22.345 - Jun 12 21:57:22.368: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 21:57:24.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 21, 57, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 57, 22, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 21, 57, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 21, 57, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 21:57:26.524 - STEP: Verifying the service has paired with the endpoint 06/12/23 21:57:26.559 - Jun 12 21:57:27.562: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should mutate pod and apply defaults after mutation [Conformance] - test/e2e/apimachinery/webhook.go:264 - STEP: Registering the mutating pod webhook via the AdmissionRegistration API 06/12/23 21:57:27.607 - STEP: create a pod that should be updated by the webhook 06/12/23 21:57:27.826 + STEP: Setting up server cert 07/27/23 02:28:14.604 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:28:14.981 + STEP: 
Deploying the webhook pod 07/27/23 02:28:15.015 + STEP: Wait for the deployment to be ready 07/27/23 02:28:15.041 + Jul 27 02:28:15.057: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + Jul 27 02:28:17.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 2, 28, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 28, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 28, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 28, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} + STEP: Deploying the webhook service 07/27/23 02:28:19.101 + STEP: Verifying the service has paired with the endpoint 07/27/23 02:28:19.198 + Jul 27 02:28:20.199: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 + STEP: Registering the webhook via the AdmissionRegistration API 07/27/23 02:28:20.209 + STEP: create a pod 07/27/23 02:28:20.254 + Jul 27 02:28:20.277: INFO: Waiting up to 5m0s for pod "to-be-attached-pod" in namespace "webhook-3332" to be "running" + Jul 27 02:28:20.285: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07921ms + Jul 27 02:28:22.297: INFO: Pod "to-be-attached-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.019552763s + Jul 27 02:28:22.297: INFO: Pod "to-be-attached-pod" satisfied condition "running" + STEP: 'kubectl attach' the pod, should be denied by the webhook 07/27/23 02:28:22.297 + Jul 27 02:28:22.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=webhook-3332 attach --namespace=webhook-3332 to-be-attached-pod -i -c=container1' + Jul 27 02:28:22.497: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:28.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:28:22.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:105 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] @@ -28130,581 +25778,518 @@ test/e2e/apimachinery/framework.go:23 dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-2913" for this suite. 06/12/23 21:57:28.503 - STEP: Destroying namespace "webhook-2913-markers" for this suite. 06/12/23 21:57:28.583 + STEP: Destroying namespace "webhook-3332" for this suite. 07/27/23 02:28:22.696 + STEP: Destroying namespace "webhook-3332-markers" for this suite. 
07/27/23 02:28:22.719 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-api-machinery] Namespaces [Serial] - should patch a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:268 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. [Conformance] + test/e2e/apimachinery/resource_quota.go:326 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:28.702 -Jun 12 21:57:28.702: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename namespaces 06/12/23 21:57:28.705 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:29.034 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:29.254 -[BeforeEach] [sig-api-machinery] Namespaces [Serial] +STEP: Creating a kubernetes client 07/27/23 02:28:22.749 +Jul 27 02:28:22.749: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 02:28:22.75 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:22.792 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:22.802 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[It] should patch a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:268 -STEP: creating a Namespace 06/12/23 21:57:29.497 -STEP: patching the Namespace 06/12/23 21:57:29.647 -STEP: get the Namespace and ensuring it has the label 06/12/23 21:57:29.725 -[AfterEach] [sig-api-machinery] Namespaces [Serial] +[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + test/e2e/apimachinery/resource_quota.go:326 +STEP: Counting existing ResourceQuota 07/27/23 02:28:39.828 +STEP: Creating a ResourceQuota 07/27/23 02:28:44.837 +STEP: Ensuring resource quota status is calculated 07/27/23 02:28:44.85 +STEP: Creating a ConfigMap 07/27/23 02:28:46.861 +STEP: Ensuring resource quota status captures configMap creation 07/27/23 02:28:46.897 +STEP: Deleting a ConfigMap 07/27/23 02:28:48.925 +STEP: Ensuring resource quota status released usage 07/27/23 02:28:48.948 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:29.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +Jul 27 02:28:50.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "namespaces-8038" for this suite. 06/12/23 21:57:29.863 -STEP: Destroying namespace "nspatchtest-8cd03e0d-21b5-4aa0-b17a-2bb1688ecb20-2960" for this suite. 06/12/23 21:57:29.892 +STEP: Destroying namespace "resourcequota-6763" for this suite. 
07/27/23 02:28:50.969 ------------------------------ -• [1.217 seconds] -[sig-api-machinery] Namespaces [Serial] +• [SLOW TEST] [28.244 seconds] +[sig-api-machinery] ResourceQuota test/e2e/apimachinery/framework.go:23 - should patch a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:268 + should create a ResourceQuota and capture the life of a configMap. [Conformance] + test/e2e/apimachinery/resource_quota.go:326 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:28.702 - Jun 12 21:57:28.702: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename namespaces 06/12/23 21:57:28.705 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:29.034 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:29.254 - [BeforeEach] [sig-api-machinery] Namespaces [Serial] + STEP: Creating a kubernetes client 07/27/23 02:28:22.749 + Jul 27 02:28:22.749: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 02:28:22.75 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:22.792 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:22.802 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [It] should patch a Namespace [Conformance] - test/e2e/apimachinery/namespace.go:268 - STEP: creating a Namespace 06/12/23 21:57:29.497 - STEP: patching the Namespace 06/12/23 21:57:29.647 - STEP: get the Namespace and ensuring it has the label 06/12/23 21:57:29.725 - [AfterEach] [sig-api-machinery] Namespaces [Serial] + [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + test/e2e/apimachinery/resource_quota.go:326 + STEP: Counting existing ResourceQuota 07/27/23 02:28:39.828 + STEP: Creating a ResourceQuota 07/27/23 02:28:44.837 + STEP: Ensuring resource quota status is calculated 07/27/23 02:28:44.85 + STEP: Creating a ConfigMap 07/27/23 02:28:46.861 + STEP: Ensuring resource quota status captures configMap creation 07/27/23 02:28:46.897 + STEP: Deleting a ConfigMap 07/27/23 02:28:48.925 + STEP: Ensuring resource quota status released usage 07/27/23 02:28:48.948 + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:29.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + Jul 27 02:28:50.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "namespaces-8038" for this suite. 06/12/23 21:57:29.863 - STEP: Destroying namespace "nspatchtest-8cd03e0d-21b5-4aa0-b17a-2bb1688ecb20-2960" for this suite. 06/12/23 21:57:29.892 + STEP: Destroying namespace "resourcequota-6763" for this suite. 
07/27/23 02:28:50.969 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Downward API volume - should update labels on modification [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:130 -[BeforeEach] [sig-storage] Downward API volume +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:30.022 -Jun 12 21:57:30.022: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 21:57:30.05 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:30.221 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:30.342 -[BeforeEach] [sig-storage] Downward API volume +STEP: Creating a kubernetes client 07/27/23 02:28:50.993 +Jul 27 02:28:50.993: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:28:50.993 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:51.035 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:51.044 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 -[It] should update labels on modification [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:130 -STEP: Creating the pod 06/12/23 21:57:30.468 -Jun 12 21:57:30.538: INFO: Waiting up to 5m0s for pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7" in namespace "downward-api-8731" to be "running and ready" -Jun 12 21:57:30.704: INFO: Pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7": Phase="Pending", Reason="", readiness=false. Elapsed: 166.399159ms -Jun 12 21:57:30.704: INFO: The phase of Pod labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:57:32.730: INFO: Pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192272188s -Jun 12 21:57:32.730: INFO: The phase of Pod labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:57:34.720: INFO: Pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.181788873s -Jun 12 21:57:34.721: INFO: The phase of Pod labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7 is Running (Ready = true) -Jun 12 21:57:34.721: INFO: Pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7" satisfied condition "running and ready" -Jun 12 21:57:35.318: INFO: Successfully updated pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7" -[AfterEach] [sig-storage] Downward API volume +[It] works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation 07/27/23 02:28:51.054 +Jul 27 02:28:51.055: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation 07/27/23 02:29:11.212 +Jul 27 02:29:11.213: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:29:19.197: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:37.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Downward API volume +Jul 27 02:29:43.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-8731" for this suite. 06/12/23 21:57:37.407 +STEP: Destroying namespace "crd-publish-openapi-5864" for this suite. 07/27/23 02:29:43.796 ------------------------------ -• [SLOW TEST] [7.437 seconds] -[sig-storage] Downward API volume -test/e2e/common/storage/framework.go:23 - should update labels on modification [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:130 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Downward API volume - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:30.022 - Jun 12 21:57:30.022: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 21:57:30.05 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:30.221 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:30.342 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 - [It] should update labels on modification [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:130 - STEP: Creating the pod 06/12/23 21:57:30.468 - Jun 12 21:57:30.538: INFO: Waiting up to 5m0s for pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7" in namespace "downward-api-8731" to be "running and ready" - Jun 12 21:57:30.704: INFO: Pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 166.399159ms - Jun 12 21:57:30.704: INFO: The phase of Pod labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:57:32.730: INFO: Pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192272188s - Jun 12 21:57:32.730: INFO: The phase of Pod labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:57:34.720: INFO: Pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.181788873s - Jun 12 21:57:34.721: INFO: The phase of Pod labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7 is Running (Ready = true) - Jun 12 21:57:34.721: INFO: Pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7" satisfied condition "running and ready" - Jun 12 21:57:35.318: INFO: Successfully updated pod "labelsupdate02454ad8-3aff-4a62-87f4-71898cc261d7" - [AfterEach] [sig-storage] Downward API volume - test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:37.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Downward API volume - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Downward API volume - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Downward API volume - tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-8731" for this suite. 06/12/23 21:57:37.407 - << End Captured GinkgoWriter Output ------------------------------- -SSSSSSSSSSSSS ------------------------------- -[sig-node] Pods - should contain environment variables for services [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:444 -[BeforeEach] [sig-node] Pods - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:37.463 -Jun 12 21:57:37.463: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 21:57:37.465 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:37.547 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:37.563 -[BeforeEach] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 -[It] should contain environment variables for services [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:444 -Jun 12 21:57:37.602: INFO: Waiting up to 5m0s for pod "server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a" in namespace "pods-526" to be "running and ready" -Jun 12 21:57:37.616: INFO: Pod "server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.591486ms -Jun 12 21:57:37.616: INFO: The phase of Pod server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:57:39.627: INFO: Pod "server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025181435s -Jun 12 21:57:39.627: INFO: The phase of Pod server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:57:41.629: INFO: Pod "server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.026844658s -Jun 12 21:57:41.629: INFO: The phase of Pod server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a is Running (Ready = true) -Jun 12 21:57:41.629: INFO: Pod "server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a" satisfied condition "running and ready" -Jun 12 21:57:41.696: INFO: Waiting up to 5m0s for pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe" in namespace "pods-526" to be "Succeeded or Failed" -Jun 12 21:57:41.704: INFO: Pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254932ms -Jun 12 21:57:43.749: INFO: Pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053113411s -Jun 12 21:57:45.714: INFO: Pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018244374s -Jun 12 21:57:47.715: INFO: Pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01878093s -STEP: Saw pod success 06/12/23 21:57:47.715 -Jun 12 21:57:47.715: INFO: Pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe" satisfied condition "Succeeded or Failed" -Jun 12 21:57:47.724: INFO: Trying to get logs from node 10.138.75.70 pod client-envvars-be0b3100-9323-4449-b804-c85525246bbe container env3cont: -STEP: delete the pod 06/12/23 21:57:47.74 -Jun 12 21:57:47.763: INFO: Waiting for pod client-envvars-be0b3100-9323-4449-b804-c85525246bbe to disappear -Jun 12 21:57:47.772: INFO: Pod client-envvars-be0b3100-9323-4449-b804-c85525246bbe no longer exists -[AfterEach] [sig-node] Pods - test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:47.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Pods - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Pods - tear down framework | framework.go:193 -STEP: Destroying namespace "pods-526" for this suite. 
06/12/23 21:57:47.788 ------------------------------- -• [SLOW TEST] [10.349 seconds] -[sig-node] Pods -test/e2e/common/node/framework.go:23 - should contain environment variables for services [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:444 +• [SLOW TEST] [52.817 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Pods + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:37.463 - Jun 12 21:57:37.463: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 21:57:37.465 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:37.547 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:37.563 - [BeforeEach] [sig-node] Pods + STEP: Creating a kubernetes client 07/27/23 02:28:50.993 + Jul 27 02:28:50.993: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:28:50.993 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:28:51.035 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:28:51.044 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 - [It] should contain environment variables for services [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:444 - Jun 12 21:57:37.602: INFO: Waiting up to 5m0s for pod "server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a" in namespace "pods-526" to be "running and ready" - Jun 12 21:57:37.616: INFO: Pod "server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.591486ms - Jun 12 21:57:37.616: INFO: The phase of Pod server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:57:39.627: INFO: Pod "server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025181435s - Jun 12 21:57:39.627: INFO: The phase of Pod server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:57:41.629: INFO: Pod "server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a": Phase="Running", Reason="", readiness=true. Elapsed: 4.026844658s - Jun 12 21:57:41.629: INFO: The phase of Pod server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a is Running (Ready = true) - Jun 12 21:57:41.629: INFO: Pod "server-envvars-56872913-7ea4-4bf8-85a7-9e9cd84bcd4a" satisfied condition "running and ready" - Jun 12 21:57:41.696: INFO: Waiting up to 5m0s for pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe" in namespace "pods-526" to be "Succeeded or Failed" - Jun 12 21:57:41.704: INFO: Pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254932ms - Jun 12 21:57:43.749: INFO: Pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.053113411s - Jun 12 21:57:45.714: INFO: Pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018244374s - Jun 12 21:57:47.715: INFO: Pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01878093s - STEP: Saw pod success 06/12/23 21:57:47.715 - Jun 12 21:57:47.715: INFO: Pod "client-envvars-be0b3100-9323-4449-b804-c85525246bbe" satisfied condition "Succeeded or Failed" - Jun 12 21:57:47.724: INFO: Trying to get logs from node 10.138.75.70 pod client-envvars-be0b3100-9323-4449-b804-c85525246bbe container env3cont: - STEP: delete the pod 06/12/23 21:57:47.74 - Jun 12 21:57:47.763: INFO: Waiting for pod client-envvars-be0b3100-9323-4449-b804-c85525246bbe to disappear - Jun 12 21:57:47.772: INFO: Pod client-envvars-be0b3100-9323-4449-b804-c85525246bbe no longer exists - [AfterEach] [sig-node] Pods + [It] works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 + STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation 07/27/23 02:28:51.054 + Jul 27 02:28:51.055: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation 07/27/23 02:29:11.212 + Jul 27 02:29:11.213: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:29:19.197: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:47.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Pods + Jul 27 02:29:43.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "pods-526" for this suite. 06/12/23 21:57:47.788 + STEP: Destroying namespace "crd-publish-openapi-5864" for this suite. 
07/27/23 02:29:43.796 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SSSSSS ------------------------------ -[sig-storage] ConfigMap +[sig-storage] Secrets should be immutable if `immutable` field is set [Conformance] - test/e2e/common/storage/configmap_volume.go:504 -[BeforeEach] [sig-storage] ConfigMap + test/e2e/common/storage/secrets_volume.go:386 +[BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:47.813 -Jun 12 21:57:47.813: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 21:57:47.817 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:47.87 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:47.884 -[BeforeEach] [sig-storage] ConfigMap +STEP: Creating a kubernetes client 07/27/23 02:29:43.81 +Jul 27 02:29:43.810: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 02:29:43.811 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:29:43.838 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:29:43.845 +[BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 [It] should be immutable if `immutable` field is set [Conformance] - test/e2e/common/storage/configmap_volume.go:504 -[AfterEach] [sig-storage] ConfigMap + test/e2e/common/storage/secrets_volume.go:386 +[AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:48.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] ConfigMap +Jul 27 02:29:44.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-2720" for this suite. 06/12/23 21:57:48.078 +STEP: Destroying namespace "secrets-8312" for this suite. 
07/27/23 02:29:44.058 ------------------------------ -• [0.290 seconds] -[sig-storage] ConfigMap +• [0.261 seconds] +[sig-storage] Secrets test/e2e/common/storage/framework.go:23 should be immutable if `immutable` field is set [Conformance] - test/e2e/common/storage/configmap_volume.go:504 + test/e2e/common/storage/secrets_volume.go:386 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] ConfigMap + [BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:47.813 - Jun 12 21:57:47.813: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 21:57:47.817 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:47.87 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:47.884 - [BeforeEach] [sig-storage] ConfigMap + STEP: Creating a kubernetes client 07/27/23 02:29:43.81 + Jul 27 02:29:43.810: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 02:29:43.811 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:29:43.838 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:29:43.845 + [BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 [It] should be immutable if `immutable` field is set [Conformance] - test/e2e/common/storage/configmap_volume.go:504 - [AfterEach] [sig-storage] ConfigMap + test/e2e/common/storage/secrets_volume.go:386 + [AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:48.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] ConfigMap + Jul 27 02:29:44.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-2720" for this suite. 06/12/23 21:57:48.078 + STEP: Destroying namespace "secrets-8312" for this suite. 
07/27/23 02:29:44.058 << End Captured GinkgoWriter Output ------------------------------ -[sig-node] Containers - should be able to override the image's default command and arguments [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:87 -[BeforeEach] [sig-node] Containers +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 +[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:48.103 -Jun 12 21:57:48.103: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename containers 06/12/23 21:57:48.106 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:48.159 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:48.17 -[BeforeEach] [sig-node] Containers +STEP: Creating a kubernetes client 07/27/23 02:29:44.074 +Jul 27 02:29:44.074: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename svcaccounts 07/27/23 02:29:44.075 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:29:44.101 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:29:44.11 +[BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 -[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:87 -STEP: Creating a pod to test override all 06/12/23 21:57:48.18 -Jun 12 21:57:48.212: INFO: Waiting up to 5m0s for pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9" in namespace "containers-4411" to be "Succeeded or Failed" -Jun 12 21:57:48.220: INFO: Pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.596217ms -Jun 12 21:57:50.230: INFO: Pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018213516s -Jun 12 21:57:52.230: INFO: Pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018532607s -Jun 12 21:57:54.230: INFO: Pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018241921s -STEP: Saw pod success 06/12/23 21:57:54.23 -Jun 12 21:57:54.230: INFO: Pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9" satisfied condition "Succeeded or Failed" -Jun 12 21:57:54.238: INFO: Trying to get logs from node 10.138.75.70 pod client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9 container agnhost-container: -STEP: delete the pod 06/12/23 21:57:54.279 -Jun 12 21:57:54.302: INFO: Waiting for pod client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9 to disappear -Jun 12 21:57:54.310: INFO: Pod client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9 no longer exists -[AfterEach] [sig-node] Containers +[It] should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 +Jul 27 02:29:44.160: INFO: created pod pod-service-account-defaultsa +Jul 27 02:29:44.160: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Jul 27 02:29:44.177: INFO: created pod pod-service-account-mountsa +Jul 27 02:29:44.177: INFO: pod pod-service-account-mountsa service account token volume mount: true +Jul 27 02:29:44.195: INFO: created pod pod-service-account-nomountsa +Jul 27 02:29:44.195: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Jul 27 02:29:44.224: INFO: created pod pod-service-account-defaultsa-mountspec +Jul 27 02:29:44.224: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Jul 27 02:29:44.236: INFO: created pod pod-service-account-mountsa-mountspec +Jul 27 02:29:44.237: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Jul 27 02:29:44.252: INFO: created pod pod-service-account-nomountsa-mountspec +Jul 27 02:29:44.252: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Jul 27 02:29:44.265: INFO: created pod pod-service-account-defaultsa-nomountspec +Jul 27 02:29:44.265: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Jul 27 02:29:44.279: INFO: created pod pod-service-account-mountsa-nomountspec +Jul 27 02:29:44.279: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Jul 27 02:29:44.295: INFO: created pod pod-service-account-nomountsa-nomountspec +Jul 27 02:29:44.295: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:54.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Containers +Jul 27 02:29:44.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Containers +[DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Containers +[DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 -STEP: Destroying namespace "containers-4411" for this suite. 06/12/23 21:57:54.326 +STEP: Destroying namespace "svcaccounts-2726" for this suite. 
07/27/23 02:29:44.309 ------------------------------ -• [SLOW TEST] [6.246 seconds] -[sig-node] Containers -test/e2e/common/node/framework.go:23 - should be able to override the image's default command and arguments [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:87 +• [0.249 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Containers + [BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:48.103 - Jun 12 21:57:48.103: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename containers 06/12/23 21:57:48.106 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:48.159 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:48.17 - [BeforeEach] [sig-node] Containers + STEP: Creating a kubernetes client 07/27/23 02:29:44.074 + Jul 27 02:29:44.074: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename svcaccounts 07/27/23 02:29:44.075 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:29:44.101 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:29:44.11 + [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 - [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:87 - STEP: Creating a pod to test override all 06/12/23 21:57:48.18 - Jun 12 21:57:48.212: INFO: Waiting up to 5m0s for pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9" in namespace "containers-4411" to be "Succeeded or Failed" - Jun 12 21:57:48.220: INFO: Pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.596217ms - Jun 12 21:57:50.230: INFO: Pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018213516s - Jun 12 21:57:52.230: INFO: Pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018532607s - Jun 12 21:57:54.230: INFO: Pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018241921s - STEP: Saw pod success 06/12/23 21:57:54.23 - Jun 12 21:57:54.230: INFO: Pod "client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9" satisfied condition "Succeeded or Failed" - Jun 12 21:57:54.238: INFO: Trying to get logs from node 10.138.75.70 pod client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9 container agnhost-container: - STEP: delete the pod 06/12/23 21:57:54.279 - Jun 12 21:57:54.302: INFO: Waiting for pod client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9 to disappear - Jun 12 21:57:54.310: INFO: Pod client-containers-e4b3a4a2-3a6b-4270-9f07-65c1ef90b4d9 no longer exists - [AfterEach] [sig-node] Containers + [It] should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 + Jul 27 02:29:44.160: INFO: created pod pod-service-account-defaultsa + Jul 27 02:29:44.160: INFO: pod pod-service-account-defaultsa service account token volume mount: true + Jul 27 02:29:44.177: INFO: created pod pod-service-account-mountsa + Jul 27 02:29:44.177: INFO: pod pod-service-account-mountsa service account token volume mount: true + Jul 27 02:29:44.195: INFO: created pod pod-service-account-nomountsa + Jul 27 02:29:44.195: INFO: pod pod-service-account-nomountsa service account token volume mount: false + Jul 27 02:29:44.224: INFO: created pod pod-service-account-defaultsa-mountspec + Jul 27 02:29:44.224: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true + Jul 27 02:29:44.236: INFO: created pod pod-service-account-mountsa-mountspec + Jul 27 02:29:44.237: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true + Jul 27 02:29:44.252: INFO: created pod pod-service-account-nomountsa-mountspec + Jul 27 02:29:44.252: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true + Jul 27 02:29:44.265: INFO: created pod pod-service-account-defaultsa-nomountspec + Jul 27 02:29:44.265: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false + Jul 27 02:29:44.279: INFO: created pod pod-service-account-mountsa-nomountspec + Jul 27 02:29:44.279: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false + Jul 27 02:29:44.295: INFO: created pod pod-service-account-nomountsa-nomountspec + Jul 27 02:29:44.295: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false + [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:54.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Containers + Jul 27 02:29:44.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Containers + [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Containers + [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 - STEP: Destroying namespace "containers-4411" for this suite. 06/12/23 21:57:54.326 + STEP: Destroying namespace "svcaccounts-2726" for this suite. 
07/27/23 02:29:44.309 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSS +S ------------------------------ -[sig-api-machinery] Garbage collector - should not be blocked by dependency circle [Conformance] - test/e2e/apimachinery/garbage_collector.go:849 -[BeforeEach] [sig-api-machinery] Garbage collector +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 +[BeforeEach] [sig-node] Downward API set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:54.356 -Jun 12 21:57:54.356: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename gc 06/12/23 21:57:54.359 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:54.436 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:54.451 -[BeforeEach] [sig-api-machinery] Garbage collector +STEP: Creating a kubernetes client 07/27/23 02:29:44.323 +Jul 27 02:29:44.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:29:44.324 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:29:44.356 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:29:44.365 +[BeforeEach] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:31 -[It] should not be blocked by dependency circle [Conformance] - test/e2e/apimachinery/garbage_collector.go:849 -Jun 12 21:57:54.544: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9d348764-a0fc-4c89-aad8-cde88401d43a", Controller:(*bool)(0xc009f41f32), BlockOwnerDeletion:(*bool)(0xc009f41f33)}} -Jun 12 21:57:54.561: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5ca87ce2-efc5-4642-9c2c-6677053ac7dd", Controller:(*bool)(0xc003571802), BlockOwnerDeletion:(*bool)(0xc003571803)}} -Jun 12 21:57:54.580: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6ca58aba-347b-4188-a909-55622f3dfc9b", Controller:(*bool)(0xc007ed8222), BlockOwnerDeletion:(*bool)(0xc007ed8223)}} -[AfterEach] [sig-api-machinery] Garbage collector +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 +STEP: Creating a pod to test downward api env vars 07/27/23 02:29:44.375 +W0727 02:29:44.397817 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "dapi-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "dapi-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "dapi-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "dapi-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:29:44.397: INFO: Waiting up to 5m0s for pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56" in namespace "downward-api-1015" to be "Succeeded or Failed" +Jul 27 02:29:44.407: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.546987ms +Jul 27 02:29:46.416: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018449952s +Jul 27 02:29:48.417: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019578251s +Jul 27 02:29:50.417: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019493066s +Jul 27 02:29:52.430: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032425234s +Jul 27 02:29:54.416: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018256629s +Jul 27 02:29:56.417: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.01963837s +STEP: Saw pod success 07/27/23 02:29:56.417 +Jul 27 02:29:56.417: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56" satisfied condition "Succeeded or Failed" +Jul 27 02:29:56.431: INFO: Trying to get logs from node 10.245.128.17 pod downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56 container dapi-container: +STEP: delete the pod 07/27/23 02:29:56.468 +Jul 27 02:29:56.493: INFO: Waiting for pod downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56 to disappear +Jul 27 02:29:56.501: INFO: Pod downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56 no longer exists +[AfterEach] [sig-node] Downward API test/e2e/framework/node/init/init.go:32 -Jun 12 21:57:59.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +Jul 27 02:29:56.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-node] Downward API dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-node] Downward API tear down framework | framework.go:193 -STEP: Destroying namespace "gc-2530" for this suite. 06/12/23 21:57:59.641 +STEP: Destroying namespace "downward-api-1015" for this suite. 
07/27/23 02:29:56.516 ------------------------------ -• [SLOW TEST] [5.367 seconds] -[sig-api-machinery] Garbage collector -test/e2e/apimachinery/framework.go:23 - should not be blocked by dependency circle [Conformance] - test/e2e/apimachinery/garbage_collector.go:849 +• [SLOW TEST] [12.208 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Garbage collector + [BeforeEach] [sig-node] Downward API set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:54.356 - Jun 12 21:57:54.356: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename gc 06/12/23 21:57:54.359 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:54.436 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:54.451 - [BeforeEach] [sig-api-machinery] Garbage collector + STEP: Creating a kubernetes client 07/27/23 02:29:44.323 + Jul 27 02:29:44.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:29:44.324 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:29:44.356 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:29:44.365 + [BeforeEach] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:31 - [It] should not be blocked by dependency circle [Conformance] - test/e2e/apimachinery/garbage_collector.go:849 - Jun 12 21:57:54.544: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9d348764-a0fc-4c89-aad8-cde88401d43a", Controller:(*bool)(0xc009f41f32), BlockOwnerDeletion:(*bool)(0xc009f41f33)}} - Jun 12 21:57:54.561: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5ca87ce2-efc5-4642-9c2c-6677053ac7dd", Controller:(*bool)(0xc003571802), BlockOwnerDeletion:(*bool)(0xc003571803)}} - Jun 12 21:57:54.580: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6ca58aba-347b-4188-a909-55622f3dfc9b", Controller:(*bool)(0xc007ed8222), BlockOwnerDeletion:(*bool)(0xc007ed8223)}} - [AfterEach] [sig-api-machinery] Garbage collector + [It] should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 + STEP: Creating a pod to test downward api env vars 07/27/23 02:29:44.375 + W0727 02:29:44.397817 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "dapi-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "dapi-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "dapi-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "dapi-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:29:44.397: INFO: Waiting up to 5m0s for pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56" in namespace "downward-api-1015" to be "Succeeded or Failed" + Jul 27 02:29:44.407: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", 
readiness=false. Elapsed: 9.546987ms + Jul 27 02:29:46.416: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018449952s + Jul 27 02:29:48.417: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019578251s + Jul 27 02:29:50.417: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019493066s + Jul 27 02:29:52.430: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032425234s + Jul 27 02:29:54.416: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018256629s + Jul 27 02:29:56.417: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.01963837s + STEP: Saw pod success 07/27/23 02:29:56.417 + Jul 27 02:29:56.417: INFO: Pod "downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56" satisfied condition "Succeeded or Failed" + Jul 27 02:29:56.431: INFO: Trying to get logs from node 10.245.128.17 pod downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56 container dapi-container: + STEP: delete the pod 07/27/23 02:29:56.468 + Jul 27 02:29:56.493: INFO: Waiting for pod downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56 to disappear + Jul 27 02:29:56.501: INFO: Pod downward-api-6d11f3b1-39bc-4f71-ba70-d938e4abda56 no longer exists + [AfterEach] [sig-node] Downward API test/e2e/framework/node/init/init.go:32 - Jun 12 21:57:59.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + Jul 27 02:29:56.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-node] Downward API dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-node] Downward API tear down framework | framework.go:193 - STEP: Destroying namespace "gc-2530" for this suite. 06/12/23 21:57:59.641 + STEP: Destroying namespace "downward-api-1015" for this suite. 
07/27/23 02:29:56.516 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSS ------------------------------ -[sig-storage] EmptyDir volumes - should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:127 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:261 +[BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:57:59.724 -Jun 12 21:57:59.724: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 21:57:59.726 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:59.835 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:59.849 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 02:29:56.532 +Jul 27 02:29:56.532: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:29:56.533 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:29:56.566 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:29:56.573 +[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 -[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:127 -STEP: Creating a pod to test emptydir 0644 on tmpfs 06/12/23 21:57:59.862 -Jun 12 21:57:59.887: INFO: Waiting up to 5m0s for pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1" in namespace "emptydir-4052" to be "Succeeded or Failed" -Jun 12 21:57:59.903: INFO: Pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.416962ms -Jun 12 21:58:01.940: INFO: Pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053267337s -Jun 12 21:58:03.957: INFO: Pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070008684s -Jun 12 21:58:05.937: INFO: Pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.049567719s -STEP: Saw pod success 06/12/23 21:58:05.937 -Jun 12 21:58:05.937: INFO: Pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1" satisfied condition "Succeeded or Failed" -Jun 12 21:58:05.970: INFO: Trying to get logs from node 10.138.75.70 pod pod-c81353e9-f628-4efe-adf1-4bf8e97688a1 container test-container: -STEP: delete the pod 06/12/23 21:58:06.057 -Jun 12 21:58:06.160: INFO: Waiting for pod pod-c81353e9-f628-4efe-adf1-4bf8e97688a1 to disappear -Jun 12 21:58:06.179: INFO: Pod pod-c81353e9-f628-4efe-adf1-4bf8e97688a1 no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:261 +STEP: Creating a pod to test downward API volume plugin 07/27/23 02:29:56.58 +Jul 27 02:29:57.622: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c" in namespace "downward-api-1653" to be "Succeeded or Failed" +Jul 27 02:29:57.631: INFO: Pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.405647ms +Jul 27 02:29:59.649: INFO: Pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026832551s +Jul 27 02:30:01.641: INFO: Pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018986972s +Jul 27 02:30:03.649: INFO: Pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026355335s +STEP: Saw pod success 07/27/23 02:30:03.649 +Jul 27 02:30:03.649: INFO: Pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c" satisfied condition "Succeeded or Failed" +Jul 27 02:30:03.668: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c container client-container: +STEP: delete the pod 07/27/23 02:30:03.715 +Jul 27 02:30:03.742: INFO: Waiting for pod downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c to disappear +Jul 27 02:30:03.751: INFO: Pod downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c no longer exists +[AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 21:58:06.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 02:30:03.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-4052" for this suite. 06/12/23 21:58:06.199 +STEP: Destroying namespace "downward-api-1653" for this suite. 
07/27/23 02:30:03.775 ------------------------------ -• [SLOW TEST] [6.502 seconds] -[sig-storage] EmptyDir volumes +• [SLOW TEST] [7.258 seconds] +[sig-storage] Downward API volume test/e2e/common/storage/framework.go:23 - should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:127 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:261 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:57:59.724 - Jun 12 21:57:59.724: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 21:57:59.726 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:57:59.835 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:57:59.849 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 02:29:56.532 + Jul 27 02:29:56.532: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:29:56.533 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:29:56.566 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:29:56.573 + [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 - [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:127 - STEP: Creating a pod to test emptydir 0644 on tmpfs 06/12/23 21:57:59.862 - Jun 12 21:57:59.887: INFO: Waiting up to 5m0s for pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1" in namespace "emptydir-4052" to be "Succeeded or Failed" - Jun 12 21:57:59.903: INFO: Pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.416962ms - Jun 12 21:58:01.940: INFO: Pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053267337s - Jun 12 21:58:03.957: INFO: Pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070008684s - Jun 12 21:58:05.937: INFO: Pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.049567719s - STEP: Saw pod success 06/12/23 21:58:05.937 - Jun 12 21:58:05.937: INFO: Pod "pod-c81353e9-f628-4efe-adf1-4bf8e97688a1" satisfied condition "Succeeded or Failed" - Jun 12 21:58:05.970: INFO: Trying to get logs from node 10.138.75.70 pod pod-c81353e9-f628-4efe-adf1-4bf8e97688a1 container test-container: - STEP: delete the pod 06/12/23 21:58:06.057 - Jun 12 21:58:06.160: INFO: Waiting for pod pod-c81353e9-f628-4efe-adf1-4bf8e97688a1 to disappear - Jun 12 21:58:06.179: INFO: Pod pod-c81353e9-f628-4efe-adf1-4bf8e97688a1 no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:261 + STEP: Creating a pod to test downward API volume plugin 07/27/23 02:29:56.58 + Jul 27 02:29:57.622: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c" in namespace "downward-api-1653" to be "Succeeded or Failed" + Jul 27 02:29:57.631: INFO: Pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.405647ms + Jul 27 02:29:59.649: INFO: Pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026832551s + Jul 27 02:30:01.641: INFO: Pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018986972s + Jul 27 02:30:03.649: INFO: Pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026355335s + STEP: Saw pod success 07/27/23 02:30:03.649 + Jul 27 02:30:03.649: INFO: Pod "downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c" satisfied condition "Succeeded or Failed" + Jul 27 02:30:03.668: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c container client-container: + STEP: delete the pod 07/27/23 02:30:03.715 + Jul 27 02:30:03.742: INFO: Waiting for pod downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c to disappear + Jul 27 02:30:03.751: INFO: Pod downwardapi-volume-e4521605-cfeb-460e-9545-48c64a1f461c no longer exists + [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 - Jun 12 21:58:06.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 02:30:03.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-4052" for this suite. 06/12/23 21:58:06.199 + STEP: Destroying namespace "downward-api-1653" for this suite. 
07/27/23 02:30:03.775 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] test/e2e/apimachinery/crd_publish_openapi.go:153 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:58:06.235 -Jun 12 21:58:06.235: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 21:58:06.238 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:58:06.318 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:58:06.334 +STEP: Creating a kubernetes client 07/27/23 02:30:03.791 +Jul 27 02:30:03.791: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:30:03.792 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:03.824 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:03.876 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 [It] works for CRD without validation schema [Conformance] test/e2e/apimachinery/crd_publish_openapi.go:153 -Jun 12 21:58:06.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 06/12/23 21:58:13.432 -Jun 12 21:58:13.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-2318 --namespace=crd-publish-openapi-2318 create -f -' -Jun 12 21:58:16.135: INFO: stderr: "" -Jun 12 21:58:16.135: INFO: stdout: "e2e-test-crd-publish-openapi-9908-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" -Jun 12 21:58:16.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-2318 --namespace=crd-publish-openapi-2318 delete e2e-test-crd-publish-openapi-9908-crds test-cr' -Jun 12 21:58:16.855: INFO: stderr: "" -Jun 12 21:58:16.855: INFO: stdout: "e2e-test-crd-publish-openapi-9908-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" -Jun 12 21:58:16.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-2318 --namespace=crd-publish-openapi-2318 apply -f -' -Jun 12 21:58:20.683: INFO: stderr: "" -Jun 12 21:58:20.684: INFO: stdout: "e2e-test-crd-publish-openapi-9908-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" -Jun 12 21:58:20.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-2318 --namespace=crd-publish-openapi-2318 delete e2e-test-crd-publish-openapi-9908-crds test-cr' -Jun 12 21:58:20.825: INFO: stderr: "" -Jun 12 21:58:20.825: INFO: stdout: "e2e-test-crd-publish-openapi-9908-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" -STEP: kubectl explain works to explain CR without validation schema 06/12/23 21:58:20.825 -Jun 12 21:58:20.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-2318 explain e2e-test-crd-publish-openapi-9908-crds' -Jun 12 21:58:21.896: INFO: stderr: "" -Jun 12 
21:58:21.896: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9908-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +Jul 27 02:30:03.892: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 07/27/23 02:30:11.547 +Jul 27 02:30:11.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-194 --namespace=crd-publish-openapi-194 create -f -' +Jul 27 02:30:14.656: INFO: stderr: "" +Jul 27 02:30:14.656: INFO: stdout: "e2e-test-crd-publish-openapi-8911-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Jul 27 02:30:14.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-194 --namespace=crd-publish-openapi-194 delete e2e-test-crd-publish-openapi-8911-crds test-cr' +Jul 27 02:30:14.747: INFO: stderr: "" +Jul 27 02:30:14.747: INFO: stdout: "e2e-test-crd-publish-openapi-8911-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Jul 27 02:30:14.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-194 --namespace=crd-publish-openapi-194 apply -f -' +Jul 27 02:30:16.295: INFO: stderr: "" +Jul 27 02:30:16.295: INFO: stdout: "e2e-test-crd-publish-openapi-8911-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Jul 27 02:30:16.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-194 --namespace=crd-publish-openapi-194 delete e2e-test-crd-publish-openapi-8911-crds test-cr' +Jul 27 02:30:16.404: INFO: stderr: "" +Jul 27 02:30:16.404: INFO: stdout: "e2e-test-crd-publish-openapi-8911-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema 07/27/23 02:30:16.404 +Jul 27 02:30:16.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-194 explain e2e-test-crd-publish-openapi-8911-crds' +Jul 27 02:30:16.785: INFO: stderr: "" +Jul 27 02:30:16.785: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8911-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 21:58:31.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:30:24.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "crd-publish-openapi-2318" for this suite. 06/12/23 21:58:31.608 +STEP: Destroying namespace "crd-publish-openapi-194" for this suite. 
07/27/23 02:30:24.622 ------------------------------ -• [SLOW TEST] [25.389 seconds] +• [SLOW TEST] [20.860 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] @@ -28713,2341 +26298,3767 @@ test/e2e/apimachinery/framework.go:23 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:58:06.235 - Jun 12 21:58:06.235: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 21:58:06.238 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:58:06.318 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:58:06.334 + STEP: Creating a kubernetes client 07/27/23 02:30:03.791 + Jul 27 02:30:03.791: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:30:03.792 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:03.824 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:03.876 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 [It] works for CRD without validation schema [Conformance] test/e2e/apimachinery/crd_publish_openapi.go:153 - Jun 12 21:58:06.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 06/12/23 21:58:13.432 - Jun 12 21:58:13.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-2318 --namespace=crd-publish-openapi-2318 create -f -' - Jun 12 21:58:16.135: INFO: stderr: "" - Jun 12 21:58:16.135: INFO: stdout: "e2e-test-crd-publish-openapi-9908-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" - Jun 12 21:58:16.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-2318 --namespace=crd-publish-openapi-2318 delete e2e-test-crd-publish-openapi-9908-crds test-cr' - Jun 12 21:58:16.855: INFO: stderr: "" - Jun 12 21:58:16.855: INFO: stdout: "e2e-test-crd-publish-openapi-9908-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" - Jun 12 21:58:16.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-2318 --namespace=crd-publish-openapi-2318 apply -f -' - Jun 12 21:58:20.683: INFO: stderr: "" - Jun 12 21:58:20.684: INFO: stdout: "e2e-test-crd-publish-openapi-9908-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" - Jun 12 21:58:20.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-2318 --namespace=crd-publish-openapi-2318 delete e2e-test-crd-publish-openapi-9908-crds test-cr' - Jun 12 21:58:20.825: INFO: stderr: "" - Jun 12 21:58:20.825: INFO: stdout: "e2e-test-crd-publish-openapi-9908-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" - STEP: kubectl explain works to explain CR without validation schema 06/12/23 21:58:20.825 - Jun 12 21:58:20.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-2318 
explain e2e-test-crd-publish-openapi-9908-crds' - Jun 12 21:58:21.896: INFO: stderr: "" - Jun 12 21:58:21.896: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9908-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" + Jul 27 02:30:03.892: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 07/27/23 02:30:11.547 + Jul 27 02:30:11.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-194 --namespace=crd-publish-openapi-194 create -f -' + Jul 27 02:30:14.656: INFO: stderr: "" + Jul 27 02:30:14.656: INFO: stdout: "e2e-test-crd-publish-openapi-8911-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" + Jul 27 02:30:14.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-194 --namespace=crd-publish-openapi-194 delete e2e-test-crd-publish-openapi-8911-crds test-cr' + Jul 27 02:30:14.747: INFO: stderr: "" + Jul 27 02:30:14.747: INFO: stdout: "e2e-test-crd-publish-openapi-8911-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" + Jul 27 02:30:14.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-194 --namespace=crd-publish-openapi-194 apply -f -' + Jul 27 02:30:16.295: INFO: stderr: "" + Jul 27 02:30:16.295: INFO: stdout: "e2e-test-crd-publish-openapi-8911-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" + Jul 27 02:30:16.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-194 --namespace=crd-publish-openapi-194 delete e2e-test-crd-publish-openapi-8911-crds test-cr' + Jul 27 02:30:16.404: INFO: stderr: "" + Jul 27 02:30:16.404: INFO: stdout: "e2e-test-crd-publish-openapi-8911-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR without validation schema 07/27/23 02:30:16.404 + Jul 27 02:30:16.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-194 explain e2e-test-crd-publish-openapi-8911-crds' + Jul 27 02:30:16.785: INFO: stderr: "" + Jul 27 02:30:16.785: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8911-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 21:58:31.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:30:24.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "crd-publish-openapi-2318" for this suite. 06/12/23 21:58:31.608 + STEP: Destroying namespace "crd-publish-openapi-194" for this suite. 
07/27/23 02:30:24.622 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSS +SSSSSSSSS ------------------------------ -[sig-network] Services - should be able to change the type from ExternalName to ClusterIP [Conformance] - test/e2e/network/service.go:1438 -[BeforeEach] [sig-network] Services +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 +[BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:58:31.632 -Jun 12 21:58:31.632: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:58:31.635 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:58:31.714 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:58:31.731 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 02:30:24.651 +Jul 27 02:30:24.651: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename deployment 07/27/23 02:30:24.653 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:24.692 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:24.704 +[BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should be able to change the type from ExternalName to ClusterIP [Conformance] - test/e2e/network/service.go:1438 -STEP: creating a service externalname-service with the type=ExternalName in namespace services-5323 06/12/23 21:58:31.8 -STEP: changing the ExternalName service to type=ClusterIP 06/12/23 21:58:31.848 -STEP: creating replication controller externalname-service in namespace services-5323 06/12/23 21:58:31.987 -I0612 21:58:32.014143 23 runners.go:193] Created replication controller with name: externalname-service, namespace: services-5323, replica count: 2 -I0612 21:58:35.066070 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:58:38.067232 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -Jun 12 21:58:38.067: INFO: Creating new exec pod -Jun 12 21:58:38.093: INFO: Waiting up to 5m0s for pod "execpodsv8n7" in namespace "services-5323" to be "running" -Jun 12 21:58:38.171: INFO: Pod "execpodsv8n7": Phase="Pending", Reason="", readiness=false. Elapsed: 78.276347ms -Jun 12 21:58:40.215: INFO: Pod "execpodsv8n7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121472433s -Jun 12 21:58:42.184: INFO: Pod "execpodsv8n7": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.090772623s -Jun 12 21:58:42.184: INFO: Pod "execpodsv8n7" satisfied condition "running" -Jun 12 21:58:43.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5323 exec execpodsv8n7 -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' -Jun 12 21:58:43.875: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" -Jun 12 21:58:43.875: INFO: stdout: "" -Jun 12 21:58:43.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5323 exec execpodsv8n7 -- /bin/sh -x -c nc -v -z -w 2 172.21.195.17 80' -Jun 12 21:58:44.397: INFO: stderr: "+ nc -v -z -w 2 172.21.195.17 80\nConnection to 172.21.195.17 80 port [tcp/http] succeeded!\n" -Jun 12 21:58:44.397: INFO: stdout: "" -Jun 12 21:58:44.397: INFO: Cleaning up the ExternalName to ClusterIP test service -[AfterEach] [sig-network] Services +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 +W0727 02:30:24.733662 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:30:24.743: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Jul 27 02:30:29.755: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 07/27/23 02:30:29.755 +Jul 27 02:30:29.756: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 07/27/23 02:30:29.785 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jul 27 02:30:29.813: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-7432 059cca81-1b81-45c1-9bfe-38809ff0439f 107580 1 2023-07-27 02:30:29 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-07-27 02:30:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0034662a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + +Jul 27 02:30:29.861: INFO: New ReplicaSet "test-cleanup-deployment-7698ff6f6b" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:{test-cleanup-deployment-7698ff6f6b deployment-7432 8ee3300e-cf9b-41df-a827-cec773f50120 107582 1 2023-07-27 02:30:29 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 059cca81-1b81-45c1-9bfe-38809ff0439f 0xc0034668d7 0xc0034668d8}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:30:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"059cca81-1b81-45c1-9bfe-38809ff0439f\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 7698ff6f6b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003466968 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jul 27 02:30:29.861: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Jul 27 02:30:29.862: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7432 9de44b18-0318-406a-8272-68b70919e63c 107581 1 2023-07-27 02:30:24 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 059cca81-1b81-45c1-9bfe-38809ff0439f 0xc0034667a7 0xc0034667a8}] [] [{e2e.test Update apps/v1 2023-07-27 02:30:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:30:25 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-07-27 02:30:29 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"059cca81-1b81-45c1-9bfe-38809ff0439f\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003466868 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jul 27 02:30:29.915: INFO: Pod "test-cleanup-controller-bw6n7" is available: +&Pod{ObjectMeta:{test-cleanup-controller-bw6n7 test-cleanup-controller- deployment-7432 c9c60c2b-e59e-4bae-b0b8-b03a9ec13108 107556 0 2023-07-27 02:30:24 +0000 UTC map[name:cleanup-pod pod:httpd] map[cni.projectcalico.org/containerID:53224d8d91ec2a68264774368b98418029e8120aad4005d2973fe2fbcb2d8149 cni.projectcalico.org/podIP:172.17.225.48/32 cni.projectcalico.org/podIPs:172.17.225.48/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.48" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-cleanup-controller 9de44b18-0318-406a-8272-68b70919e63c 0xc003466dd7 0xc003466dd8}] [] [{kube-controller-manager Update v1 2023-07-27 02:30:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9de44b18-0318-406a-8272-68b70919e63c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:30:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-07-27 02:30:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-07-27 02:30:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xjzj4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjzj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,Se
curityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c58,c47,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:30:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:30:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:30:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:30:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.48,StartTime:2023-07-27 02:30:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:30:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://cf8f7431ef8d7d24cc3691a2da246dc4bf264ba1dcfb9740201ee67a0964bd00,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jul 27 02:30:29.915: INFO: Pod "test-cleanup-deployment-7698ff6f6b-cp2q4" is not available: +&Pod{ObjectMeta:{test-cleanup-deployment-7698ff6f6b-cp2q4 test-cleanup-deployment-7698ff6f6b- deployment-7432 
d5a2a1b7-e68e-4bcd-a001-a5b20495d0dd 107587 0 2023-07-27 02:30:29 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-cleanup-deployment-7698ff6f6b 8ee3300e-cf9b-41df-a827-cec773f50120 0xc003467057 0xc003467058}] [] [{kube-controller-manager Update v1 2023-07-27 02:30:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee3300e-cf9b-41df-a827-cec773f50120\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lqfnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lqfnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]str
ing{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c58,c47,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-r8xzb,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:30:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 -Jun 12 21:58:44.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 02:30:29.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 -STEP: Destroying namespace "services-5323" for this suite. 06/12/23 21:58:44.49 +STEP: Destroying namespace "deployment-7432" for this suite. 
07/27/23 02:30:29.97 ------------------------------ -• [SLOW TEST] [12.878 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should be able to change the type from ExternalName to ClusterIP [Conformance] - test/e2e/network/service.go:1438 +• [SLOW TEST] [5.373 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:58:31.632 - Jun 12 21:58:31.632: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:58:31.635 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:58:31.714 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:58:31.731 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 02:30:24.651 + Jul 27 02:30:24.651: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename deployment 07/27/23 02:30:24.653 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:24.692 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:24.704 + [BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should be able to change the type from ExternalName to ClusterIP [Conformance] - test/e2e/network/service.go:1438 - STEP: creating a service externalname-service with the type=ExternalName in namespace services-5323 06/12/23 21:58:31.8 - STEP: changing the ExternalName service to type=ClusterIP 06/12/23 21:58:31.848 - STEP: creating replication controller externalname-service in namespace services-5323 06/12/23 21:58:31.987 - I0612 21:58:32.014143 23 runners.go:193] Created replication controller with name: externalname-service, namespace: services-5323, replica count: 2 - I0612 21:58:35.066070 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:58:38.067232 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - Jun 12 21:58:38.067: INFO: Creating new exec pod - Jun 12 21:58:38.093: INFO: Waiting up to 5m0s for pod "execpodsv8n7" in namespace "services-5323" to be "running" - Jun 12 21:58:38.171: INFO: Pod "execpodsv8n7": Phase="Pending", Reason="", readiness=false. Elapsed: 78.276347ms - Jun 12 21:58:40.215: INFO: Pod "execpodsv8n7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121472433s - Jun 12 21:58:42.184: INFO: Pod "execpodsv8n7": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.090772623s - Jun 12 21:58:42.184: INFO: Pod "execpodsv8n7" satisfied condition "running" - Jun 12 21:58:43.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5323 exec execpodsv8n7 -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' - Jun 12 21:58:43.875: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" - Jun 12 21:58:43.875: INFO: stdout: "" - Jun 12 21:58:43.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5323 exec execpodsv8n7 -- /bin/sh -x -c nc -v -z -w 2 172.21.195.17 80' - Jun 12 21:58:44.397: INFO: stderr: "+ nc -v -z -w 2 172.21.195.17 80\nConnection to 172.21.195.17 80 port [tcp/http] succeeded!\n" - Jun 12 21:58:44.397: INFO: stdout: "" - Jun 12 21:58:44.397: INFO: Cleaning up the ExternalName to ClusterIP test service - [AfterEach] [sig-network] Services + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 + W0727 02:30:24.733662 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:30:24.743: INFO: Pod name cleanup-pod: Found 0 pods out of 1 + Jul 27 02:30:29.755: INFO: Pod name cleanup-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 07/27/23 02:30:29.755 + Jul 27 02:30:29.756: INFO: Creating deployment test-cleanup-deployment + STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 07/27/23 02:30:29.785 + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jul 27 02:30:29.813: INFO: Deployment "test-cleanup-deployment": + &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7432 059cca81-1b81-45c1-9bfe-38809ff0439f 107580 1 2023-07-27 02:30:29 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-07-27 02:30:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0034662a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + + Jul 27 02:30:29.861: INFO: New ReplicaSet "test-cleanup-deployment-7698ff6f6b" of Deployment "test-cleanup-deployment": + &ReplicaSet{ObjectMeta:{test-cleanup-deployment-7698ff6f6b deployment-7432 8ee3300e-cf9b-41df-a827-cec773f50120 107582 1 2023-07-27 02:30:29 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 059cca81-1b81-45c1-9bfe-38809ff0439f 0xc0034668d7 0xc0034668d8}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:30:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"059cca81-1b81-45c1-9bfe-38809ff0439f\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 7698ff6f6b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003466968 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jul 27 02:30:29.861: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": + Jul 27 02:30:29.862: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7432 9de44b18-0318-406a-8272-68b70919e63c 107581 1 2023-07-27 02:30:24 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 059cca81-1b81-45c1-9bfe-38809ff0439f 0xc0034667a7 0xc0034667a8}] [] [{e2e.test Update apps/v1 2023-07-27 02:30:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:30:25 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-07-27 02:30:29 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"059cca81-1b81-45c1-9bfe-38809ff0439f\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003466868 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Jul 27 02:30:29.915: INFO: Pod "test-cleanup-controller-bw6n7" is available: + &Pod{ObjectMeta:{test-cleanup-controller-bw6n7 test-cleanup-controller- deployment-7432 c9c60c2b-e59e-4bae-b0b8-b03a9ec13108 107556 0 2023-07-27 02:30:24 +0000 UTC map[name:cleanup-pod pod:httpd] map[cni.projectcalico.org/containerID:53224d8d91ec2a68264774368b98418029e8120aad4005d2973fe2fbcb2d8149 cni.projectcalico.org/podIP:172.17.225.48/32 cni.projectcalico.org/podIPs:172.17.225.48/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.48" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-cleanup-controller 9de44b18-0318-406a-8272-68b70919e63c 0xc003466dd7 0xc003466dd8}] [] [{kube-controller-manager Update v1 2023-07-27 02:30:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9de44b18-0318-406a-8272-68b70919e63c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-07-27 02:30:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-07-27 02:30:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-07-27 02:30:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xjzj4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjzj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,Se
curityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c58,c47,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:30:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:30:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:30:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:30:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.48,StartTime:2023-07-27 02:30:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:30:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://cf8f7431ef8d7d24cc3691a2da246dc4bf264ba1dcfb9740201ee67a0964bd00,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jul 27 02:30:29.915: INFO: Pod "test-cleanup-deployment-7698ff6f6b-cp2q4" is not available: + &Pod{ObjectMeta:{test-cleanup-deployment-7698ff6f6b-cp2q4 test-cleanup-deployment-7698ff6f6b- deployment-7432 
d5a2a1b7-e68e-4bcd-a001-a5b20495d0dd 107587 0 2023-07-27 02:30:29 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-cleanup-deployment-7698ff6f6b 8ee3300e-cf9b-41df-a827-cec773f50120 0xc003467057 0xc003467058}] [] [{kube-controller-manager Update v1 2023-07-27 02:30:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee3300e-cf9b-41df-a827-cec773f50120\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lqfnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lqfnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]str
ing{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c58,c47,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-r8xzb,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:30:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 - Jun 12 21:58:44.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 02:30:29.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 - STEP: Destroying namespace "services-5323" for this suite. 06/12/23 21:58:44.49 + STEP: Destroying namespace "deployment-7432" for this suite. 
07/27/23 02:30:29.97 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook - should execute prestop exec hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:151 -[BeforeEach] [sig-node] Container Lifecycle Hook +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 +[BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:58:44.516 -Jun 12 21:58:44.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-lifecycle-hook 06/12/23 21:58:44.518 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:58:44.593 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:58:44.614 -[BeforeEach] [sig-node] Container Lifecycle Hook +STEP: Creating a kubernetes client 07/27/23 02:30:30.027 +Jul 27 02:30:30.027: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replicaset 07/27/23 02:30:30.028 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:30.095 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:30.125 +[BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:77 -STEP: create the container to handle the HTTPGet hook request. 06/12/23 21:58:44.666 -Jun 12 21:58:44.698: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-1304" to be "running and ready" -Jun 12 21:58:44.772: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 73.055272ms -Jun 12 21:58:44.772: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:58:46.785: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085832371s -Jun 12 21:58:46.785: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:58:48.848: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 4.149310215s -Jun 12 21:58:48.848: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) -Jun 12 21:58:48.848: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" -[It] should execute prestop exec hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:151 -STEP: create the pod with lifecycle hook 06/12/23 21:58:48.87 -Jun 12 21:58:48.915: INFO: Waiting up to 5m0s for pod "pod-with-prestop-exec-hook" in namespace "container-lifecycle-hook-1304" to be "running and ready" -Jun 12 21:58:48.975: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 59.705167ms -Jun 12 21:58:48.975: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:58:50.995: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.080148292s -Jun 12 21:58:51.009: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) -Jun 12 21:58:52.987: INFO: Pod "pod-with-prestop-exec-hook": Phase="Running", Reason="", readiness=true. Elapsed: 4.071479671s -Jun 12 21:58:52.987: INFO: The phase of Pod pod-with-prestop-exec-hook is Running (Ready = true) -Jun 12 21:58:52.987: INFO: Pod "pod-with-prestop-exec-hook" satisfied condition "running and ready" -STEP: delete the pod with lifecycle hook 06/12/23 21:58:52.999 -Jun 12 21:58:53.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear -Jun 12 21:58:53.035: INFO: Pod pod-with-prestop-exec-hook still exists -Jun 12 21:58:55.035: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear -Jun 12 21:58:55.049: INFO: Pod pod-with-prestop-exec-hook still exists -Jun 12 21:58:57.046: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear -Jun 12 21:58:57.091: INFO: Pod pod-with-prestop-exec-hook no longer exists -STEP: check prestop hook 06/12/23 21:58:57.091 -[AfterEach] [sig-node] Container Lifecycle Hook +[It] should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 +STEP: Create a ReplicaSet 07/27/23 02:30:30.139 +W0727 02:30:31.157234 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Verify that the required pods have come up 07/27/23 02:30:31.157 +Jul 27 02:30:31.168: INFO: Pod name sample-pod: Found 0 pods out of 3 +Jul 27 02:30:36.179: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running 07/27/23 02:30:36.179 +Jul 27 02:30:36.189: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets 07/27/23 02:30:36.189 +STEP: DeleteCollection of the ReplicaSets 07/27/23 02:30:36.208 +STEP: After DeleteCollection verify that ReplicaSets have been deleted 07/27/23 02:30:36.227 +[AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 -Jun 12 21:58:57.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +Jul 27 02:30:36.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +[DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +[DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 -STEP: Destroying namespace "container-lifecycle-hook-1304" for this suite. 06/12/23 21:58:57.208 +STEP: Destroying namespace "replicaset-1432" for this suite. 
07/27/23 02:30:36.312 ------------------------------ -• [SLOW TEST] [12.715 seconds] -[sig-node] Container Lifecycle Hook -test/e2e/common/node/framework.go:23 - when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:46 - should execute prestop exec hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:151 +• [SLOW TEST] [6.309 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Container Lifecycle Hook + [BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:58:44.516 - Jun 12 21:58:44.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-lifecycle-hook 06/12/23 21:58:44.518 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:58:44.593 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:58:44.614 - [BeforeEach] [sig-node] Container Lifecycle Hook + STEP: Creating a kubernetes client 07/27/23 02:30:30.027 + Jul 27 02:30:30.027: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replicaset 07/27/23 02:30:30.028 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:30.095 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:30.125 + [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:77 - STEP: create the container to handle the HTTPGet hook request. 06/12/23 21:58:44.666 - Jun 12 21:58:44.698: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-1304" to be "running and ready" - Jun 12 21:58:44.772: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 73.055272ms - Jun 12 21:58:44.772: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:58:46.785: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085832371s - Jun 12 21:58:46.785: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:58:48.848: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 4.149310215s - Jun 12 21:58:48.848: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) - Jun 12 21:58:48.848: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" - [It] should execute prestop exec hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:151 - STEP: create the pod with lifecycle hook 06/12/23 21:58:48.87 - Jun 12 21:58:48.915: INFO: Waiting up to 5m0s for pod "pod-with-prestop-exec-hook" in namespace "container-lifecycle-hook-1304" to be "running and ready" - Jun 12 21:58:48.975: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 59.705167ms - Jun 12 21:58:48.975: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:58:50.995: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.080148292s - Jun 12 21:58:51.009: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) - Jun 12 21:58:52.987: INFO: Pod "pod-with-prestop-exec-hook": Phase="Running", Reason="", readiness=true. Elapsed: 4.071479671s - Jun 12 21:58:52.987: INFO: The phase of Pod pod-with-prestop-exec-hook is Running (Ready = true) - Jun 12 21:58:52.987: INFO: Pod "pod-with-prestop-exec-hook" satisfied condition "running and ready" - STEP: delete the pod with lifecycle hook 06/12/23 21:58:52.999 - Jun 12 21:58:53.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear - Jun 12 21:58:53.035: INFO: Pod pod-with-prestop-exec-hook still exists - Jun 12 21:58:55.035: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear - Jun 12 21:58:55.049: INFO: Pod pod-with-prestop-exec-hook still exists - Jun 12 21:58:57.046: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear - Jun 12 21:58:57.091: INFO: Pod pod-with-prestop-exec-hook no longer exists - STEP: check prestop hook 06/12/23 21:58:57.091 - [AfterEach] [sig-node] Container Lifecycle Hook + [It] should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 + STEP: Create a ReplicaSet 07/27/23 02:30:30.139 + W0727 02:30:31.157234 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Verify that the required pods have come up 07/27/23 02:30:31.157 + Jul 27 02:30:31.168: INFO: Pod name sample-pod: Found 0 pods out of 3 + Jul 27 02:30:36.179: INFO: Pod name sample-pod: Found 3 pods out of 3 + STEP: ensuring each pod is running 07/27/23 02:30:36.179 + Jul 27 02:30:36.189: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} + STEP: Listing all ReplicaSets 07/27/23 02:30:36.189 + STEP: DeleteCollection of the ReplicaSets 07/27/23 02:30:36.208 + STEP: After DeleteCollection verify that ReplicaSets have been deleted 07/27/23 02:30:36.227 + [AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 - Jun 12 21:58:57.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + Jul 27 02:30:36.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + [DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + [DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 - STEP: Destroying namespace "container-lifecycle-hook-1304" for this suite. 06/12/23 21:58:57.208 + STEP: Destroying namespace "replicaset-1432" for this suite. 
07/27/23 02:30:36.312 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Kubelet when scheduling a busybox command that always fails in a pod - should have an terminated reason [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:110 -[BeforeEach] [sig-node] Kubelet +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:58:57.236 -Jun 12 21:58:57.236: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubelet-test 06/12/23 21:58:57.238 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:58:57.326 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:58:57.346 -[BeforeEach] [sig-node] Kubelet +STEP: Creating a kubernetes client 07/27/23 02:30:36.341 +Jul 27 02:30:36.341: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:30:36.342 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:36.381 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:36.393 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Kubelet - test/e2e/common/node/kubelet.go:41 -[BeforeEach] when scheduling a busybox command that always fails in a pod - test/e2e/common/node/kubelet.go:85 -[It] should have an terminated reason [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:110 -[AfterEach] [sig-node] Kubelet +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 +STEP: Creating a pod to test emptydir 0777 on node default medium 07/27/23 02:30:36.407 +Jul 27 02:30:36.447: INFO: Waiting up to 5m0s for pod "pod-58e01007-a8cf-4012-a91a-811ecfb00083" in namespace "emptydir-4589" to be "Succeeded or Failed" +Jul 27 02:30:36.460: INFO: Pod "pod-58e01007-a8cf-4012-a91a-811ecfb00083": Phase="Pending", Reason="", readiness=false. Elapsed: 12.700441ms +Jul 27 02:30:38.478: INFO: Pod "pod-58e01007-a8cf-4012-a91a-811ecfb00083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03117538s +Jul 27 02:30:40.471: INFO: Pod "pod-58e01007-a8cf-4012-a91a-811ecfb00083": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023607328s +STEP: Saw pod success 07/27/23 02:30:40.471 +Jul 27 02:30:40.471: INFO: Pod "pod-58e01007-a8cf-4012-a91a-811ecfb00083" satisfied condition "Succeeded or Failed" +Jul 27 02:30:40.486: INFO: Trying to get logs from node 10.245.128.19 pod pod-58e01007-a8cf-4012-a91a-811ecfb00083 container test-container: +STEP: delete the pod 07/27/23 02:30:40.531 +Jul 27 02:30:40.564: INFO: Waiting for pod pod-58e01007-a8cf-4012-a91a-811ecfb00083 to disappear +Jul 27 02:30:40.579: INFO: Pod pod-58e01007-a8cf-4012-a91a-811ecfb00083 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 21:59:05.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Kubelet +Jul 27 02:30:40.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Kubelet +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Kubelet +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "kubelet-test-9847" for this suite. 06/12/23 21:59:05.521 +STEP: Destroying namespace "emptydir-4589" for this suite. 07/27/23 02:30:40.616 ------------------------------ -• [SLOW TEST] [8.306 seconds] -[sig-node] Kubelet -test/e2e/common/node/framework.go:23 - when scheduling a busybox command that always fails in a pod - test/e2e/common/node/kubelet.go:82 - should have an terminated reason [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:110 +• [4.300 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Kubelet + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:58:57.236 - Jun 12 21:58:57.236: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubelet-test 06/12/23 21:58:57.238 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:58:57.326 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:58:57.346 - [BeforeEach] [sig-node] Kubelet + STEP: Creating a kubernetes client 07/27/23 02:30:36.341 + Jul 27 02:30:36.341: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:30:36.342 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:36.381 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:36.393 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Kubelet - test/e2e/common/node/kubelet.go:41 - [BeforeEach] when scheduling a busybox command that always fails in a pod - test/e2e/common/node/kubelet.go:85 - [It] should have an terminated reason [NodeConformance] [Conformance] - test/e2e/common/node/kubelet.go:110 - [AfterEach] [sig-node] Kubelet + [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 + STEP: Creating a pod to test emptydir 0777 on node default medium 07/27/23 02:30:36.407 
+ Jul 27 02:30:36.447: INFO: Waiting up to 5m0s for pod "pod-58e01007-a8cf-4012-a91a-811ecfb00083" in namespace "emptydir-4589" to be "Succeeded or Failed" + Jul 27 02:30:36.460: INFO: Pod "pod-58e01007-a8cf-4012-a91a-811ecfb00083": Phase="Pending", Reason="", readiness=false. Elapsed: 12.700441ms + Jul 27 02:30:38.478: INFO: Pod "pod-58e01007-a8cf-4012-a91a-811ecfb00083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03117538s + Jul 27 02:30:40.471: INFO: Pod "pod-58e01007-a8cf-4012-a91a-811ecfb00083": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023607328s + STEP: Saw pod success 07/27/23 02:30:40.471 + Jul 27 02:30:40.471: INFO: Pod "pod-58e01007-a8cf-4012-a91a-811ecfb00083" satisfied condition "Succeeded or Failed" + Jul 27 02:30:40.486: INFO: Trying to get logs from node 10.245.128.19 pod pod-58e01007-a8cf-4012-a91a-811ecfb00083 container test-container: + STEP: delete the pod 07/27/23 02:30:40.531 + Jul 27 02:30:40.564: INFO: Waiting for pod pod-58e01007-a8cf-4012-a91a-811ecfb00083 to disappear + Jul 27 02:30:40.579: INFO: Pod pod-58e01007-a8cf-4012-a91a-811ecfb00083 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 21:59:05.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Kubelet + Jul 27 02:30:40.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Kubelet + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Kubelet + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "kubelet-test-9847" for this suite. 06/12/23 21:59:05.521 + STEP: Destroying namespace "emptydir-4589" for this suite. 
07/27/23 02:30:40.616 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSS +SSSSS ------------------------------ -[sig-storage] ConfigMap - binary data should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:175 -[BeforeEach] [sig-storage] ConfigMap +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 +[BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:59:05.543 -Jun 12 21:59:05.543: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 21:59:05.544 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:59:05.59 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:59:05.602 -[BeforeEach] [sig-storage] ConfigMap +STEP: Creating a kubernetes client 07/27/23 02:30:40.641 +Jul 27 02:30:40.641: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename gc 07/27/23 02:30:40.642 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:40.684 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:40.697 +[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 -[It] binary data should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:175 -Jun 12 21:59:05.639: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node -STEP: Creating configMap with name configmap-test-upd-bedc5132-ecca-4946-9bb3-88c7c03a8732 06/12/23 21:59:05.639 -STEP: Creating the pod 06/12/23 21:59:05.664 -Jun 12 21:59:05.694: INFO: Waiting up to 5m0s for pod "pod-configmaps-309c1ddc-f095-417b-a8b5-18de3a7bdb8c" in namespace "configmap-388" to be "running" -Jun 12 21:59:05.707: INFO: Pod "pod-configmaps-309c1ddc-f095-417b-a8b5-18de3a7bdb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.710742ms -Jun 12 21:59:07.719: INFO: Pod "pod-configmaps-309c1ddc-f095-417b-a8b5-18de3a7bdb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022846627s -Jun 12 21:59:09.721: INFO: Pod "pod-configmaps-309c1ddc-f095-417b-a8b5-18de3a7bdb8c": Phase="Running", Reason="", readiness=false. Elapsed: 4.024430085s -Jun 12 21:59:09.722: INFO: Pod "pod-configmaps-309c1ddc-f095-417b-a8b5-18de3a7bdb8c" satisfied condition "running" -STEP: Waiting for pod with text data 06/12/23 21:59:09.722 -STEP: Waiting for pod with binary data 06/12/23 21:59:09.76 -[AfterEach] [sig-storage] ConfigMap +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 +STEP: create the rc 07/27/23 02:30:40.729 +STEP: delete the rc 07/27/23 02:30:45.791 +STEP: wait for the rc to be deleted 07/27/23 02:30:45.837 +STEP: Gathering metrics 07/27/23 02:30:46.863 +W0727 02:30:46.890343 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+Jul 27 02:30:46.890: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 -Jun 12 21:59:09.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] ConfigMap +Jul 27 02:30:46.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-388" for this suite. 06/12/23 21:59:09.815 +STEP: Destroying namespace "gc-5464" for this suite. 07/27/23 02:30:46.911 ------------------------------ -• [4.293 seconds] -[sig-storage] ConfigMap -test/e2e/common/storage/framework.go:23 - binary data should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:175 +• [SLOW TEST] [6.297 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] ConfigMap + [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:59:05.543 - Jun 12 21:59:05.543: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 21:59:05.544 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:59:05.59 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:59:05.602 - [BeforeEach] [sig-storage] ConfigMap + STEP: Creating a kubernetes client 07/27/23 02:30:40.641 + Jul 27 02:30:40.641: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename gc 07/27/23 02:30:40.642 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:40.684 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:40.697 + [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 - [It] binary data should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:175 - Jun 12 21:59:05.639: 
INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node - STEP: Creating configMap with name configmap-test-upd-bedc5132-ecca-4946-9bb3-88c7c03a8732 06/12/23 21:59:05.639 - STEP: Creating the pod 06/12/23 21:59:05.664 - Jun 12 21:59:05.694: INFO: Waiting up to 5m0s for pod "pod-configmaps-309c1ddc-f095-417b-a8b5-18de3a7bdb8c" in namespace "configmap-388" to be "running" - Jun 12 21:59:05.707: INFO: Pod "pod-configmaps-309c1ddc-f095-417b-a8b5-18de3a7bdb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.710742ms - Jun 12 21:59:07.719: INFO: Pod "pod-configmaps-309c1ddc-f095-417b-a8b5-18de3a7bdb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022846627s - Jun 12 21:59:09.721: INFO: Pod "pod-configmaps-309c1ddc-f095-417b-a8b5-18de3a7bdb8c": Phase="Running", Reason="", readiness=false. Elapsed: 4.024430085s - Jun 12 21:59:09.722: INFO: Pod "pod-configmaps-309c1ddc-f095-417b-a8b5-18de3a7bdb8c" satisfied condition "running" - STEP: Waiting for pod with text data 06/12/23 21:59:09.722 - STEP: Waiting for pod with binary data 06/12/23 21:59:09.76 - [AfterEach] [sig-storage] ConfigMap - test/e2e/framework/node/init/init.go:32 - Jun 12 21:59:09.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] ConfigMap - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] ConfigMap - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] ConfigMap - tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-388" for this suite. 06/12/23 21:59:09.815 - << End Captured GinkgoWriter Output ------------------------------- -SSSSS ------------------------------- -[sig-network] Services - should be able to create a functioning NodePort service [Conformance] - test/e2e/network/service.go:1302 -[BeforeEach] [sig-network] Services - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:59:09.843 -Jun 12 21:59:09.843: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 21:59:09.847 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:59:09.901 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:59:09.915 -[BeforeEach] [sig-network] Services - test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should be able to create a functioning NodePort service [Conformance] - test/e2e/network/service.go:1302 -STEP: creating service nodeport-test with type=NodePort in namespace services-5335 06/12/23 21:59:09.936 -STEP: creating replication controller nodeport-test in namespace services-5335 06/12/23 21:59:10.006 -I0612 21:59:10.037611 23 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-5335, replica count: 2 -I0612 21:59:13.446495 23 runners.go:193] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 21:59:16.447247 23 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -Jun 12 21:59:16.447: INFO: Creating new exec pod -Jun 12 21:59:16.520: INFO: Waiting up to 5m0s for pod "execpodrw92l" in namespace "services-5335" to be "running" -Jun 12 21:59:16.538: INFO: Pod "execpodrw92l": Phase="Pending", Reason="", 
readiness=false. Elapsed: 17.428107ms -Jun 12 21:59:18.550: INFO: Pod "execpodrw92l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029354556s -Jun 12 21:59:20.550: INFO: Pod "execpodrw92l": Phase="Running", Reason="", readiness=true. Elapsed: 4.029547366s -Jun 12 21:59:20.550: INFO: Pod "execpodrw92l" satisfied condition "running" -Jun 12 21:59:21.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5335 exec execpodrw92l -- /bin/sh -x -c nc -v -z -w 2 nodeport-test 80' -Jun 12 21:59:22.433: INFO: stderr: "+ nc -v -z -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" -Jun 12 21:59:22.434: INFO: stdout: "" -Jun 12 21:59:22.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5335 exec execpodrw92l -- /bin/sh -x -c nc -v -z -w 2 172.21.195.126 80' -Jun 12 21:59:23.592: INFO: stderr: "+ nc -v -z -w 2 172.21.195.126 80\nConnection to 172.21.195.126 80 port [tcp/http] succeeded!\n" -Jun 12 21:59:23.592: INFO: stdout: "" -Jun 12 21:59:23.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5335 exec execpodrw92l -- /bin/sh -x -c nc -v -z -w 2 10.138.75.70 32640' -Jun 12 21:59:23.996: INFO: stderr: "+ nc -v -z -w 2 10.138.75.70 32640\nConnection to 10.138.75.70 32640 port [tcp/*] succeeded!\n" -Jun 12 21:59:23.996: INFO: stdout: "" -Jun 12 21:59:23.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5335 exec execpodrw92l -- /bin/sh -x -c nc -v -z -w 2 10.138.75.112 32640' -Jun 12 21:59:24.634: INFO: stderr: "+ nc -v -z -w 2 10.138.75.112 32640\nConnection to 10.138.75.112 32640 port [tcp/*] succeeded!\n" -Jun 12 21:59:24.634: INFO: stdout: "" -[AfterEach] [sig-network] Services + [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 + STEP: create the rc 07/27/23 02:30:40.729 + STEP: delete the rc 07/27/23 02:30:45.791 + STEP: wait for the rc to be deleted 07/27/23 02:30:45.837 + STEP: Gathering metrics 07/27/23 02:30:46.863 + W0727 02:30:46.890343 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+ Jul 27 02:30:46.890: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Jul 27 02:30:46.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-5464" for this suite. 07/27/23 02:30:46.911 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:75 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:30:46.938 +Jul 27 02:30:46.938: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 02:30:46.939 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:46.988 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:47 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:75 +STEP: Counting existing ResourceQuota 07/27/23 02:30:47.013 +STEP: Creating a ResourceQuota 07/27/23 02:30:52.042 +STEP: Ensuring resource quota status is calculated 07/27/23 02:30:52.078 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 21:59:24.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 02:30:54.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "services-5335" for this suite. 06/12/23 21:59:24.654 +STEP: Destroying namespace "resourcequota-402" for this suite. 
07/27/23 02:30:54.108 ------------------------------ -• [SLOW TEST] [14.827 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should be able to create a functioning NodePort service [Conformance] - test/e2e/network/service.go:1302 +• [SLOW TEST] [7.199 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:75 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:59:09.843 - Jun 12 21:59:09.843: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 21:59:09.847 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:59:09.901 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:59:09.915 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 02:30:46.938 + Jul 27 02:30:46.938: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 02:30:46.939 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:46.988 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:47 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should be able to create a functioning NodePort service [Conformance] - test/e2e/network/service.go:1302 - STEP: creating service nodeport-test with type=NodePort in namespace services-5335 06/12/23 21:59:09.936 - STEP: creating replication controller nodeport-test in namespace services-5335 06/12/23 21:59:10.006 - I0612 21:59:10.037611 23 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-5335, replica count: 2 - I0612 21:59:13.446495 23 runners.go:193] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 21:59:16.447247 23 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - Jun 12 21:59:16.447: INFO: Creating new exec pod - Jun 12 21:59:16.520: INFO: Waiting up to 5m0s for pod "execpodrw92l" in namespace "services-5335" to be "running" - Jun 12 21:59:16.538: INFO: Pod "execpodrw92l": Phase="Pending", Reason="", readiness=false. Elapsed: 17.428107ms - Jun 12 21:59:18.550: INFO: Pod "execpodrw92l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029354556s - Jun 12 21:59:20.550: INFO: Pod "execpodrw92l": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.029547366s - Jun 12 21:59:20.550: INFO: Pod "execpodrw92l" satisfied condition "running" - Jun 12 21:59:21.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5335 exec execpodrw92l -- /bin/sh -x -c nc -v -z -w 2 nodeport-test 80' - Jun 12 21:59:22.433: INFO: stderr: "+ nc -v -z -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" - Jun 12 21:59:22.434: INFO: stdout: "" - Jun 12 21:59:22.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5335 exec execpodrw92l -- /bin/sh -x -c nc -v -z -w 2 172.21.195.126 80' - Jun 12 21:59:23.592: INFO: stderr: "+ nc -v -z -w 2 172.21.195.126 80\nConnection to 172.21.195.126 80 port [tcp/http] succeeded!\n" - Jun 12 21:59:23.592: INFO: stdout: "" - Jun 12 21:59:23.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5335 exec execpodrw92l -- /bin/sh -x -c nc -v -z -w 2 10.138.75.70 32640' - Jun 12 21:59:23.996: INFO: stderr: "+ nc -v -z -w 2 10.138.75.70 32640\nConnection to 10.138.75.70 32640 port [tcp/*] succeeded!\n" - Jun 12 21:59:23.996: INFO: stdout: "" - Jun 12 21:59:23.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-5335 exec execpodrw92l -- /bin/sh -x -c nc -v -z -w 2 10.138.75.112 32640' - Jun 12 21:59:24.634: INFO: stderr: "+ nc -v -z -w 2 10.138.75.112 32640\nConnection to 10.138.75.112 32640 port [tcp/*] succeeded!\n" - Jun 12 21:59:24.634: INFO: stdout: "" - [AfterEach] [sig-network] Services + [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:75 + STEP: Counting existing ResourceQuota 07/27/23 02:30:47.013 + STEP: Creating a ResourceQuota 07/27/23 02:30:52.042 + STEP: Ensuring resource quota status is calculated 07/27/23 02:30:52.078 + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 21:59:24.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 02:30:54.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "services-5335" for this suite. 06/12/23 21:59:24.654 + STEP: Destroying namespace "resourcequota-402" for this suite. 
07/27/23 02:30:54.108 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-node] Downward API - should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:166 -[BeforeEach] [sig-node] Downward API +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:248 +[BeforeEach] [sig-node] Container Runtime set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:59:24.676 -Jun 12 21:59:24.676: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 21:59:24.678 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:59:24.73 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:59:24.745 -[BeforeEach] [sig-node] Downward API +STEP: Creating a kubernetes client 07/27/23 02:30:54.138 +Jul 27 02:30:54.138: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-runtime 07/27/23 02:30:54.139 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:54.201 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:54.218 +[BeforeEach] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:31 -[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:166 -STEP: Creating a pod to test downward api env vars 06/12/23 21:59:24.766 -Jun 12 21:59:24.801: INFO: Waiting up to 5m0s for pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc" in namespace "downward-api-1388" to be "Succeeded or Failed" -Jun 12 21:59:24.822: INFO: Pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.00315ms -Jun 12 21:59:26.835: INFO: Pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034273025s -Jun 12 21:59:28.833: INFO: Pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032614457s -Jun 12 21:59:30.861: INFO: Pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.060270036s -STEP: Saw pod success 06/12/23 21:59:30.861 -Jun 12 21:59:30.862: INFO: Pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc" satisfied condition "Succeeded or Failed" -Jun 12 21:59:30.883: INFO: Trying to get logs from node 10.138.75.70 pod downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc container dapi-container: -STEP: delete the pod 06/12/23 21:59:30.911 -Jun 12 21:59:30.945: INFO: Waiting for pod downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc to disappear -Jun 12 21:59:30.959: INFO: Pod downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc no longer exists -[AfterEach] [sig-node] Downward API +[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:248 +STEP: create the container 07/27/23 02:30:54.236 +W0727 02:30:54.274485 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "termination-message-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "termination-message-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "termination-message-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "termination-message-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: wait for the container to reach Succeeded 07/27/23 02:30:54.274 +STEP: get the container status 07/27/23 02:31:01.379 +STEP: the container should be terminated 07/27/23 02:31:01.39 +STEP: the termination message should be set 07/27/23 02:31:01.39 +Jul 27 02:31:01.390: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container 07/27/23 02:31:01.39 +[AfterEach] [sig-node] Container Runtime test/e2e/framework/node/init/init.go:32 -Jun 12 21:59:30.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Downward API +Jul 27 02:31:01.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Downward API +[DeferCleanup (Each)] [sig-node] Container Runtime dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Downward API +[DeferCleanup (Each)] [sig-node] Container Runtime tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-1388" for this suite. 06/12/23 21:59:30.979 +STEP: Destroying namespace "container-runtime-4893" for this suite. 
07/27/23 02:31:01.447 ------------------------------ -• [SLOW TEST] [6.320 seconds] -[sig-node] Downward API +• [SLOW TEST] [7.334 seconds] +[sig-node] Container Runtime test/e2e/common/node/framework.go:23 - should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:166 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:248 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Downward API + [BeforeEach] [sig-node] Container Runtime set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:59:24.676 - Jun 12 21:59:24.676: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 21:59:24.678 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:59:24.73 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:59:24.745 - [BeforeEach] [sig-node] Downward API + STEP: Creating a kubernetes client 07/27/23 02:30:54.138 + Jul 27 02:30:54.138: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-runtime 07/27/23 02:30:54.139 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:30:54.201 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:30:54.218 + [BeforeEach] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:31 - [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:166 - STEP: Creating a pod to test downward api env vars 06/12/23 21:59:24.766 - Jun 12 21:59:24.801: INFO: Waiting up to 5m0s for pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc" in namespace "downward-api-1388" to be "Succeeded or Failed" - Jun 12 21:59:24.822: INFO: Pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.00315ms - Jun 12 21:59:26.835: INFO: Pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034273025s - Jun 12 21:59:28.833: INFO: Pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032614457s - Jun 12 21:59:30.861: INFO: Pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.060270036s - STEP: Saw pod success 06/12/23 21:59:30.861 - Jun 12 21:59:30.862: INFO: Pod "downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc" satisfied condition "Succeeded or Failed" - Jun 12 21:59:30.883: INFO: Trying to get logs from node 10.138.75.70 pod downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc container dapi-container: - STEP: delete the pod 06/12/23 21:59:30.911 - Jun 12 21:59:30.945: INFO: Waiting for pod downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc to disappear - Jun 12 21:59:30.959: INFO: Pod downward-api-17355c6d-7627-4655-89a2-dbca1a6527cc no longer exists - [AfterEach] [sig-node] Downward API + [It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:248 + STEP: create the container 07/27/23 02:30:54.236 + W0727 02:30:54.274485 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "termination-message-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "termination-message-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "termination-message-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "termination-message-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: wait for the container to reach Succeeded 07/27/23 02:30:54.274 + STEP: get the container status 07/27/23 02:31:01.379 + STEP: the container should be terminated 07/27/23 02:31:01.39 + STEP: the termination message should be set 07/27/23 02:31:01.39 + Jul 27 02:31:01.390: INFO: Expected: &{OK} to match Container's Termination Message: OK -- + STEP: delete the container 07/27/23 02:31:01.39 + [AfterEach] [sig-node] Container Runtime test/e2e/framework/node/init/init.go:32 - Jun 12 21:59:30.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Downward API + Jul 27 02:31:01.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Downward API + [DeferCleanup (Each)] [sig-node] Container Runtime dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Downward API + [DeferCleanup (Each)] [sig-node] Container Runtime tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-1388" for this suite. 06/12/23 21:59:30.979 + STEP: Destroying namespace "container-runtime-4893" for this suite. 
07/27/23 02:31:01.447 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] - should perform rolling updates and roll backs of template modifications [Conformance] - test/e2e/apps/statefulset.go:306 -[BeforeEach] [sig-apps] StatefulSet +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 21:59:31 -Jun 12 21:59:31.000: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename statefulset 06/12/23 21:59:31.003 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:59:31.052 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:59:31.071 -[BeforeEach] [sig-apps] StatefulSet +STEP: Creating a kubernetes client 07/27/23 02:31:01.473 +Jul 27 02:31:01.474: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sched-pred 07/27/23 02:31:01.475 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:31:01.527 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:31:01.539 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 -[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 -STEP: Creating service test in namespace statefulset-504 06/12/23 21:59:31.11 -[It] should perform rolling updates and roll backs of template modifications [Conformance] - test/e2e/apps/statefulset.go:306 -STEP: Creating a new StatefulSet 06/12/23 21:59:31.132 -Jun 12 21:59:31.186: INFO: Found 0 stateful pods, waiting for 3 -Jun 12 21:59:41.221: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 21:59:41.221: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 21:59:41.221: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false -Jun 12 21:59:51.201: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 21:59:51.201: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 21:59:51.201: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 21:59:51.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-504 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' -Jun 12 21:59:51.739: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" -Jun 12 21:59:51.739: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" -Jun 12 21:59:51.739: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - -STEP: Updating StatefulSet template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to 
registry.k8s.io/e2e-test-images/httpd:2.4.39-4 06/12/23 22:00:01.812 -Jun 12 22:00:01.882: INFO: Updating stateful set ss2 -STEP: Creating a new revision 06/12/23 22:00:01.882 -STEP: Updating Pods in reverse ordinal order 06/12/23 22:00:11.957 -Jun 12 22:00:11.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-504 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' -Jun 12 22:00:12.602: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" -Jun 12 22:00:12.603: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" -Jun 12 22:00:12.603: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - -STEP: Rolling back to a previous revision 06/12/23 22:00:32.69 -Jun 12 22:00:32.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-504 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' -Jun 12 22:00:33.190: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" -Jun 12 22:00:33.190: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" -Jun 12 22:00:33.190: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - -Jun 12 22:00:43.299: INFO: Updating stateful set ss2 -STEP: Rolling back update in reverse ordinal order 06/12/23 22:00:53.355 -Jun 12 22:00:53.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-504 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' -Jun 12 22:00:54.617: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" -Jun 12 22:00:54.617: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" -Jun 12 22:00:54.617: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - -[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 -Jun 12 22:01:14.721: INFO: Deleting all statefulset in ns statefulset-504 -Jun 12 22:01:14.741: INFO: Scaling statefulset ss2 to 0 -Jun 12 22:01:24.811: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 22:01:24.824: INFO: Deleting statefulset ss2 -[AfterEach] [sig-apps] StatefulSet +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Jul 27 02:31:01.553: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jul 27 02:31:01.591: INFO: Waiting for terminating namespaces to be deleted... 
+Jul 27 02:31:01.624: INFO: +Logging pods the apiserver thinks is on node 10.245.128.17 before test +Jul 27 02:31:01.685: INFO: calico-node-6gb7d from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container calico-node ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: ibm-keepalived-watcher-krnnt from kube-system started at 2023-07-26 23:12:13 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: ibm-master-proxy-static-10.245.128.17 from kube-system started at 2023-07-26 23:12:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container pause ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: ibm-vpc-block-csi-controller-0 from kube-system started at 2023-07-26 23:25:41 +0000 UTC (7 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container csi-attacher ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container csi-provisioner ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container csi-resizer ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container csi-snapshotter ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container iks-vpc-block-driver ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: ibm-vpc-block-csi-node-pb2sj from kube-system started at 2023-07-26 23:12:13 +0000 UTC (4 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: vpn-7d8b749c64-87d9s from kube-system started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container vpn ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: tuned-wnh5v from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container tuned ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: csi-snapshot-controller-5b77984679-frszr from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container snapshot-controller ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: csi-snapshot-webhook-78b8c8d77c-2pk6s from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container webhook ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: console-7fd48bd95f-wksvb from openshift-console started at 2023-07-26 23:27:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container console ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: downloads-6874b45df6-w7xkq from openshift-console started at 2023-07-26 23:22:05 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container download-server ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: 
dns-default-5mw2g from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container dns ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: node-resolver-2kt92 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: image-registry-69fbbd6d88-6xgnp from openshift-image-registry started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container registry ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: node-ca-pmxp9 from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container node-ca ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: ingress-canary-wh5qj from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: router-default-865b575f54-qjwfv from openshift-ingress started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container router ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: openshift-kube-proxy-r7t77 from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: migrator-77d7ddf546-9g7xm from openshift-kube-storage-version-migrator started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container migrator ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: certified-operators-qlqcc from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: community-operators-dtgmg from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: redhat-marketplace-vnvdb from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: redhat-operators-9qw52 from openshift-marketplace started at 2023-07-27 01:30:34 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container registry-server ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-07-26 23:27:44 +0000 UTC (6 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container alertmanager ready: true, restart count 1 +Jul 27 02:31:01.685: INFO: Container alertmanager-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container prom-label-proxy 
ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: kube-state-metrics-575bd9d6b6-2wk6g from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-state-metrics ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: node-exporter-2tscc from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: openshift-state-metrics-99754b784-vdbrs from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container openshift-state-metrics ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: prometheus-adapter-657855c676-qlc95 from openshift-monitoring started at 2023-07-26 23:26:23 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container prometheus-adapter ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-07-26 23:27:58 +0000 UTC (6 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container prometheus ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container prometheus-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container thanos-sidecar ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: prometheus-operator-765bbdfd45-twq98 from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container prometheus-operator ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-hct4l from openshift-monitoring started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: telemeter-client-c964ff8c9-xszvz from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container reload ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container telemeter-client ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: thanos-querier-7f9c896d7f-xqld6 from openshift-monitoring started at 2023-07-26 23:26:32 +0000 UTC (6 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 +Jul 27 
02:31:01.685: INFO: Container oauth-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container thanos-query ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: multus-5x56j from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: multus-additional-cni-plugins-p7gf5 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: multus-admission-controller-8ccd764f4-j68g7 from openshift-multus started at 2023-07-26 23:25:38 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container multus-admission-controller ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: network-metrics-daemon-djvdx from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: network-check-target-2j7hq from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: collect-profiles-28173720-hn9xm from openshift-operator-lifecycle-manager started at 2023-07-27 02:00:00 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 02:31:01.685: INFO: collect-profiles-28173735-ln8gp from openshift-operator-lifecycle-manager started at 2023-07-27 02:15:00 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 02:31:01.685: INFO: collect-profiles-28173750-skvm6 from openshift-operator-lifecycle-manager started at 2023-07-27 02:30:00 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container collect-profiles ready: false, restart count 0 +Jul 27 02:31:01.685: INFO: packageserver-b9964c68-p2fd4 from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container packageserver ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: service-ca-665db46585-9cprv from openshift-service-ca started at 2023-07-26 23:21:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container service-ca-controller ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: sonobuoy-e2e-job-17fd703895604ed7 from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container e2e ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:31:01.685: INFO: Container systemd-logs ready: true, restart count 
0 +Jul 27 02:31:01.685: INFO: tigera-operator-5b48cf996b-5zb5v from tigera-operator started at 2023-07-26 23:12:21 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.685: INFO: Container tigera-operator ready: true, restart count 6 +Jul 27 02:31:01.685: INFO: +Logging pods the apiserver thinks is on node 10.245.128.18 before test +Jul 27 02:31:01.746: INFO: calico-kube-controllers-5575667dcd-ps6n9 from calico-system started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container calico-kube-controllers ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: calico-node-2vsm9 from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container calico-node ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: calico-typha-5549cc5cdc-nsmq8 from calico-system started at 2023-07-26 23:19:56 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container calico-typha ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: managed-storage-validation-webhooks-6dfcff48fb-4xxsq from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: managed-storage-validation-webhooks-6dfcff48fb-k6pcc from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: managed-storage-validation-webhooks-6dfcff48fb-swht2 from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container managed-storage-validation-webhooks ready: true, restart count 1 +Jul 27 02:31:01.746: INFO: ibm-keepalived-watcher-wjqkn from kube-system started at 2023-07-26 23:12:23 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: ibm-master-proxy-static-10.245.128.18 from kube-system started at 2023-07-26 23:12:20 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: Container pause ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: ibm-storage-metrics-agent-9fd89b544-292dm from kube-system started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container ibm-storage-metrics-agent ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: ibm-vpc-block-csi-node-lp4cr from kube-system started at 2023-07-26 23:12:23 +0000 UTC (4 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: cluster-node-tuning-operator-5b85c5d47b-9cbp5 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container 
cluster-node-tuning-operator ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: tuned-zxrv4 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container tuned ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: cluster-samples-operator-588cc6f8cc-fh5hj from openshift-cluster-samples-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container cluster-samples-operator ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: cluster-storage-operator-586d5b4d95-tq97j from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container cluster-storage-operator ready: true, restart count 1 +Jul 27 02:31:01.746: INFO: csi-snapshot-controller-5b77984679-wxrv8 from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container snapshot-controller ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: csi-snapshot-controller-operator-7c998b6874-9flch from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: csi-snapshot-webhook-78b8c8d77c-jqbww from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container webhook ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: console-operator-8486d48d6-4xzr7 from openshift-console-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container console-operator ready: true, restart count 1 +Jul 27 02:31:01.746: INFO: Container conversion-webhook-server ready: true, restart count 2 +Jul 27 02:31:01.746: INFO: console-7fd48bd95f-pzr2s from openshift-console started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container console ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: downloads-6874b45df6-nm9q6 from openshift-console started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container download-server ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: dns-operator-7c549b76fd-t56tt from openshift-dns-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container dns-operator ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: dns-default-r982z from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container dns ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: node-resolver-txjwq from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: cluster-image-registry-operator-96d4d84cf-65k8l from openshift-image-registry started at 2023-07-26 23:21:42 +0000 UTC (1 container 
statuses recorded) +Jul 27 02:31:01.746: INFO: Container cluster-image-registry-operator ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: node-ca-ntzct from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container node-ca ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: ingress-canary-jphk8 from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: ingress-operator-64bc7f7964-9sbtr from openshift-ingress-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container ingress-operator ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: router-default-865b575f54-b946s from openshift-ingress started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container router ready: true, restart count 0 +Jul 27 02:31:01.746: INFO: insights-operator-5db47f7654-r8xdq from openshift-insights started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.746: INFO: Container insights-operator ready: true, restart count 1 +Jul 27 02:31:01.747: INFO: openshift-kube-proxy-6hxmn from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: kube-storage-version-migrator-operator-f4b8bf677-c24bz from openshift-kube-storage-version-migrator-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 +Jul 27 02:31:01.747: INFO: marketplace-operator-5ddbd9fdbc-lrhrq from openshift-marketplace started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container marketplace-operator ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-07-27 01:50:10 +0000 UTC (6 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container alertmanager ready: true, restart count 1 +Jul 27 02:31:01.747: INFO: Container alertmanager-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: cluster-monitoring-operator-7448698f65-65wn9 from openshift-monitoring started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container cluster-monitoring-operator ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: node-exporter-d46sh from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: 
prometheus-adapter-657855c676-hwbr7 from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container prometheus-adapter ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: prometheus-k8s-0 from openshift-monitoring started at 2023-07-27 01:50:11 +0000 UTC (6 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container config-reloader ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container prometheus ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container prometheus-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container thanos-sidecar ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-jvbxn from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: thanos-querier-7f9c896d7f-fk8mk from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (6 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container oauth-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container prom-label-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container thanos-query ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: multus-additional-cni-plugins-njhzm from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: multus-admission-controller-8ccd764f4-7kmkg from openshift-multus started at 2023-07-26 23:25:53 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container multus-admission-controller ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: multus-zhftn from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: network-metrics-daemon-cglg2 from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: network-check-source-6777f6456-pt5nn from openshift-network-diagnostics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container check-endpoints ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: network-check-target-85dgs from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: network-operator-6dddb4f685-gc764 from 
openshift-network-operator started at 2023-07-26 23:17:11 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container network-operator ready: true, restart count 1 +Jul 27 02:31:01.747: INFO: catalog-operator-69ccd5899d-lrpkv from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container catalog-operator ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: olm-operator-8448b5677d-bf2sl from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container olm-operator ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: package-server-manager-579d664b8c-klrwt from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container package-server-manager ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: packageserver-b9964c68-6gdlp from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container packageserver ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: metrics-6ff747d58d-llt7w from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container metrics ready: true, restart count 2 +Jul 27 02:31:01.747: INFO: push-gateway-6448c6788-hrxtl from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container push-gateway ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: service-ca-operator-5db987957b-pftl9 from openshift-service-ca-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container service-ca-operator ready: true, restart count 1 +Jul 27 02:31:01.747: INFO: sonobuoy from sonobuoy started at 2023-07-27 01:26:57 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-7p2cx from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.747: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: Container systemd-logs ready: true, restart count 0 +Jul 27 02:31:01.747: INFO: +Logging pods the apiserver thinks is on node 10.245.128.19 before test +Jul 27 02:31:01.800: INFO: calico-node-tnbmn from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container calico-node ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: calico-typha-5549cc5cdc-25l9k from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container calico-typha ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: ibm-keepalived-watcher-228gb from kube-system started at 2023-07-26 23:12:15 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container keepalived-watcher ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: ibm-master-proxy-static-10.245.128.19 from kube-system started at 2023-07-26 23:12:13 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container ibm-master-proxy-static ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: Container 
pause ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: ibm-vpc-block-csi-node-m8dqf from kube-system started at 2023-07-26 23:12:15 +0000 UTC (4 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container csi-driver-registrar ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: Container liveness-probe ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: Container storage-secret-sidecar ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: tuned-8xqng from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container tuned ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: dns-default-9k25b from openshift-dns started at 2023-07-27 01:50:33 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container dns ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: node-resolver-s2q44 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container dns-node-resolver ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: node-ca-kz4vp from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container node-ca ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: ingress-canary-nf2dw from openshift-ingress-canary started at 2023-07-27 01:50:33 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container serve-healthcheck-canary ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: openshift-kube-proxy-4qg5c from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container kube-proxy ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: node-exporter-vz8m9 from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: Container node-exporter ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: multus-287s2 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container kube-multus ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: multus-additional-cni-plugins-xns7c from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: network-metrics-daemon-xpw2q from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: Container network-metrics-daemon ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: network-check-target-hf22d from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container network-check-target-container ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-p74pn from sonobuoy started at 2023-07-27 
01:27:09 +0000 UTC (2 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: Container systemd-logs ready: true, restart count 0 +Jul 27 02:31:01.800: INFO: pod-service-account-defaultsa from svcaccounts-2726 started at 2023-07-27 02:29:44 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container token-test ready: false, restart count 0 +Jul 27 02:31:01.800: INFO: pod-service-account-defaultsa-mountspec from svcaccounts-2726 started at 2023-07-27 02:29:44 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container token-test ready: false, restart count 0 +Jul 27 02:31:01.800: INFO: pod-service-account-mountsa from svcaccounts-2726 started at 2023-07-27 02:29:44 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container token-test ready: false, restart count 0 +Jul 27 02:31:01.800: INFO: pod-service-account-mountsa-mountspec from svcaccounts-2726 started at 2023-07-27 02:29:44 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container token-test ready: false, restart count 0 +Jul 27 02:31:01.800: INFO: pod-service-account-nomountsa-mountspec from svcaccounts-2726 started at 2023-07-27 02:29:44 +0000 UTC (1 container statuses recorded) +Jul 27 02:31:01.800: INFO: Container token-test ready: false, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 +STEP: Trying to launch a pod without a label to get a node which can launch it. 07/27/23 02:31:01.8 +Jul 27 02:31:01.832: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-2248" to be "running" +Jul 27 02:31:01.845: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 13.065929ms +Jul 27 02:31:03.856: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.024110442s +Jul 27 02:31:03.856: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 07/27/23 02:31:03.866 +STEP: Trying to apply a random label on the found node. 07/27/23 02:31:03.913 +STEP: verifying the node has the label kubernetes.io/e2e-0a328552-a0cb-492d-9f48-b5bdb4dfc50b 95 07/27/23 02:31:03.946 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 07/27/23 02:31:03.96 +Jul 27 02:31:03.977: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-2248" to be "not pending" +Jul 27 02:31:03.986: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.224281ms +Jul 27 02:31:05.998: INFO: Pod "pod4": Phase="Running", Reason="", readiness=false. Elapsed: 2.021368741s +Jul 27 02:31:05.998: INFO: Pod "pod4" satisfied condition "not pending" +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.245.128.19 on the node which pod4 resides and expect not scheduled 07/27/23 02:31:05.998 +Jul 27 02:31:06.017: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-2248" to be "not pending" +Jul 27 02:31:06.027: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.538525ms +Jul 27 02:31:08.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020439346s +Jul 27 02:31:10.041: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.023978377s +Jul 27 02:31:12.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021025129s +Jul 27 02:31:14.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02161787s +Jul 27 02:31:16.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020826413s +Jul 27 02:31:18.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.020662586s +Jul 27 02:31:20.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.021336578s +Jul 27 02:31:22.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.020242856s +Jul 27 02:31:24.043: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.02626006s +Jul 27 02:31:26.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.020784856s +Jul 27 02:31:28.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.020441245s +Jul 27 02:31:30.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.020831921s +Jul 27 02:31:32.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.019780658s +Jul 27 02:31:34.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.019692821s +Jul 27 02:31:36.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.020902114s +Jul 27 02:31:38.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.020422699s +Jul 27 02:31:40.042: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.025077066s +Jul 27 02:31:42.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.020116936s +Jul 27 02:31:44.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.020429648s +Jul 27 02:31:46.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.021344313s +Jul 27 02:31:48.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.021674118s +Jul 27 02:31:50.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.020432268s +Jul 27 02:31:52.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.019738571s +Jul 27 02:31:54.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.021588085s +Jul 27 02:31:56.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.020692997s +Jul 27 02:31:58.040: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.02257402s +Jul 27 02:32:00.041: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.023528042s +Jul 27 02:32:02.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.02190497s +Jul 27 02:32:04.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.020789338s +Jul 27 02:32:06.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.020908044s +Jul 27 02:32:08.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.02123425s +Jul 27 02:32:10.042: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.024775998s +Jul 27 02:32:12.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.019375304s +Jul 27 02:32:14.042: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m8.025123171s +Jul 27 02:32:16.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.020808239s +Jul 27 02:32:18.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.020236955s +Jul 27 02:32:20.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.021210199s +Jul 27 02:32:22.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.020297082s +Jul 27 02:32:24.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.019354486s +Jul 27 02:32:26.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.020791213s +Jul 27 02:32:28.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.022074676s +Jul 27 02:32:30.048: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.031264313s +Jul 27 02:32:32.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.020053833s +Jul 27 02:32:34.093: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.07606171s +Jul 27 02:32:36.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.020386913s +Jul 27 02:32:38.048: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.030937555s +Jul 27 02:32:40.041: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.023733958s +Jul 27 02:32:42.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.020019895s +Jul 27 02:32:44.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.020254659s +Jul 27 02:32:46.052: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.035121816s +Jul 27 02:32:48.053: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.035604592s +Jul 27 02:32:50.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.020950563s +Jul 27 02:32:52.045: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.027930566s +Jul 27 02:32:54.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.02093263s +Jul 27 02:32:56.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.02116322s +Jul 27 02:32:58.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.020518741s +Jul 27 02:33:00.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.02088656s +Jul 27 02:33:02.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.020518177s +Jul 27 02:33:04.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.020410593s +Jul 27 02:33:06.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.019895535s +Jul 27 02:33:08.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.021982975s +Jul 27 02:33:10.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.021135624s +Jul 27 02:33:12.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.019745875s +Jul 27 02:33:14.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.020299733s +Jul 27 02:33:16.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.01939512s +Jul 27 02:33:18.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m12.020515379s +Jul 27 02:33:20.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.020153911s +Jul 27 02:33:22.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.020384105s +Jul 27 02:33:24.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.020264624s +Jul 27 02:33:26.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.020673416s +Jul 27 02:33:28.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.021087077s +Jul 27 02:33:30.040: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.022711606s +Jul 27 02:33:32.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.020749644s +Jul 27 02:33:34.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.022300699s +Jul 27 02:33:36.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.020655058s +Jul 27 02:33:38.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.020611722s +Jul 27 02:33:40.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.021088875s +Jul 27 02:33:42.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.019771448s +Jul 27 02:33:44.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.020727361s +Jul 27 02:33:46.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.020288625s +Jul 27 02:33:48.059: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.041501853s +Jul 27 02:33:50.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.020263537s +Jul 27 02:33:52.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.020696433s +Jul 27 02:33:54.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.020888258s +Jul 27 02:33:56.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.020698649s +Jul 27 02:33:58.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.019527339s +Jul 27 02:34:00.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.019729008s +Jul 27 02:34:02.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.020615707s +Jul 27 02:34:04.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.019556953s +Jul 27 02:34:06.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.020106466s +Jul 27 02:34:08.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.019967416s +Jul 27 02:34:10.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.021111919s +Jul 27 02:34:12.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.019865253s +Jul 27 02:34:14.041: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.023360573s +Jul 27 02:34:16.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.019596512s +Jul 27 02:34:18.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.020914508s +Jul 27 02:34:20.075: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.058020058s +Jul 27 02:34:22.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m16.019955317s +Jul 27 02:34:24.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.020960936s +Jul 27 02:34:26.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.020901364s +Jul 27 02:34:28.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.020433635s +Jul 27 02:34:30.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.020522376s +Jul 27 02:34:32.040: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.022783379s +Jul 27 02:34:34.054: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.036807304s +Jul 27 02:34:36.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.019949309s +Jul 27 02:34:38.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.02124244s +Jul 27 02:34:40.047: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.029797497s +Jul 27 02:34:42.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.019961622s +Jul 27 02:34:44.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.019946774s +Jul 27 02:34:46.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.020841333s +Jul 27 02:34:48.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.020825871s +Jul 27 02:34:50.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.021798268s +Jul 27 02:34:52.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.01997352s +Jul 27 02:34:54.052: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.034801562s +Jul 27 02:34:56.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.021100737s +Jul 27 02:34:58.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.02184123s +Jul 27 02:35:00.041: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.02391146s +Jul 27 02:35:02.058: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.041247774s +Jul 27 02:35:04.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.019570723s +Jul 27 02:35:06.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.020069387s +Jul 27 02:35:08.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.020447546s +Jul 27 02:35:10.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.021095603s +Jul 27 02:35:12.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.020671017s +Jul 27 02:35:14.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.020152088s +Jul 27 02:35:16.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.020085673s +Jul 27 02:35:18.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.022141216s +Jul 27 02:35:20.040: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.022401064s +Jul 27 02:35:22.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.020095752s +Jul 27 02:35:24.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.019717097s +Jul 27 02:35:26.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m20.020782571s +Jul 27 02:35:28.048: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.030790796s +Jul 27 02:35:30.042: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.024536409s +Jul 27 02:35:32.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.019857847s +Jul 27 02:35:34.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.020131937s +Jul 27 02:35:36.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.020392947s +Jul 27 02:35:38.036: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.019062604s +Jul 27 02:35:40.044: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.026350548s +Jul 27 02:35:42.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.021165738s +Jul 27 02:35:44.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.020930792s +Jul 27 02:35:46.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.021104923s +Jul 27 02:35:48.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.021228186s +Jul 27 02:35:50.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.020025497s +Jul 27 02:35:52.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.019766955s +Jul 27 02:35:54.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.021489166s +Jul 27 02:35:56.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.020087443s +Jul 27 02:35:58.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.019800026s +Jul 27 02:36:00.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.020216341s +Jul 27 02:36:02.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.020941354s +Jul 27 02:36:04.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.020393998s +Jul 27 02:36:06.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.020697391s +Jul 27 02:36:06.047: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.03022765s +STEP: removing the label kubernetes.io/e2e-0a328552-a0cb-492d-9f48-b5bdb4dfc50b off the node 10.245.128.19 07/27/23 02:36:06.047 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-0a328552-a0cb-492d-9f48-b5bdb4dfc50b 07/27/23 02:36:06.09 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 22:01:24.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] StatefulSet +Jul 27 02:36:06.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "statefulset-504" for this suite. 06/12/23 22:01:24.893 +STEP: Destroying namespace "sched-pred-2248" for this suite. 
07/27/23 02:36:06.114 ------------------------------ -• [SLOW TEST] [113.909 seconds] -[sig-apps] StatefulSet -test/e2e/apps/framework.go:23 - Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:103 - should perform rolling updates and roll backs of template modifications [Conformance] - test/e2e/apps/statefulset.go:306 +• [SLOW TEST] [304.665 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] StatefulSet + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 21:59:31 - Jun 12 21:59:31.000: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename statefulset 06/12/23 21:59:31.003 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 21:59:31.052 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 21:59:31.071 - [BeforeEach] [sig-apps] StatefulSet + STEP: Creating a kubernetes client 07/27/23 02:31:01.473 + Jul 27 02:31:01.474: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sched-pred 07/27/23 02:31:01.475 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:31:01.527 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:31:01.539 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 - [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 - STEP: Creating service test in namespace statefulset-504 06/12/23 21:59:31.11 - [It] should perform rolling updates and roll backs of template modifications [Conformance] - test/e2e/apps/statefulset.go:306 - STEP: Creating a new StatefulSet 06/12/23 21:59:31.132 - Jun 12 21:59:31.186: INFO: Found 0 stateful pods, waiting for 3 - Jun 12 21:59:41.221: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:59:41.221: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:59:41.221: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false - Jun 12 21:59:51.201: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:59:51.201: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:59:51.201: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 21:59:51.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-504 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' - Jun 12 21:59:51.739: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" - Jun 12 21:59:51.739: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" - Jun 12 21:59:51.739: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - - STEP: Updating StatefulSet template: 
update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 06/12/23 22:00:01.812 - Jun 12 22:00:01.882: INFO: Updating stateful set ss2 - STEP: Creating a new revision 06/12/23 22:00:01.882 - STEP: Updating Pods in reverse ordinal order 06/12/23 22:00:11.957 - Jun 12 22:00:11.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-504 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' - Jun 12 22:00:12.602: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" - Jun 12 22:00:12.603: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" - Jun 12 22:00:12.603: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - - STEP: Rolling back to a previous revision 06/12/23 22:00:32.69 - Jun 12 22:00:32.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-504 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' - Jun 12 22:00:33.190: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" - Jun 12 22:00:33.190: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" - Jun 12 22:00:33.190: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - - Jun 12 22:00:43.299: INFO: Updating stateful set ss2 - STEP: Rolling back update in reverse ordinal order 06/12/23 22:00:53.355 - Jun 12 22:00:53.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-504 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' - Jun 12 22:00:54.617: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" - Jun 12 22:00:54.617: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" - Jun 12 22:00:54.617: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - - [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 - Jun 12 22:01:14.721: INFO: Deleting all statefulset in ns statefulset-504 - Jun 12 22:01:14.741: INFO: Scaling statefulset ss2 to 0 - Jun 12 22:01:24.811: INFO: Waiting for statefulset status.replicas updated to 0 - Jun 12 22:01:24.824: INFO: Deleting statefulset ss2 - [AfterEach] [sig-apps] StatefulSet + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Jul 27 02:31:01.553: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Jul 27 02:31:01.591: INFO: Waiting for terminating namespaces to be deleted... 
+ Jul 27 02:31:01.624: INFO: + Logging pods the apiserver thinks is on node 10.245.128.17 before test + Jul 27 02:31:01.685: INFO: calico-node-6gb7d from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container calico-node ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: ibm-keepalived-watcher-krnnt from kube-system started at 2023-07-26 23:12:13 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container keepalived-watcher ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: ibm-master-proxy-static-10.245.128.17 from kube-system started at 2023-07-26 23:12:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container pause ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: ibm-vpc-block-csi-controller-0 from kube-system started at 2023-07-26 23:25:41 +0000 UTC (7 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container csi-attacher ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container csi-provisioner ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container csi-resizer ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container csi-snapshotter ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container iks-vpc-block-driver ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: ibm-vpc-block-csi-node-pb2sj from kube-system started at 2023-07-26 23:12:13 +0000 UTC (4 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: vpn-7d8b749c64-87d9s from kube-system started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container vpn ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: tuned-wnh5v from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container tuned ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: csi-snapshot-controller-5b77984679-frszr from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container snapshot-controller ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: csi-snapshot-webhook-78b8c8d77c-2pk6s from openshift-cluster-storage-operator started at 2023-07-26 23:21:55 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container webhook ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: console-7fd48bd95f-wksvb from openshift-console started at 2023-07-26 23:27:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container console ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: downloads-6874b45df6-w7xkq from openshift-console started at 2023-07-26 23:22:05 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container download-server ready: true, restart 
count 0 + Jul 27 02:31:01.685: INFO: dns-default-5mw2g from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container dns ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: node-resolver-2kt92 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: image-registry-69fbbd6d88-6xgnp from openshift-image-registry started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container registry ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: node-ca-pmxp9 from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container node-ca ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: ingress-canary-wh5qj from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: router-default-865b575f54-qjwfv from openshift-ingress started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container router ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: openshift-kube-proxy-r7t77 from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: migrator-77d7ddf546-9g7xm from openshift-kube-storage-version-migrator started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container migrator ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: certified-operators-qlqcc from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: community-operators-dtgmg from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: redhat-marketplace-vnvdb from openshift-marketplace started at 2023-07-26 23:23:19 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: redhat-operators-9qw52 from openshift-marketplace started at 2023-07-27 01:30:34 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container registry-server ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: alertmanager-main-1 from openshift-monitoring started at 2023-07-26 23:27:44 +0000 UTC (6 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container alertmanager ready: true, restart count 1 + Jul 27 02:31:01.685: INFO: Container alertmanager-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-metric ready: true, 
restart count 0 + Jul 27 02:31:01.685: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: kube-state-metrics-575bd9d6b6-2wk6g from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container kube-state-metrics ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: node-exporter-2tscc from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: openshift-state-metrics-99754b784-vdbrs from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-main ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-self ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container openshift-state-metrics ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: prometheus-adapter-657855c676-qlc95 from openshift-monitoring started at 2023-07-26 23:26:23 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container prometheus-adapter ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: prometheus-k8s-1 from openshift-monitoring started at 2023-07-26 23:27:58 +0000 UTC (6 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container prometheus ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container prometheus-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container thanos-sidecar ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: prometheus-operator-765bbdfd45-twq98 from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container prometheus-operator ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-hct4l from openshift-monitoring started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: telemeter-client-c964ff8c9-xszvz from openshift-monitoring started at 2023-07-27 01:50:07 +0000 UTC (3 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container reload ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container telemeter-client ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: thanos-querier-7f9c896d7f-xqld6 from openshift-monitoring started at 2023-07-26 23:26:32 +0000 UTC (6 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 + 
Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container oauth-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container thanos-query ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: multus-5x56j from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: multus-additional-cni-plugins-p7gf5 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: multus-admission-controller-8ccd764f4-j68g7 from openshift-multus started at 2023-07-26 23:25:38 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container multus-admission-controller ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: network-metrics-daemon-djvdx from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: network-check-target-2j7hq from openshift-network-diagnostics started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: collect-profiles-28173720-hn9xm from openshift-operator-lifecycle-manager started at 2023-07-27 02:00:00 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 02:31:01.685: INFO: collect-profiles-28173735-ln8gp from openshift-operator-lifecycle-manager started at 2023-07-27 02:15:00 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 02:31:01.685: INFO: collect-profiles-28173750-skvm6 from openshift-operator-lifecycle-manager started at 2023-07-27 02:30:00 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container collect-profiles ready: false, restart count 0 + Jul 27 02:31:01.685: INFO: packageserver-b9964c68-p2fd4 from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container packageserver ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: service-ca-665db46585-9cprv from openshift-service-ca started at 2023-07-26 23:21:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container service-ca-controller ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: sonobuoy-e2e-job-17fd703895604ed7 from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container e2e ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-vft4d from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.685: INFO: 
Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: Container systemd-logs ready: true, restart count 0 + Jul 27 02:31:01.685: INFO: tigera-operator-5b48cf996b-5zb5v from tigera-operator started at 2023-07-26 23:12:21 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.685: INFO: Container tigera-operator ready: true, restart count 6 + Jul 27 02:31:01.685: INFO: + Logging pods the apiserver thinks is on node 10.245.128.18 before test + Jul 27 02:31:01.746: INFO: calico-kube-controllers-5575667dcd-ps6n9 from calico-system started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container calico-kube-controllers ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: calico-node-2vsm9 from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container calico-node ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: calico-typha-5549cc5cdc-nsmq8 from calico-system started at 2023-07-26 23:19:56 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container calico-typha ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: managed-storage-validation-webhooks-6dfcff48fb-4xxsq from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: managed-storage-validation-webhooks-6dfcff48fb-k6pcc from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container managed-storage-validation-webhooks ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: managed-storage-validation-webhooks-6dfcff48fb-swht2 from ibm-odf-validation-webhook started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container managed-storage-validation-webhooks ready: true, restart count 1 + Jul 27 02:31:01.746: INFO: ibm-keepalived-watcher-wjqkn from kube-system started at 2023-07-26 23:12:23 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container keepalived-watcher ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: ibm-master-proxy-static-10.245.128.18 from kube-system started at 2023-07-26 23:12:20 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: Container pause ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: ibm-storage-metrics-agent-9fd89b544-292dm from kube-system started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container ibm-storage-metrics-agent ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: ibm-vpc-block-csi-node-lp4cr from kube-system started at 2023-07-26 23:12:23 +0000 UTC (4 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: cluster-node-tuning-operator-5b85c5d47b-9cbp5 from 
openshift-cluster-node-tuning-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container cluster-node-tuning-operator ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: tuned-zxrv4 from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container tuned ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: cluster-samples-operator-588cc6f8cc-fh5hj from openshift-cluster-samples-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container cluster-samples-operator ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: Container cluster-samples-operator-watch ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: cluster-storage-operator-586d5b4d95-tq97j from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container cluster-storage-operator ready: true, restart count 1 + Jul 27 02:31:01.746: INFO: csi-snapshot-controller-5b77984679-wxrv8 from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container snapshot-controller ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: csi-snapshot-controller-operator-7c998b6874-9flch from openshift-cluster-storage-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container csi-snapshot-controller-operator ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: csi-snapshot-webhook-78b8c8d77c-jqbww from openshift-cluster-storage-operator started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container webhook ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: console-operator-8486d48d6-4xzr7 from openshift-console-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container console-operator ready: true, restart count 1 + Jul 27 02:31:01.746: INFO: Container conversion-webhook-server ready: true, restart count 2 + Jul 27 02:31:01.746: INFO: console-7fd48bd95f-pzr2s from openshift-console started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container console ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: downloads-6874b45df6-nm9q6 from openshift-console started at 2023-07-27 01:50:07 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container download-server ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: dns-operator-7c549b76fd-t56tt from openshift-dns-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container dns-operator ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: dns-default-r982z from openshift-dns started at 2023-07-26 23:25:39 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container dns ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: node-resolver-txjwq from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container dns-node-resolver ready: true, 
restart count 0 + Jul 27 02:31:01.746: INFO: cluster-image-registry-operator-96d4d84cf-65k8l from openshift-image-registry started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container cluster-image-registry-operator ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: node-ca-ntzct from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container node-ca ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: ingress-canary-jphk8 from openshift-ingress-canary started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: ingress-operator-64bc7f7964-9sbtr from openshift-ingress-operator started at 2023-07-26 23:21:42 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container ingress-operator ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: router-default-865b575f54-b946s from openshift-ingress started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container router ready: true, restart count 0 + Jul 27 02:31:01.746: INFO: insights-operator-5db47f7654-r8xdq from openshift-insights started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.746: INFO: Container insights-operator ready: true, restart count 1 + Jul 27 02:31:01.747: INFO: openshift-kube-proxy-6hxmn from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: kube-storage-version-migrator-operator-f4b8bf677-c24bz from openshift-kube-storage-version-migrator-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container kube-storage-version-migrator-operator ready: true, restart count 1 + Jul 27 02:31:01.747: INFO: marketplace-operator-5ddbd9fdbc-lrhrq from openshift-marketplace started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container marketplace-operator ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: alertmanager-main-0 from openshift-monitoring started at 2023-07-27 01:50:10 +0000 UTC (6 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container alertmanager ready: true, restart count 1 + Jul 27 02:31:01.747: INFO: Container alertmanager-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy-metric ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: cluster-monitoring-operator-7448698f65-65wn9 from openshift-monitoring started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container cluster-monitoring-operator ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: node-exporter-d46sh from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 
27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: prometheus-adapter-657855c676-hwbr7 from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container prometheus-adapter ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: prometheus-k8s-0 from openshift-monitoring started at 2023-07-27 01:50:11 +0000 UTC (6 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container config-reloader ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy-thanos ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container prometheus ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container prometheus-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container thanos-sidecar ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: prometheus-operator-admission-webhook-84c7bbc8cc-jvbxn from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container prometheus-operator-admission-webhook ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: thanos-querier-7f9c896d7f-fk8mk from openshift-monitoring started at 2023-07-27 01:45:59 +0000 UTC (6 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy-metrics ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy-rules ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container oauth-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container prom-label-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container thanos-query ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: multus-additional-cni-plugins-njhzm from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: multus-admission-controller-8ccd764f4-7kmkg from openshift-multus started at 2023-07-26 23:25:53 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container multus-admission-controller ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: multus-zhftn from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: network-metrics-daemon-cglg2 from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: network-check-source-6777f6456-pt5nn from openshift-network-diagnostics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container check-endpoints ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: network-check-target-85dgs from openshift-network-diagnostics started at 2023-07-26 
23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: network-operator-6dddb4f685-gc764 from openshift-network-operator started at 2023-07-26 23:17:11 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container network-operator ready: true, restart count 1 + Jul 27 02:31:01.747: INFO: catalog-operator-69ccd5899d-lrpkv from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container catalog-operator ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: olm-operator-8448b5677d-bf2sl from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container olm-operator ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: package-server-manager-579d664b8c-klrwt from openshift-operator-lifecycle-manager started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container package-server-manager ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: packageserver-b9964c68-6gdlp from openshift-operator-lifecycle-manager started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container packageserver ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: metrics-6ff747d58d-llt7w from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container metrics ready: true, restart count 2 + Jul 27 02:31:01.747: INFO: push-gateway-6448c6788-hrxtl from openshift-roks-metrics started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container push-gateway ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: service-ca-operator-5db987957b-pftl9 from openshift-service-ca-operator started at 2023-07-26 23:21:42 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container service-ca-operator ready: true, restart count 1 + Jul 27 02:31:01.747: INFO: sonobuoy from sonobuoy started at 2023-07-27 01:26:57 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container kube-sonobuoy ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-7p2cx from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.747: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: Container systemd-logs ready: true, restart count 0 + Jul 27 02:31:01.747: INFO: + Logging pods the apiserver thinks is on node 10.245.128.19 before test + Jul 27 02:31:01.800: INFO: calico-node-tnbmn from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container calico-node ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: calico-typha-5549cc5cdc-25l9k from calico-system started at 2023-07-26 23:19:48 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container calico-typha ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: ibm-keepalived-watcher-228gb from kube-system started at 2023-07-26 23:12:15 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container keepalived-watcher ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: 
ibm-master-proxy-static-10.245.128.19 from kube-system started at 2023-07-26 23:12:13 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container ibm-master-proxy-static ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: Container pause ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: ibm-vpc-block-csi-node-m8dqf from kube-system started at 2023-07-26 23:12:15 +0000 UTC (4 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container csi-driver-registrar ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: Container iks-vpc-block-node-driver ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: Container liveness-probe ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: Container storage-secret-sidecar ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: tuned-8xqng from openshift-cluster-node-tuning-operator started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container tuned ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: dns-default-9k25b from openshift-dns started at 2023-07-27 01:50:33 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container dns ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: node-resolver-s2q44 from openshift-dns started at 2023-07-26 23:25:38 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container dns-node-resolver ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: node-ca-kz4vp from openshift-image-registry started at 2023-07-26 23:25:39 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container node-ca ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: ingress-canary-nf2dw from openshift-ingress-canary started at 2023-07-27 01:50:33 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container serve-healthcheck-canary ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: openshift-kube-proxy-4qg5c from openshift-kube-proxy started at 2023-07-26 23:17:29 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container kube-proxy ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: node-exporter-vz8m9 from openshift-monitoring started at 2023-07-26 23:26:18 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: Container node-exporter ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: multus-287s2 from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container kube-multus ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: multus-additional-cni-plugins-xns7c from openshift-multus started at 2023-07-26 23:17:24 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container kube-multus-additional-cni-plugins ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: network-metrics-daemon-xpw2q from openshift-multus started at 2023-07-26 23:17:25 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container kube-rbac-proxy ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: Container network-metrics-daemon ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: network-check-target-hf22d from openshift-network-diagnostics 
started at 2023-07-26 23:17:32 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container network-check-target-container ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: sonobuoy-systemd-logs-daemon-set-2f6dede7a68c4213-p74pn from sonobuoy started at 2023-07-27 01:27:09 +0000 UTC (2 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: Container systemd-logs ready: true, restart count 0 + Jul 27 02:31:01.800: INFO: pod-service-account-defaultsa from svcaccounts-2726 started at 2023-07-27 02:29:44 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container token-test ready: false, restart count 0 + Jul 27 02:31:01.800: INFO: pod-service-account-defaultsa-mountspec from svcaccounts-2726 started at 2023-07-27 02:29:44 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container token-test ready: false, restart count 0 + Jul 27 02:31:01.800: INFO: pod-service-account-mountsa from svcaccounts-2726 started at 2023-07-27 02:29:44 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container token-test ready: false, restart count 0 + Jul 27 02:31:01.800: INFO: pod-service-account-mountsa-mountspec from svcaccounts-2726 started at 2023-07-27 02:29:44 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container token-test ready: false, restart count 0 + Jul 27 02:31:01.800: INFO: pod-service-account-nomountsa-mountspec from svcaccounts-2726 started at 2023-07-27 02:29:44 +0000 UTC (1 container statuses recorded) + Jul 27 02:31:01.800: INFO: Container token-test ready: false, restart count 0 + [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 + STEP: Trying to launch a pod without a label to get a node which can launch it. 07/27/23 02:31:01.8 + Jul 27 02:31:01.832: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-2248" to be "running" + Jul 27 02:31:01.845: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 13.065929ms + Jul 27 02:31:03.856: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.024110442s + Jul 27 02:31:03.856: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 07/27/23 02:31:03.866 + STEP: Trying to apply a random label on the found node. 07/27/23 02:31:03.913 + STEP: verifying the node has the label kubernetes.io/e2e-0a328552-a0cb-492d-9f48-b5bdb4dfc50b 95 07/27/23 02:31:03.946 + STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 07/27/23 02:31:03.96 + Jul 27 02:31:03.977: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-2248" to be "not pending" + Jul 27 02:31:03.986: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.224281ms + Jul 27 02:31:05.998: INFO: Pod "pod4": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.021368741s + Jul 27 02:31:05.998: INFO: Pod "pod4" satisfied condition "not pending" + STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.245.128.19 on the node which pod4 resides and expect not scheduled 07/27/23 02:31:05.998 + Jul 27 02:31:06.017: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-2248" to be "not pending" + Jul 27 02:31:06.027: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.538525ms + Jul 27 02:31:08.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020439346s + Jul 27 02:31:10.041: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023978377s + Jul 27 02:31:12.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021025129s + Jul 27 02:31:14.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02161787s + Jul 27 02:31:16.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020826413s + Jul 27 02:31:18.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.020662586s + Jul 27 02:31:20.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.021336578s + Jul 27 02:31:22.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.020242856s + Jul 27 02:31:24.043: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.02626006s + Jul 27 02:31:26.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.020784856s + Jul 27 02:31:28.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.020441245s + Jul 27 02:31:30.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.020831921s + Jul 27 02:31:32.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.019780658s + Jul 27 02:31:34.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.019692821s + Jul 27 02:31:36.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.020902114s + Jul 27 02:31:38.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.020422699s + Jul 27 02:31:40.042: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.025077066s + Jul 27 02:31:42.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.020116936s + Jul 27 02:31:44.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.020429648s + Jul 27 02:31:46.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.021344313s + Jul 27 02:31:48.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.021674118s + Jul 27 02:31:50.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.020432268s + Jul 27 02:31:52.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.019738571s + Jul 27 02:31:54.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.021588085s + Jul 27 02:31:56.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.020692997s + Jul 27 02:31:58.040: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.02257402s + Jul 27 02:32:00.041: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.023528042s + Jul 27 02:32:02.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.02190497s + Jul 27 02:32:04.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.020789338s + Jul 27 02:32:06.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.020908044s + Jul 27 02:32:08.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.02123425s + Jul 27 02:32:10.042: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.024775998s + Jul 27 02:32:12.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.019375304s + Jul 27 02:32:14.042: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.025123171s + Jul 27 02:32:16.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.020808239s + Jul 27 02:32:18.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.020236955s + Jul 27 02:32:20.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.021210199s + Jul 27 02:32:22.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.020297082s + Jul 27 02:32:24.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.019354486s + Jul 27 02:32:26.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.020791213s + Jul 27 02:32:28.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.022074676s + Jul 27 02:32:30.048: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.031264313s + Jul 27 02:32:32.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.020053833s + Jul 27 02:32:34.093: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.07606171s + Jul 27 02:32:36.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.020386913s + Jul 27 02:32:38.048: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.030937555s + Jul 27 02:32:40.041: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.023733958s + Jul 27 02:32:42.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.020019895s + Jul 27 02:32:44.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.020254659s + Jul 27 02:32:46.052: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.035121816s + Jul 27 02:32:48.053: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.035604592s + Jul 27 02:32:50.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.020950563s + Jul 27 02:32:52.045: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.027930566s + Jul 27 02:32:54.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.02093263s + Jul 27 02:32:56.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.02116322s + Jul 27 02:32:58.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.020518741s + Jul 27 02:33:00.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.02088656s + Jul 27 02:33:02.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.020518177s + Jul 27 02:33:04.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.020410593s + Jul 27 02:33:06.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m0.019895535s + Jul 27 02:33:08.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.021982975s + Jul 27 02:33:10.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.021135624s + Jul 27 02:33:12.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.019745875s + Jul 27 02:33:14.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.020299733s + Jul 27 02:33:16.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.01939512s + Jul 27 02:33:18.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.020515379s + Jul 27 02:33:20.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.020153911s + Jul 27 02:33:22.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.020384105s + Jul 27 02:33:24.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.020264624s + Jul 27 02:33:26.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.020673416s + Jul 27 02:33:28.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.021087077s + Jul 27 02:33:30.040: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.022711606s + Jul 27 02:33:32.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.020749644s + Jul 27 02:33:34.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.022300699s + Jul 27 02:33:36.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.020655058s + Jul 27 02:33:38.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.020611722s + Jul 27 02:33:40.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.021088875s + Jul 27 02:33:42.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.019771448s + Jul 27 02:33:44.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.020727361s + Jul 27 02:33:46.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.020288625s + Jul 27 02:33:48.059: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.041501853s + Jul 27 02:33:50.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.020263537s + Jul 27 02:33:52.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.020696433s + Jul 27 02:33:54.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.020888258s + Jul 27 02:33:56.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.020698649s + Jul 27 02:33:58.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.019527339s + Jul 27 02:34:00.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.019729008s + Jul 27 02:34:02.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.020615707s + Jul 27 02:34:04.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.019556953s + Jul 27 02:34:06.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.020106466s + Jul 27 02:34:08.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.019967416s + Jul 27 02:34:10.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m4.021111919s + Jul 27 02:34:12.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.019865253s + Jul 27 02:34:14.041: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.023360573s + Jul 27 02:34:16.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.019596512s + Jul 27 02:34:18.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.020914508s + Jul 27 02:34:20.075: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.058020058s + Jul 27 02:34:22.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.019955317s + Jul 27 02:34:24.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.020960936s + Jul 27 02:34:26.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.020901364s + Jul 27 02:34:28.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.020433635s + Jul 27 02:34:30.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.020522376s + Jul 27 02:34:32.040: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.022783379s + Jul 27 02:34:34.054: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.036807304s + Jul 27 02:34:36.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.019949309s + Jul 27 02:34:38.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.02124244s + Jul 27 02:34:40.047: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.029797497s + Jul 27 02:34:42.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.019961622s + Jul 27 02:34:44.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.019946774s + Jul 27 02:34:46.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.020841333s + Jul 27 02:34:48.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.020825871s + Jul 27 02:34:50.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.021798268s + Jul 27 02:34:52.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.01997352s + Jul 27 02:34:54.052: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.034801562s + Jul 27 02:34:56.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.021100737s + Jul 27 02:34:58.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.02184123s + Jul 27 02:35:00.041: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.02391146s + Jul 27 02:35:02.058: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.041247774s + Jul 27 02:35:04.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.019570723s + Jul 27 02:35:06.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.020069387s + Jul 27 02:35:08.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.020447546s + Jul 27 02:35:10.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.021095603s + Jul 27 02:35:12.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.020671017s + Jul 27 02:35:14.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m8.020152088s + Jul 27 02:35:16.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.020085673s + Jul 27 02:35:18.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.022141216s + Jul 27 02:35:20.040: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.022401064s + Jul 27 02:35:22.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.020095752s + Jul 27 02:35:24.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.019717097s + Jul 27 02:35:26.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.020782571s + Jul 27 02:35:28.048: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.030790796s + Jul 27 02:35:30.042: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.024536409s + Jul 27 02:35:32.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.019857847s + Jul 27 02:35:34.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.020131937s + Jul 27 02:35:36.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.020392947s + Jul 27 02:35:38.036: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.019062604s + Jul 27 02:35:40.044: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.026350548s + Jul 27 02:35:42.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.021165738s + Jul 27 02:35:44.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.020930792s + Jul 27 02:35:46.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.021104923s + Jul 27 02:35:48.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.021228186s + Jul 27 02:35:50.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.020025497s + Jul 27 02:35:52.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.019766955s + Jul 27 02:35:54.039: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.021489166s + Jul 27 02:35:56.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.020087443s + Jul 27 02:35:58.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.019800026s + Jul 27 02:36:00.037: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.020216341s + Jul 27 02:36:02.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.020941354s + Jul 27 02:36:04.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.020393998s + Jul 27 02:36:06.038: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.020697391s + Jul 27 02:36:06.047: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5m0.03022765s + STEP: removing the label kubernetes.io/e2e-0a328552-a0cb-492d-9f48-b5bdb4dfc50b off the node 10.245.128.19 07/27/23 02:36:06.047 + STEP: verifying the node doesn't have the label kubernetes.io/e2e-0a328552-a0cb-492d-9f48-b5bdb4dfc50b 07/27/23 02:36:06.09 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 22:01:24.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] StatefulSet + Jul 27 02:36:06.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "statefulset-504" for this suite. 06/12/23 22:01:24.893 + STEP: Destroying namespace "sched-pred-2248" for this suite. 07/27/23 02:36:06.114 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSS ------------------------------ -[sig-node] Security Context When creating a pod with readOnlyRootFilesystem - should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:486 -[BeforeEach] [sig-node] Security Context +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 +[BeforeEach] [sig-node] Probing container set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:01:24.917 -Jun 12 22:01:24.917: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename security-context-test 06/12/23 22:01:24.919 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:24.989 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:25.005 -[BeforeEach] [sig-node] Security Context +STEP: Creating a kubernetes client 07/27/23 02:36:06.139 +Jul 27 02:36:06.139: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-probe 07/27/23 02:36:06.14 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:36:06.184 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:36:06.196 +[BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Security Context - test/e2e/common/node/security_context.go:50 -[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:486 -Jun 12 22:01:25.049: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e" in namespace "security-context-test-9002" to be "Succeeded or Failed" -Jun 12 22:01:25.063: INFO: Pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e": Phase="Pending", Reason="", readiness=false. 
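For reference, the hostPort-conflict predicate recorded above (pod4/pod5 in namespace sched-pred-2248) comes down to two pod specs that differ only in the hostIP of an otherwise identical host-port claim. The following client-go sketch is illustrative only and is not taken from the test sources: the image, the kubernetes.io/hostname selector value "worker-1", the "default" namespace, and the KUBECONFIG handling are all assumptions.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostPortPod builds a pod that claims TCP host port 54322 on one node.
// hostIP "0.0.0.0" binds the wildcard address; a concrete address such as a
// node IP asks for a single interface, but the scheduler still treats both
// as the same port claim on that node.
func hostPortPod(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// Pin both pods to one node; the label value is a placeholder.
			NodeSelector: map[string]string{"kubernetes.io/hostname": "worker-1"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.43", // placeholder image
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	pods := kubernetes.NewForConfigOrDie(cfg).CoreV1().Pods("default")

	// pod4 (wildcard hostIP) schedules; pod5 reuses 54322/TCP on the same
	// node with a specific hostIP and is left Pending by the scheduler.
	for _, p := range []*corev1.Pod{hostPortPod("pod4", "0.0.0.0"), hostPortPod("pod5", "10.245.128.19")} {
		if _, err := pods.Create(context.TODO(), p, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("created", p.Name)
	}
}

Because 0.0.0.0 claims the port on every address of the node, the scheduler reports a port conflict for the second pod, which is why the log above shows pod5 polled as Pending for the full five-minute window.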
Elapsed: 13.794378ms -Jun 12 22:01:27.083: INFO: Pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03444542s -Jun 12 22:01:29.102: INFO: Pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052881548s -Jun 12 22:01:31.101: INFO: Pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051958676s -Jun 12 22:01:31.101: INFO: Pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e" satisfied condition "Succeeded or Failed" -[AfterEach] [sig-node] Security Context +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 +STEP: Creating pod liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4 in namespace container-probe-6856 07/27/23 02:36:06.208 +Jul 27 02:36:06.236: INFO: Waiting up to 5m0s for pod "liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4" in namespace "container-probe-6856" to be "not pending" +Jul 27 02:36:06.257: INFO: Pod "liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.405842ms +Jul 27 02:36:08.268: INFO: Pod "liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032278664s +Jul 27 02:36:10.279: INFO: Pod "liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4": Phase="Running", Reason="", readiness=true. Elapsed: 4.042428998s +Jul 27 02:36:10.279: INFO: Pod "liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4" satisfied condition "not pending" +Jul 27 02:36:10.279: INFO: Started pod liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4 in namespace container-probe-6856 +STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 02:36:10.279 +Jul 27 02:36:10.304: INFO: Initial restart count of pod liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4 is 0 +STEP: deleting the pod 07/27/23 02:40:11.847 +[AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 -Jun 12 22:01:31.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Security Context +Jul 27 02:40:11.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Security Context +[DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Security Context +[DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 -STEP: Destroying namespace "security-context-test-9002" for this suite. 06/12/23 22:01:31.152 +STEP: Destroying namespace "container-probe-6856" for this suite. 
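The tcp:8080 liveness-probe case above only has to observe that restartCount stays at 0 for roughly four minutes. A minimal sketch of the kind of pod it watches is below; the image and the assumption that agnhost's netexec server answers on its default TCP port 8080, as well as the probe timings, are illustrative and not copied from the test run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.43", // placeholder image
				// Assumed to listen on TCP 8080 (netexec's default HTTP port).
				Args: []string{"netexec"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	// The kubelet dials 8080 every period; the container answers, so
	// restartCount stays 0 -- the condition the test watches for ~4 minutes.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Since the probe's TCP dial to 8080 always succeeds, the kubelet never restarts the container, matching the "Initial restart count ... is 0" observation in the log.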
07/27/23 02:40:11.891 ------------------------------ -• [SLOW TEST] [6.253 seconds] -[sig-node] Security Context +• [SLOW TEST] [245.777 seconds] +[sig-node] Probing container test/e2e/common/node/framework.go:23 - When creating a pod with readOnlyRootFilesystem - test/e2e/common/node/security_context.go:430 - should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:486 + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Security Context + [BeforeEach] [sig-node] Probing container set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:01:24.917 - Jun 12 22:01:24.917: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename security-context-test 06/12/23 22:01:24.919 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:24.989 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:25.005 - [BeforeEach] [sig-node] Security Context + STEP: Creating a kubernetes client 07/27/23 02:36:06.139 + Jul 27 02:36:06.139: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-probe 07/27/23 02:36:06.14 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:36:06.184 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:36:06.196 + [BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Security Context - test/e2e/common/node/security_context.go:50 - [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:486 - Jun 12 22:01:25.049: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e" in namespace "security-context-test-9002" to be "Succeeded or Failed" - Jun 12 22:01:25.063: INFO: Pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.794378ms - Jun 12 22:01:27.083: INFO: Pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03444542s - Jun 12 22:01:29.102: INFO: Pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052881548s - Jun 12 22:01:31.101: INFO: Pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.051958676s - Jun 12 22:01:31.101: INFO: Pod "busybox-readonly-false-da8cae88-a5c3-4501-822a-41638b47ab7e" satisfied condition "Succeeded or Failed" - [AfterEach] [sig-node] Security Context + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 + STEP: Creating pod liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4 in namespace container-probe-6856 07/27/23 02:36:06.208 + Jul 27 02:36:06.236: INFO: Waiting up to 5m0s for pod "liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4" in namespace "container-probe-6856" to be "not pending" + Jul 27 02:36:06.257: INFO: Pod "liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.405842ms + Jul 27 02:36:08.268: INFO: Pod "liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032278664s + Jul 27 02:36:10.279: INFO: Pod "liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4": Phase="Running", Reason="", readiness=true. Elapsed: 4.042428998s + Jul 27 02:36:10.279: INFO: Pod "liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4" satisfied condition "not pending" + Jul 27 02:36:10.279: INFO: Started pod liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4 in namespace container-probe-6856 + STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 02:36:10.279 + Jul 27 02:36:10.304: INFO: Initial restart count of pod liveness-1fceaee3-7541-48b4-9c8d-89e092b24db4 is 0 + STEP: deleting the pod 07/27/23 02:40:11.847 + [AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 - Jun 12 22:01:31.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Security Context + Jul 27 02:40:11.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Security Context + [DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Security Context + [DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 - STEP: Destroying namespace "security-context-test-9002" for this suite. 06/12/23 22:01:31.152 + STEP: Destroying namespace "container-probe-6856" for this suite. 
07/27/23 02:40:11.891 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-storage] ConfigMap - should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:89 -[BeforeEach] [sig-storage] ConfigMap +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:01:31.174 -Jun 12 22:01:31.175: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 22:01:31.176 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:31.227 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:31.272 -[BeforeEach] [sig-storage] ConfigMap +STEP: Creating a kubernetes client 07/27/23 02:40:11.916 +Jul 27 02:40:11.916: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:40:11.917 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:11.959 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:11.972 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:89 -STEP: Creating configMap with name configmap-test-volume-map-63a64e91-e841-491e-903e-496cd69d06b9 06/12/23 22:01:31.291 -STEP: Creating a pod to test consume configMaps 06/12/23 22:01:31.307 -Jun 12 22:01:31.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d" in namespace "configmap-6734" to be "Succeeded or Failed" -Jun 12 22:01:31.353: INFO: Pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.675959ms -Jun 12 22:01:33.375: INFO: Pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038254047s -Jun 12 22:01:35.366: INFO: Pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d": Phase="Running", Reason="", readiness=false. Elapsed: 4.029235892s -Jun 12 22:01:37.365: INFO: Pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028441209s -STEP: Saw pod success 06/12/23 22:01:37.366 -Jun 12 22:01:37.366: INFO: Pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d" satisfied condition "Succeeded or Failed" -Jun 12 22:01:37.377: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d container agnhost-container: -STEP: delete the pod 06/12/23 22:01:37.464 -Jun 12 22:01:37.525: INFO: Waiting for pod pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d to disappear -Jun 12 22:01:37.533: INFO: Pod pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d no longer exists -[AfterEach] [sig-storage] ConfigMap +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 +STEP: creating Agnhost RC 07/27/23 02:40:11.984 +Jul 27 02:40:11.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2545 create -f -' +Jul 27 02:40:12.802: INFO: stderr: "" +Jul 27 02:40:12.802: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 07/27/23 02:40:12.802 +Jul 27 02:40:13.814: INFO: Selector matched 1 pods for map[app:agnhost] +Jul 27 02:40:13.814: INFO: Found 0 / 1 +Jul 27 02:40:14.813: INFO: Selector matched 1 pods for map[app:agnhost] +Jul 27 02:40:14.813: INFO: Found 1 / 1 +Jul 27 02:40:14.813: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods 07/27/23 02:40:14.813 +Jul 27 02:40:14.825: INFO: Selector matched 1 pods for map[app:agnhost] +Jul 27 02:40:14.825: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Jul 27 02:40:14.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2545 patch pod agnhost-primary-hkg5r -p {"metadata":{"annotations":{"x":"y"}}}' +Jul 27 02:40:14.925: INFO: stderr: "" +Jul 27 02:40:14.925: INFO: stdout: "pod/agnhost-primary-hkg5r patched\n" +STEP: checking annotations 07/27/23 02:40:14.925 +Jul 27 02:40:14.937: INFO: Selector matched 1 pods for map[app:agnhost] +Jul 27 02:40:14.937: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 22:01:37.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] ConfigMap +Jul 27 02:40:14.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-6734" for this suite. 06/12/23 22:01:37.551 +STEP: Destroying namespace "kubectl-2545" for this suite. 
07/27/23 02:40:14.956 ------------------------------ -• [SLOW TEST] [6.394 seconds] -[sig-storage] ConfigMap -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:89 +• [3.066 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl patch + test/e2e/kubectl/kubectl.go:1646 + should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] ConfigMap + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:01:31.174 - Jun 12 22:01:31.175: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 22:01:31.176 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:31.227 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:31.272 - [BeforeEach] [sig-storage] ConfigMap + STEP: Creating a kubernetes client 07/27/23 02:40:11.916 + Jul 27 02:40:11.916: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:40:11.917 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:11.959 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:11.972 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:89 - STEP: Creating configMap with name configmap-test-volume-map-63a64e91-e841-491e-903e-496cd69d06b9 06/12/23 22:01:31.291 - STEP: Creating a pod to test consume configMaps 06/12/23 22:01:31.307 - Jun 12 22:01:31.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d" in namespace "configmap-6734" to be "Succeeded or Failed" - Jun 12 22:01:31.353: INFO: Pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.675959ms - Jun 12 22:01:33.375: INFO: Pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038254047s - Jun 12 22:01:35.366: INFO: Pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d": Phase="Running", Reason="", readiness=false. Elapsed: 4.029235892s - Jun 12 22:01:37.365: INFO: Pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028441209s - STEP: Saw pod success 06/12/23 22:01:37.366 - Jun 12 22:01:37.366: INFO: Pod "pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d" satisfied condition "Succeeded or Failed" - Jun 12 22:01:37.377: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d container agnhost-container: - STEP: delete the pod 06/12/23 22:01:37.464 - Jun 12 22:01:37.525: INFO: Waiting for pod pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d to disappear - Jun 12 22:01:37.533: INFO: Pod pod-configmaps-ddc896f2-eac5-4a3e-bfc2-e0717d997e1d no longer exists - [AfterEach] [sig-storage] ConfigMap + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 + STEP: creating Agnhost RC 07/27/23 02:40:11.984 + Jul 27 02:40:11.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2545 create -f -' + Jul 27 02:40:12.802: INFO: stderr: "" + Jul 27 02:40:12.802: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 07/27/23 02:40:12.802 + Jul 27 02:40:13.814: INFO: Selector matched 1 pods for map[app:agnhost] + Jul 27 02:40:13.814: INFO: Found 0 / 1 + Jul 27 02:40:14.813: INFO: Selector matched 1 pods for map[app:agnhost] + Jul 27 02:40:14.813: INFO: Found 1 / 1 + Jul 27 02:40:14.813: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + STEP: patching all pods 07/27/23 02:40:14.813 + Jul 27 02:40:14.825: INFO: Selector matched 1 pods for map[app:agnhost] + Jul 27 02:40:14.825: INFO: ForEach: Found 1 pods from the filter. Now looping through them. + Jul 27 02:40:14.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-2545 patch pod agnhost-primary-hkg5r -p {"metadata":{"annotations":{"x":"y"}}}' + Jul 27 02:40:14.925: INFO: stderr: "" + Jul 27 02:40:14.925: INFO: stdout: "pod/agnhost-primary-hkg5r patched\n" + STEP: checking annotations 07/27/23 02:40:14.925 + Jul 27 02:40:14.937: INFO: Selector matched 1 pods for map[app:agnhost] + Jul 27 02:40:14.937: INFO: ForEach: Found 1 pods from the filter. Now looping through them. + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 22:01:37.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] ConfigMap + Jul 27 02:40:14.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-6734" for this suite. 06/12/23 22:01:37.551 + STEP: Destroying namespace "kubectl-2545" for this suite. 
07/27/23 02:40:14.956 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSS ------------------------------ -[sig-apps] ReplicaSet - should adopt matching pods on creation and release no longer matching pods [Conformance] - test/e2e/apps/replica_set.go:131 -[BeforeEach] [sig-apps] ReplicaSet +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:01:37.57 -Jun 12 22:01:37.570: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replicaset 06/12/23 22:01:37.572 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:37.66 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:37.702 -[BeforeEach] [sig-apps] ReplicaSet +STEP: Creating a kubernetes client 07/27/23 02:40:14.982 +Jul 27 02:40:14.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:40:14.983 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:15.029 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:15.041 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[It] should adopt matching pods on creation and release no longer matching pods [Conformance] - test/e2e/apps/replica_set.go:131 -STEP: Given a Pod with a 'name' label pod-adoption-release is created 06/12/23 22:01:37.729 -Jun 12 22:01:37.763: INFO: Waiting up to 5m0s for pod "pod-adoption-release" in namespace "replicaset-8212" to be "running and ready" -Jun 12 22:01:37.776: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. Elapsed: 13.32216ms -Jun 12 22:01:37.776: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:01:39.789: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026098985s -Jun 12 22:01:39.789: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:01:41.787: INFO: Pod "pod-adoption-release": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.024167954s -Jun 12 22:01:41.787: INFO: The phase of Pod pod-adoption-release is Running (Ready = true) -Jun 12 22:01:41.787: INFO: Pod "pod-adoption-release" satisfied condition "running and ready" -STEP: When a replicaset with a matching selector is created 06/12/23 22:01:41.796 -STEP: Then the orphan pod is adopted 06/12/23 22:01:41.82 -STEP: When the matched label of one of its pods change 06/12/23 22:01:42.845 -Jun 12 22:01:42.857: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 -STEP: Then the pod is released 06/12/23 22:01:42.888 -[AfterEach] [sig-apps] ReplicaSet +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 +STEP: Creating a pod to test emptydir volume type on tmpfs 07/27/23 02:40:15.053 +W0727 02:40:15.081943 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:40:15.082: INFO: Waiting up to 5m0s for pod "pod-1c722215-4939-47f1-9bc1-564229361cd3" in namespace "emptydir-3115" to be "Succeeded or Failed" +Jul 27 02:40:15.091: INFO: Pod "pod-1c722215-4939-47f1-9bc1-564229361cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.349158ms +Jul 27 02:40:17.104: INFO: Pod "pod-1c722215-4939-47f1-9bc1-564229361cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022202252s +Jul 27 02:40:19.104: INFO: Pod "pod-1c722215-4939-47f1-9bc1-564229361cd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022320015s +STEP: Saw pod success 07/27/23 02:40:19.104 +Jul 27 02:40:19.104: INFO: Pod "pod-1c722215-4939-47f1-9bc1-564229361cd3" satisfied condition "Succeeded or Failed" +Jul 27 02:40:19.114: INFO: Trying to get logs from node 10.245.128.19 pod pod-1c722215-4939-47f1-9bc1-564229361cd3 container test-container: +STEP: delete the pod 07/27/23 02:40:19.202 +Jul 27 02:40:19.232: INFO: Waiting for pod pod-1c722215-4939-47f1-9bc1-564229361cd3 to disappear +Jul 27 02:40:19.241: INFO: Pod pod-1c722215-4939-47f1-9bc1-564229361cd3 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 22:01:43.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicaSet +Jul 27 02:40:19.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "replicaset-8212" for this suite. 06/12/23 22:01:43.933 +STEP: Destroying namespace "emptydir-3115" for this suite. 
07/27/23 02:40:19.255 ------------------------------ -• [SLOW TEST] [6.381 seconds] -[sig-apps] ReplicaSet -test/e2e/apps/framework.go:23 - should adopt matching pods on creation and release no longer matching pods [Conformance] - test/e2e/apps/replica_set.go:131 +• [4.297 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicaSet + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:01:37.57 - Jun 12 22:01:37.570: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replicaset 06/12/23 22:01:37.572 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:37.66 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:37.702 - [BeforeEach] [sig-apps] ReplicaSet + STEP: Creating a kubernetes client 07/27/23 02:40:14.982 + Jul 27 02:40:14.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:40:14.983 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:15.029 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:15.041 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [It] should adopt matching pods on creation and release no longer matching pods [Conformance] - test/e2e/apps/replica_set.go:131 - STEP: Given a Pod with a 'name' label pod-adoption-release is created 06/12/23 22:01:37.729 - Jun 12 22:01:37.763: INFO: Waiting up to 5m0s for pod "pod-adoption-release" in namespace "replicaset-8212" to be "running and ready" - Jun 12 22:01:37.776: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. Elapsed: 13.32216ms - Jun 12 22:01:37.776: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:01:39.789: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026098985s - Jun 12 22:01:39.789: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:01:41.787: INFO: Pod "pod-adoption-release": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.024167954s - Jun 12 22:01:41.787: INFO: The phase of Pod pod-adoption-release is Running (Ready = true) - Jun 12 22:01:41.787: INFO: Pod "pod-adoption-release" satisfied condition "running and ready" - STEP: When a replicaset with a matching selector is created 06/12/23 22:01:41.796 - STEP: Then the orphan pod is adopted 06/12/23 22:01:41.82 - STEP: When the matched label of one of its pods change 06/12/23 22:01:42.845 - Jun 12 22:01:42.857: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 - STEP: Then the pod is released 06/12/23 22:01:42.888 - [AfterEach] [sig-apps] ReplicaSet + [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 + STEP: Creating a pod to test emptydir volume type on tmpfs 07/27/23 02:40:15.053 + W0727 02:40:15.081943 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:40:15.082: INFO: Waiting up to 5m0s for pod "pod-1c722215-4939-47f1-9bc1-564229361cd3" in namespace "emptydir-3115" to be "Succeeded or Failed" + Jul 27 02:40:15.091: INFO: Pod "pod-1c722215-4939-47f1-9bc1-564229361cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.349158ms + Jul 27 02:40:17.104: INFO: Pod "pod-1c722215-4939-47f1-9bc1-564229361cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022202252s + Jul 27 02:40:19.104: INFO: Pod "pod-1c722215-4939-47f1-9bc1-564229361cd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022320015s + STEP: Saw pod success 07/27/23 02:40:19.104 + Jul 27 02:40:19.104: INFO: Pod "pod-1c722215-4939-47f1-9bc1-564229361cd3" satisfied condition "Succeeded or Failed" + Jul 27 02:40:19.114: INFO: Trying to get logs from node 10.245.128.19 pod pod-1c722215-4939-47f1-9bc1-564229361cd3 container test-container: + STEP: delete the pod 07/27/23 02:40:19.202 + Jul 27 02:40:19.232: INFO: Waiting for pod pod-1c722215-4939-47f1-9bc1-564229361cd3 to disappear + Jul 27 02:40:19.241: INFO: Pod pod-1c722215-4939-47f1-9bc1-564229361cd3 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 22:01:43.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicaSet + Jul 27 02:40:19.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "replicaset-8212" for this suite. 06/12/23 22:01:43.933 + STEP: Destroying namespace "emptydir-3115" for this suite. 
07/27/23 02:40:19.255 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSS +SSSSSS ------------------------------ -[sig-network] Proxy version v1 - A set of valid responses are returned for both pod and service Proxy [Conformance] - test/e2e/network/proxy.go:380 -[BeforeEach] version v1 +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 +[BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:01:43.954 -Jun 12 22:01:43.955: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename proxy 06/12/23 22:01:43.958 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:44.001 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:44.035 -[BeforeEach] version v1 +STEP: Creating a kubernetes client 07/27/23 02:40:19.28 +Jul 27 02:40:19.280: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:40:19.281 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:19.341 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:19.355 +[BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 -[It] A set of valid responses are returned for both pod and service Proxy [Conformance] - test/e2e/network/proxy.go:380 -Jun 12 22:01:44.050: INFO: Creating pod... -Jun 12 22:01:44.081: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-8932" to be "running" -Jun 12 22:01:44.099: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 18.376859ms -Jun 12 22:01:46.112: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031414663s -Jun 12 22:01:48.110: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 4.029670444s -Jun 12 22:01:48.110: INFO: Pod "agnhost" satisfied condition "running" -Jun 12 22:01:48.110: INFO: Creating service... 
-Jun 12 22:01:48.150: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=DELETE -Jun 12 22:01:48.205: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE -Jun 12 22:01:48.207: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=OPTIONS -Jun 12 22:01:48.229: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS -Jun 12 22:01:48.229: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=PATCH -Jun 12 22:01:48.251: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH -Jun 12 22:01:48.251: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=POST -Jun 12 22:01:48.291: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST -Jun 12 22:01:48.291: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=PUT -Jun 12 22:01:48.353: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT -Jun 12 22:01:48.353: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=DELETE -Jun 12 22:01:48.383: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE -Jun 12 22:01:48.383: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=OPTIONS -Jun 12 22:01:48.432: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS -Jun 12 22:01:48.432: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=PATCH -Jun 12 22:01:48.451: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH -Jun 12 22:01:48.451: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=POST -Jun 12 22:01:48.495: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST -Jun 12 22:01:48.495: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=PUT -Jun 12 22:01:48.528: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT -Jun 12 22:01:48.528: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=GET -Jun 12 22:01:48.538: INFO: http.Client request:GET StatusCode:301 -Jun 12 22:01:48.538: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=GET -Jun 12 22:01:48.551: INFO: http.Client request:GET StatusCode:301 -Jun 12 22:01:48.551: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=HEAD -Jun 12 22:01:48.564: INFO: http.Client request:HEAD StatusCode:301 -Jun 12 22:01:48.564: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=HEAD -Jun 12 22:01:48.577: INFO: http.Client request:HEAD StatusCode:301 -[AfterEach] version v1 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 +STEP: Creating projection with secret that has name 
projected-secret-test-map-3811a531-269a-440c-b241-2275874bd260 07/27/23 02:40:19.368 +STEP: Creating a pod to test consume secrets 07/27/23 02:40:19.399 +Jul 27 02:40:19.447: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8" in namespace "projected-2893" to be "Succeeded or Failed" +Jul 27 02:40:19.461: INFO: Pod "pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.971078ms +Jul 27 02:40:21.473: INFO: Pod "pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026499042s +Jul 27 02:40:23.472: INFO: Pod "pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025234778s +STEP: Saw pod success 07/27/23 02:40:23.472 +Jul 27 02:40:23.472: INFO: Pod "pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8" satisfied condition "Succeeded or Failed" +Jul 27 02:40:23.482: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8 container projected-secret-volume-test: +STEP: delete the pod 07/27/23 02:40:23.502 +Jul 27 02:40:23.524: INFO: Waiting for pod pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8 to disappear +Jul 27 02:40:23.535: INFO: Pod pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8 no longer exists +[AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 -Jun 12 22:01:48.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] version v1 +Jul 27 02:40:23.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] version v1 +[DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 -[DeferCleanup (Each)] version v1 +[DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 -STEP: Destroying namespace "proxy-8932" for this suite. 06/12/23 22:01:48.598 +STEP: Destroying namespace "projected-2893" for this suite. 
07/27/23 02:40:23.551 ------------------------------ -• [4.663 seconds] -[sig-network] Proxy -test/e2e/network/common/framework.go:23 - version v1 - test/e2e/network/proxy.go:74 - A set of valid responses are returned for both pod and service Proxy [Conformance] - test/e2e/network/proxy.go:380 +• [4.296 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 Begin Captured GinkgoWriter Output >> - [BeforeEach] version v1 + [BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:01:43.954 - Jun 12 22:01:43.955: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename proxy 06/12/23 22:01:43.958 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:44.001 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:44.035 - [BeforeEach] version v1 + STEP: Creating a kubernetes client 07/27/23 02:40:19.28 + Jul 27 02:40:19.280: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:40:19.281 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:19.341 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:19.355 + [BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 - [It] A set of valid responses are returned for both pod and service Proxy [Conformance] - test/e2e/network/proxy.go:380 - Jun 12 22:01:44.050: INFO: Creating pod... - Jun 12 22:01:44.081: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-8932" to be "running" - Jun 12 22:01:44.099: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 18.376859ms - Jun 12 22:01:46.112: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031414663s - Jun 12 22:01:48.110: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 4.029670444s - Jun 12 22:01:48.110: INFO: Pod "agnhost" satisfied condition "running" - Jun 12 22:01:48.110: INFO: Creating service... 
- Jun 12 22:01:48.150: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=DELETE - Jun 12 22:01:48.205: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE - Jun 12 22:01:48.207: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=OPTIONS - Jun 12 22:01:48.229: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS - Jun 12 22:01:48.229: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=PATCH - Jun 12 22:01:48.251: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH - Jun 12 22:01:48.251: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=POST - Jun 12 22:01:48.291: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST - Jun 12 22:01:48.291: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=PUT - Jun 12 22:01:48.353: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT - Jun 12 22:01:48.353: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=DELETE - Jun 12 22:01:48.383: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE - Jun 12 22:01:48.383: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=OPTIONS - Jun 12 22:01:48.432: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS - Jun 12 22:01:48.432: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=PATCH - Jun 12 22:01:48.451: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH - Jun 12 22:01:48.451: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=POST - Jun 12 22:01:48.495: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST - Jun 12 22:01:48.495: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=PUT - Jun 12 22:01:48.528: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT - Jun 12 22:01:48.528: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=GET - Jun 12 22:01:48.538: INFO: http.Client request:GET StatusCode:301 - Jun 12 22:01:48.538: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=GET - Jun 12 22:01:48.551: INFO: http.Client request:GET StatusCode:301 - Jun 12 22:01:48.551: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/pods/agnhost/proxy?method=HEAD - Jun 12 22:01:48.564: INFO: http.Client request:HEAD StatusCode:301 - Jun 12 22:01:48.564: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-8932/services/e2e-proxy-test-service/proxy?method=HEAD - Jun 12 22:01:48.577: INFO: http.Client request:HEAD StatusCode:301 - [AfterEach] version v1 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 + STEP: Creating projection with 
secret that has name projected-secret-test-map-3811a531-269a-440c-b241-2275874bd260 07/27/23 02:40:19.368 + STEP: Creating a pod to test consume secrets 07/27/23 02:40:19.399 + Jul 27 02:40:19.447: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8" in namespace "projected-2893" to be "Succeeded or Failed" + Jul 27 02:40:19.461: INFO: Pod "pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.971078ms + Jul 27 02:40:21.473: INFO: Pod "pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026499042s + Jul 27 02:40:23.472: INFO: Pod "pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025234778s + STEP: Saw pod success 07/27/23 02:40:23.472 + Jul 27 02:40:23.472: INFO: Pod "pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8" satisfied condition "Succeeded or Failed" + Jul 27 02:40:23.482: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8 container projected-secret-volume-test: + STEP: delete the pod 07/27/23 02:40:23.502 + Jul 27 02:40:23.524: INFO: Waiting for pod pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8 to disappear + Jul 27 02:40:23.535: INFO: Pod pod-projected-secrets-82ad737b-a1ab-4afd-8745-d35a5ab47bf8 no longer exists + [AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 - Jun 12 22:01:48.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] version v1 + Jul 27 02:40:23.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] version v1 + [DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 - [DeferCleanup (Each)] version v1 + [DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 - STEP: Destroying namespace "proxy-8932" for this suite. 06/12/23 22:01:48.598 + STEP: Destroying namespace "projected-2893" for this suite. 
07/27/23 02:40:23.551 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] EndpointSliceMirroring - should mirror a custom Endpoints resource through create update and delete [Conformance] - test/e2e/network/endpointslicemirroring.go:53 -[BeforeEach] [sig-network] EndpointSliceMirroring +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 +[BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:01:48.62 -Jun 12 22:01:48.620: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename endpointslicemirroring 06/12/23 22:01:48.622 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:48.675 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:48.689 -[BeforeEach] [sig-network] EndpointSliceMirroring +STEP: Creating a kubernetes client 07/27/23 02:40:23.576 +Jul 27 02:40:23.576: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:40:23.577 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:23.624 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:23.636 +[BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] EndpointSliceMirroring - test/e2e/network/endpointslicemirroring.go:41 -[It] should mirror a custom Endpoints resource through create update and delete [Conformance] - test/e2e/network/endpointslicemirroring.go:53 -STEP: mirroring a new custom Endpoint 06/12/23 22:01:48.75 -Jun 12 22:01:48.781: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 -STEP: mirroring an update to a custom Endpoint 06/12/23 22:01:50.793 -STEP: mirroring deletion of a custom Endpoint 06/12/23 22:01:50.827 -Jun 12 22:01:50.863: INFO: Waiting for 0 EndpointSlices to exist, got 1 -[AfterEach] [sig-network] EndpointSliceMirroring +[It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 +Jul 27 02:40:23.666: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node +STEP: Creating projection with configMap that has name projected-configmap-test-upd-e7594f1b-6b95-4b3f-8514-84e2d02c725a 07/27/23 02:40:23.666 +STEP: Creating the pod 07/27/23 02:40:23.684 +Jul 27 02:40:23.713: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517" in namespace "projected-7091" to be "running and ready" +Jul 27 02:40:23.723: INFO: Pod "pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517": Phase="Pending", Reason="", readiness=false. Elapsed: 10.576333ms +Jul 27 02:40:23.723: INFO: The phase of Pod pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:40:25.735: INFO: Pod "pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.0226335s +Jul 27 02:40:25.735: INFO: The phase of Pod pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517 is Running (Ready = true) +Jul 27 02:40:25.735: INFO: Pod "pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517" satisfied condition "running and ready" +STEP: Updating configmap projected-configmap-test-upd-e7594f1b-6b95-4b3f-8514-84e2d02c725a 07/27/23 02:40:25.794 +STEP: waiting to observe update in volume 07/27/23 02:40:25.814 +[AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 -Jun 12 22:01:52.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring +Jul 27 02:40:27.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring +[DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring +[DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 -STEP: Destroying namespace "endpointslicemirroring-937" for this suite. 06/12/23 22:01:52.896 +STEP: Destroying namespace "projected-7091" for this suite. 07/27/23 02:40:27.884 ------------------------------ -• [4.295 seconds] -[sig-network] EndpointSliceMirroring -test/e2e/network/common/framework.go:23 - should mirror a custom Endpoints resource through create update and delete [Conformance] - test/e2e/network/endpointslicemirroring.go:53 +• [4.336 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] EndpointSliceMirroring + [BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:01:48.62 - Jun 12 22:01:48.620: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename endpointslicemirroring 06/12/23 22:01:48.622 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:48.675 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:48.689 - [BeforeEach] [sig-network] EndpointSliceMirroring + STEP: Creating a kubernetes client 07/27/23 02:40:23.576 + Jul 27 02:40:23.576: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:40:23.577 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:23.624 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:23.636 + [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] EndpointSliceMirroring - test/e2e/network/endpointslicemirroring.go:41 - [It] should mirror a custom Endpoints resource through create update and delete [Conformance] - test/e2e/network/endpointslicemirroring.go:53 - STEP: mirroring a new custom Endpoint 06/12/23 22:01:48.75 - Jun 12 22:01:48.781: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 - STEP: mirroring an update to a custom Endpoint 06/12/23 22:01:50.793 - STEP: mirroring deletion of a custom Endpoint 06/12/23 22:01:50.827 - Jun 12 22:01:50.863: INFO: Waiting for 
0 EndpointSlices to exist, got 1 - [AfterEach] [sig-network] EndpointSliceMirroring + [It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 + Jul 27 02:40:23.666: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node + STEP: Creating projection with configMap that has name projected-configmap-test-upd-e7594f1b-6b95-4b3f-8514-84e2d02c725a 07/27/23 02:40:23.666 + STEP: Creating the pod 07/27/23 02:40:23.684 + Jul 27 02:40:23.713: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517" in namespace "projected-7091" to be "running and ready" + Jul 27 02:40:23.723: INFO: Pod "pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517": Phase="Pending", Reason="", readiness=false. Elapsed: 10.576333ms + Jul 27 02:40:23.723: INFO: The phase of Pod pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:40:25.735: INFO: Pod "pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517": Phase="Running", Reason="", readiness=true. Elapsed: 2.0226335s + Jul 27 02:40:25.735: INFO: The phase of Pod pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517 is Running (Ready = true) + Jul 27 02:40:25.735: INFO: Pod "pod-projected-configmaps-d1d66d80-50d8-4748-8946-af4410d52517" satisfied condition "running and ready" + STEP: Updating configmap projected-configmap-test-upd-e7594f1b-6b95-4b3f-8514-84e2d02c725a 07/27/23 02:40:25.794 + STEP: waiting to observe update in volume 07/27/23 02:40:25.814 + [AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 - Jun 12 22:01:52.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring + Jul 27 02:40:27.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring + [DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring + [DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 - STEP: Destroying namespace "endpointslicemirroring-937" for this suite. 06/12/23 22:01:52.896 + STEP: Destroying namespace "projected-7091" for this suite. 
07/27/23 02:40:27.884 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Downward API volume - should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:68 -[BeforeEach] [sig-storage] Downward API volume +[sig-api-machinery] Namespaces [Serial] + should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:299 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:01:52.923 -Jun 12 22:01:52.923: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 22:01:52.925 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:52.967 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:52.986 -[BeforeEach] [sig-storage] Downward API volume +STEP: Creating a kubernetes client 07/27/23 02:40:27.913 +Jul 27 02:40:27.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename namespaces 07/27/23 02:40:27.914 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:27.958 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:27.971 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 -[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:68 -STEP: Creating a pod to test downward API volume plugin 06/12/23 22:01:53.002 -Jun 12 22:01:53.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08" in namespace "downward-api-7788" to be "Succeeded or Failed" -Jun 12 22:01:53.057: INFO: Pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08": Phase="Pending", Reason="", readiness=false. Elapsed: 20.858305ms -Jun 12 22:01:55.072: INFO: Pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03583602s -Jun 12 22:01:57.105: INFO: Pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068235702s -Jun 12 22:01:59.068: INFO: Pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.031587081s -STEP: Saw pod success 06/12/23 22:01:59.068 -Jun 12 22:01:59.069: INFO: Pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08" satisfied condition "Succeeded or Failed" -Jun 12 22:01:59.081: INFO: Trying to get logs from node 10.138.75.112 pod downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08 container client-container: -STEP: delete the pod 06/12/23 22:01:59.132 -Jun 12 22:01:59.164: INFO: Waiting for pod downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08 to disappear -Jun 12 22:01:59.174: INFO: Pod downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08 no longer exists -[AfterEach] [sig-storage] Downward API volume +[It] should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:299 +STEP: Read namespace status 07/27/23 02:40:27.987 +Jul 27 02:40:28.008: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} +STEP: Patch namespace status 07/27/23 02:40:28.008 +Jul 27 02:40:28.035: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} +STEP: Update namespace status 07/27/23 02:40:28.035 +Jul 27 02:40:28.127: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} +[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 22:01:59.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Downward API volume +Jul 27 02:40:28.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-7788" for this suite. 06/12/23 22:01:59.193 +STEP: Destroying namespace "namespaces-1952" for this suite. 
07/27/23 02:40:28.145 ------------------------------ -• [SLOW TEST] [6.287 seconds] -[sig-storage] Downward API volume -test/e2e/common/storage/framework.go:23 - should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:68 +• [0.259 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:299 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Downward API volume + [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:01:52.923 - Jun 12 22:01:52.923: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 22:01:52.925 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:52.967 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:52.986 - [BeforeEach] [sig-storage] Downward API volume + STEP: Creating a kubernetes client 07/27/23 02:40:27.913 + Jul 27 02:40:27.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename namespaces 07/27/23 02:40:27.914 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:27.958 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:27.971 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 - [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:68 - STEP: Creating a pod to test downward API volume plugin 06/12/23 22:01:53.002 - Jun 12 22:01:53.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08" in namespace "downward-api-7788" to be "Succeeded or Failed" - Jun 12 22:01:53.057: INFO: Pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08": Phase="Pending", Reason="", readiness=false. Elapsed: 20.858305ms - Jun 12 22:01:55.072: INFO: Pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03583602s - Jun 12 22:01:57.105: INFO: Pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068235702s - Jun 12 22:01:59.068: INFO: Pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.031587081s - STEP: Saw pod success 06/12/23 22:01:59.068 - Jun 12 22:01:59.069: INFO: Pod "downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08" satisfied condition "Succeeded or Failed" - Jun 12 22:01:59.081: INFO: Trying to get logs from node 10.138.75.112 pod downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08 container client-container: - STEP: delete the pod 06/12/23 22:01:59.132 - Jun 12 22:01:59.164: INFO: Waiting for pod downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08 to disappear - Jun 12 22:01:59.174: INFO: Pod downwardapi-volume-28921828-7815-4906-bd2d-43c485fd7b08 no longer exists - [AfterEach] [sig-storage] Downward API volume + [It] should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:299 + STEP: Read namespace status 07/27/23 02:40:27.987 + Jul 27 02:40:28.008: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} + STEP: Patch namespace status 07/27/23 02:40:28.008 + Jul 27 02:40:28.035: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} + STEP: Update namespace status 07/27/23 02:40:28.035 + Jul 27 02:40:28.127: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} + [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 22:01:59.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Downward API volume + Jul 27 02:40:28.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-7788" for this suite. 06/12/23 22:01:59.193 + STEP: Destroying namespace "namespaces-1952" for this suite. 
07/27/23 02:40:28.145 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSSSSSSS ------------------------------ -[sig-apps] Daemon set [Serial] - should list and delete a collection of DaemonSets [Conformance] - test/e2e/apps/daemon_set.go:823 -[BeforeEach] [sig-apps] Daemon set [Serial] +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 +[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:01:59.217 -Jun 12 22:01:59.217: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename daemonsets 06/12/23 22:01:59.22 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:59.27 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:59.305 -[BeforeEach] [sig-apps] Daemon set [Serial] +STEP: Creating a kubernetes client 07/27/23 02:40:28.172 +Jul 27 02:40:28.172: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename statefulset 07/27/23 02:40:28.173 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:28.226 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:28.241 +[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 -[It] should list and delete a collection of DaemonSets [Conformance] - test/e2e/apps/daemon_set.go:823 -STEP: Creating simple DaemonSet "daemon-set" 06/12/23 22:01:59.435 -STEP: Check that daemon pods launch on every node of the cluster. 
06/12/23 22:01:59.464 -Jun 12 22:01:59.509: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 22:01:59.509: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 22:02:00.673: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 22:02:00.673: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 22:02:01.685: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 22:02:01.685: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 22:02:02.652: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 22:02:02.652: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 22:02:03.657: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 22:02:03.657: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 22:02:04.543: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 22:02:04.543: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 22:02:05.546: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 -Jun 12 22:02:05.546: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set -STEP: listing all DeamonSets 06/12/23 22:02:05.558 -STEP: DeleteCollection of the DaemonSets 06/12/23 22:02:05.575 -STEP: Verify that ReplicaSets have been deleted 06/12/23 22:02:05.597 -[AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 -Jun 12 22:02:05.663: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"127365"},"items":null} - -Jun 12 22:02:05.695: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"127367"},"items":[{"metadata":{"name":"daemon-set-7flpb","generateName":"daemon-set-","namespace":"daemonsets-7024","uid":"5fc9bbbb-8852-4084-b6ac-336803c1e0ab","resourceVersion":"127365","creationTimestamp":"2023-06-12T22:01:59Z","deletionTimestamp":"2023-06-12T22:02:35Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"5e8e966117f0418de3a49417770dc53fd2abc9ec6d437777abee266ca20b4698","cni.projectcalico.org/podIP":"172.30.224.28/32","cni.projectcalico.org/podIPs":"172.30.224.28/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.30.224.28\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"575a4311-7b80-498c-a488-11013e22af99","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:01:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"575a4311-7b80-498c-a488-11013e22af99\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-5djl5","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-5djl5","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.138.75.70","securityContext":{"seLinuxOptions":{"level":"s0:c60,c40"}},"imagePullSecrets":[{"name"
:"default-dockercfg-6jnsk"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.138.75.70"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:01Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:01Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"}],"hostIP":"10.138.75.70","podIP":"172.30.224.28","podIPs":[{"ip":"172.30.224.28"}],"startTime":"2023-06-12T22:01:59Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-06-12T22:02:01Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://8beffedf01690c75610314e0e8633682ad19928f672a8cf10411f41c7bf02d51","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-d5xxm","generateName":"daemon-set-","namespace":"daemonsets-7024","uid":"d16dd352-7199-4f0a-a728-e2f54c8722c3","resourceVersion":"127363","creationTimestamp":"2023-06-12T22:01:59Z","deletionTimestamp":"2023-06-12T22:02:35Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"844129b09fd9bfd763f813e84104874e30b0c5aa131d66461774e9b135a61117","cni.projectcalico.org/podIP":"172.30.185.119/32","cni.projectcalico.org/podIPs":"172.30.185.119/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.30.185.119\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"575a4311-7b80-498c-a488-11013e22af99","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:01:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"575a4311-7b80-498c-a488-11013e22af99\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.185.119\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-vvfb4","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-vvfb4","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.138.75.116","securityContext":{"seLinuxOptions":{"level":"s0:c60,c40"}},"imagePullSecrets":[{"nam
e":"default-dockercfg-6jnsk"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.138.75.116"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:04Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:04Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"}],"hostIP":"10.138.75.116","podIP":"172.30.185.119","podIPs":[{"ip":"172.30.185.119"}],"startTime":"2023-06-12T22:01:59Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-06-12T22:02:03Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://4f903940fd6f05fed0072b4ea898324ff55d17f74f574b7ad334a8beed2d83ae","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-jbmnd","generateName":"daemon-set-","namespace":"daemonsets-7024","uid":"18643af6-f675-407e-ba39-1ef11c4c36e8","resourceVersion":"127362","creationTimestamp":"2023-06-12T22:01:59Z","deletionTimestamp":"2023-06-12T22:02:35Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"978e6211c12546038a02759360a45755240840472406a886e78bb8d4250dc27e","cni.projectcalico.org/podIP":"172.30.161.102/32","cni.projectcalico.org/podIPs":"172.30.161.102/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.30.161.102\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"575a4311-7b80-498c-a488-11013e22af99","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:01:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"575a4311-7b80-498c-a488-11013e22af99\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.161.102\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-wdl5s","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-wdl5s","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.138.75.112","securityContext":{"seLinuxOptions":{"level":"s0:c60,c40"}},"imagePullSecrets":[{"nam
e":"default-dockercfg-6jnsk"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.138.75.112"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:01Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:01Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"}],"hostIP":"10.138.75.112","podIP":"172.30.161.102","podIPs":[{"ip":"172.30.161.102"}],"startTime":"2023-06-12T22:01:59Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-06-12T22:02:01Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://23ad6e4af3d99bffab611761faedf29f609b0afb6c0d436bf7459175a5573805","started":true}],"qosClass":"BestEffort"}}]} - -[AfterEach] [sig-apps] Daemon set [Serial] +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-5940 07/27/23 02:40:28.253 +[It] should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 +STEP: Creating statefulset ss in namespace statefulset-5940 07/27/23 02:40:28.289 +Jul 27 02:40:28.332: INFO: Found 0 stateful pods, waiting for 1 +Jul 27 02:40:38.344: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource 07/27/23 02:40:38.368 +STEP: updating a scale subresource 07/27/23 02:40:38.383 +STEP: verifying the statefulset Spec.Replicas was modified 07/27/23 02:40:38.403 +STEP: Patch a scale subresource 07/27/23 02:40:38.419 +STEP: verifying the statefulset Spec.Replicas was modified 07/27/23 02:40:38.47 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jul 27 02:40:38.484: INFO: Deleting all statefulset in ns statefulset-5940 +Jul 27 02:40:38.498: INFO: Scaling statefulset ss to 0 +Jul 27 02:40:48.604: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 02:40:48.648: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 -Jun 12 22:02:05.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +Jul 27 02:40:48.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] 
[sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +[DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +[DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 -STEP: Destroying namespace "daemonsets-7024" for this suite. 06/12/23 22:02:05.818 +STEP: Destroying namespace "statefulset-5940" for this suite. 07/27/23 02:40:48.8 ------------------------------ -• [SLOW TEST] [6.655 seconds] -[sig-apps] Daemon set [Serial] +• [SLOW TEST] [20.731 seconds] +[sig-apps] StatefulSet test/e2e/apps/framework.go:23 - should list and delete a collection of DaemonSets [Conformance] - test/e2e/apps/daemon_set.go:823 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Daemon set [Serial] + [BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:01:59.217 - Jun 12 22:01:59.217: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename daemonsets 06/12/23 22:01:59.22 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:01:59.27 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:01:59.305 - [BeforeEach] [sig-apps] Daemon set [Serial] + STEP: Creating a kubernetes client 07/27/23 02:40:28.172 + Jul 27 02:40:28.172: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename statefulset 07/27/23 02:40:28.173 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:28.226 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:28.241 + [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 - [It] should list and delete a collection of DaemonSets [Conformance] - test/e2e/apps/daemon_set.go:823 - STEP: Creating simple DaemonSet "daemon-set" 06/12/23 22:01:59.435 - STEP: Check that daemon pods launch on every node of the cluster. 
06/12/23 22:01:59.464 - Jun 12 22:01:59.509: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 22:01:59.509: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 22:02:00.673: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 22:02:00.673: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 22:02:01.685: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 22:02:01.685: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 22:02:02.652: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 22:02:02.652: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 22:02:03.657: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 22:02:03.657: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 22:02:04.543: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 22:02:04.543: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 22:02:05.546: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 - Jun 12 22:02:05.546: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set - STEP: listing all DeamonSets 06/12/23 22:02:05.558 - STEP: DeleteCollection of the DaemonSets 06/12/23 22:02:05.575 - STEP: Verify that ReplicaSets have been deleted 06/12/23 22:02:05.597 - [AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 - Jun 12 22:02:05.663: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"127365"},"items":null} - - Jun 12 22:02:05.695: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"127367"},"items":[{"metadata":{"name":"daemon-set-7flpb","generateName":"daemon-set-","namespace":"daemonsets-7024","uid":"5fc9bbbb-8852-4084-b6ac-336803c1e0ab","resourceVersion":"127365","creationTimestamp":"2023-06-12T22:01:59Z","deletionTimestamp":"2023-06-12T22:02:35Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"5e8e966117f0418de3a49417770dc53fd2abc9ec6d437777abee266ca20b4698","cni.projectcalico.org/podIP":"172.30.224.28/32","cni.projectcalico.org/podIPs":"172.30.224.28/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.30.224.28\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"575a4311-7b80-498c-a488-11013e22af99","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:01:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"575a4311-7b80-498c-a488-11013e22af99\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-5djl5","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-5djl5","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.138.75.70","securityContext":{"seLinuxOptions":{"level":"s0:c60,c40"}},"imagePullSecrets":[{"name"
:"default-dockercfg-6jnsk"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.138.75.70"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:01Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:01Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"}],"hostIP":"10.138.75.70","podIP":"172.30.224.28","podIPs":[{"ip":"172.30.224.28"}],"startTime":"2023-06-12T22:01:59Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-06-12T22:02:01Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://8beffedf01690c75610314e0e8633682ad19928f672a8cf10411f41c7bf02d51","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-d5xxm","generateName":"daemon-set-","namespace":"daemonsets-7024","uid":"d16dd352-7199-4f0a-a728-e2f54c8722c3","resourceVersion":"127363","creationTimestamp":"2023-06-12T22:01:59Z","deletionTimestamp":"2023-06-12T22:02:35Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"844129b09fd9bfd763f813e84104874e30b0c5aa131d66461774e9b135a61117","cni.projectcalico.org/podIP":"172.30.185.119/32","cni.projectcalico.org/podIPs":"172.30.185.119/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.30.185.119\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"575a4311-7b80-498c-a488-11013e22af99","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:01:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"575a4311-7b80-498c-a488-11013e22af99\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.185.119\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-vvfb4","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-vvfb4","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.138.75.116","securityContext":{"seLinuxOptions":{"level":"s0:c60,c40"}},"imagePullSecrets":[{"nam
e":"default-dockercfg-6jnsk"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.138.75.116"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:04Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:04Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"}],"hostIP":"10.138.75.116","podIP":"172.30.185.119","podIPs":[{"ip":"172.30.185.119"}],"startTime":"2023-06-12T22:01:59Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-06-12T22:02:03Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://4f903940fd6f05fed0072b4ea898324ff55d17f74f574b7ad334a8beed2d83ae","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-jbmnd","generateName":"daemon-set-","namespace":"daemonsets-7024","uid":"18643af6-f675-407e-ba39-1ef11c4c36e8","resourceVersion":"127362","creationTimestamp":"2023-06-12T22:01:59Z","deletionTimestamp":"2023-06-12T22:02:35Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"978e6211c12546038a02759360a45755240840472406a886e78bb8d4250dc27e","cni.projectcalico.org/podIP":"172.30.161.102/32","cni.projectcalico.org/podIPs":"172.30.161.102/32","k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.30.161.102\"\n ],\n \"default\": true,\n \"dns\": 
{}\n}]","openshift.io/scc":"anyuid"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"575a4311-7b80-498c-a488-11013e22af99","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:01:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"575a4311-7b80-498c-a488-11013e22af99\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.161.102\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-06-12T22:02:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-wdl5s","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}},{"configMap":{"name":"openshift-service-ca.crt","items":[{"key":"service-ca.crt","path":"service-ca.crt"}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-wdl5s","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["MKNOD"]}}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.138.75.112","securityContext":{"seLinuxOptions":{"level":"s0:c60,c40"}},"imagePullSecrets":[{"nam
e":"default-dockercfg-6jnsk"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.138.75.112"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:01Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:02:01Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-06-12T22:01:59Z"}],"hostIP":"10.138.75.112","podIP":"172.30.161.102","podIPs":[{"ip":"172.30.161.102"}],"startTime":"2023-06-12T22:01:59Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-06-12T22:02:01Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"cri-o://23ad6e4af3d99bffab611761faedf29f609b0afb6c0d436bf7459175a5573805","started":true}],"qosClass":"BestEffort"}}]} - - [AfterEach] [sig-apps] Daemon set [Serial] + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-5940 07/27/23 02:40:28.253 + [It] should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 + STEP: Creating statefulset ss in namespace statefulset-5940 07/27/23 02:40:28.289 + Jul 27 02:40:28.332: INFO: Found 0 stateful pods, waiting for 1 + Jul 27 02:40:38.344: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: getting scale subresource 07/27/23 02:40:38.368 + STEP: updating a scale subresource 07/27/23 02:40:38.383 + STEP: verifying the statefulset Spec.Replicas was modified 07/27/23 02:40:38.403 + STEP: Patch a scale subresource 07/27/23 02:40:38.419 + STEP: verifying the statefulset Spec.Replicas was modified 07/27/23 02:40:38.47 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jul 27 02:40:38.484: INFO: Deleting all statefulset in ns statefulset-5940 + Jul 27 02:40:38.498: INFO: Scaling statefulset ss to 0 + Jul 27 02:40:48.604: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 02:40:48.648: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 - Jun 12 22:02:05.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + Jul 27 02:40:48.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + 
[DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 - STEP: Destroying namespace "daemonsets-7024" for this suite. 06/12/23 22:02:05.818 + STEP: Destroying namespace "statefulset-5940" for this suite. 07/27/23 02:40:48.8 << End Captured GinkgoWriter Output ------------------------------ -SSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] ResourceQuota - should be able to update and delete ResourceQuota. [Conformance] - test/e2e/apimachinery/resource_quota.go:884 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 +[BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:02:05.874 -Jun 12 22:02:05.874: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 22:02:05.879 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:02:05.954 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:02:05.974 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 02:40:48.905 +Jul 27 02:40:48.905: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:40:48.906 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:49.074 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:49.088 +[BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 -[It] should be able to update and delete ResourceQuota. [Conformance] - test/e2e/apimachinery/resource_quota.go:884 -STEP: Creating a ResourceQuota 06/12/23 22:02:05.994 -STEP: Getting a ResourceQuota 06/12/23 22:02:06.018 -STEP: Updating a ResourceQuota 06/12/23 22:02:06.037 -STEP: Verifying a ResourceQuota was modified 06/12/23 22:02:06.083 -STEP: Deleting a ResourceQuota 06/12/23 22:02:06.116 -STEP: Verifying the deleted ResourceQuota 06/12/23 22:02:06.161 -[AfterEach] [sig-api-machinery] ResourceQuota +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 +Jul 27 02:40:49.136: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node +STEP: Creating secret with name s-test-opt-del-8a66ae26-95ba-448d-a74a-a401fd5f325f 07/27/23 02:40:49.136 +STEP: Creating secret with name s-test-opt-upd-81803056-e2b1-4763-bd6d-4c0a59e6a8e1 07/27/23 02:40:49.156 +STEP: Creating the pod 07/27/23 02:40:49.174 +Jul 27 02:40:49.213: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227" in namespace "projected-1731" to be "running and ready" +Jul 27 02:40:49.224: INFO: Pod "pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.096018ms +Jul 27 02:40:49.224: INFO: The phase of Pod pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:40:51.235: INFO: Pod "pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227": Phase="Running", Reason="", readiness=true. Elapsed: 2.022045272s +Jul 27 02:40:51.235: INFO: The phase of Pod pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227 is Running (Ready = true) +Jul 27 02:40:51.235: INFO: Pod "pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227" satisfied condition "running and ready" +STEP: Deleting secret s-test-opt-del-8a66ae26-95ba-448d-a74a-a401fd5f325f 07/27/23 02:40:51.306 +STEP: Updating secret s-test-opt-upd-81803056-e2b1-4763-bd6d-4c0a59e6a8e1 07/27/23 02:40:51.32 +STEP: Creating secret with name s-test-opt-create-88c4456f-1fad-43da-87a1-7deeb060b634 07/27/23 02:40:51.333 +STEP: waiting to observe update in volume 07/27/23 02:40:51.346 +[AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 -Jun 12 22:02:06.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 02:40:53.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-2148" for this suite. 06/12/23 22:02:06.225 +STEP: Destroying namespace "projected-1731" for this suite. 07/27/23 02:40:53.441 ------------------------------ -• [0.372 seconds] -[sig-api-machinery] ResourceQuota -test/e2e/apimachinery/framework.go:23 - should be able to update and delete ResourceQuota. [Conformance] - test/e2e/apimachinery/resource_quota.go:884 +• [4.565 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:02:05.874 - Jun 12 22:02:05.874: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 22:02:05.879 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:02:05.954 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:02:05.974 - [BeforeEach] [sig-api-machinery] ResourceQuota + STEP: Creating a kubernetes client 07/27/23 02:40:48.905 + Jul 27 02:40:48.905: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:40:48.906 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:49.074 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:49.088 + [BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 - [It] should be able to update and delete ResourceQuota. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:884 - STEP: Creating a ResourceQuota 06/12/23 22:02:05.994 - STEP: Getting a ResourceQuota 06/12/23 22:02:06.018 - STEP: Updating a ResourceQuota 06/12/23 22:02:06.037 - STEP: Verifying a ResourceQuota was modified 06/12/23 22:02:06.083 - STEP: Deleting a ResourceQuota 06/12/23 22:02:06.116 - STEP: Verifying the deleted ResourceQuota 06/12/23 22:02:06.161 - [AfterEach] [sig-api-machinery] ResourceQuota + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 + Jul 27 02:40:49.136: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node + STEP: Creating secret with name s-test-opt-del-8a66ae26-95ba-448d-a74a-a401fd5f325f 07/27/23 02:40:49.136 + STEP: Creating secret with name s-test-opt-upd-81803056-e2b1-4763-bd6d-4c0a59e6a8e1 07/27/23 02:40:49.156 + STEP: Creating the pod 07/27/23 02:40:49.174 + Jul 27 02:40:49.213: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227" in namespace "projected-1731" to be "running and ready" + Jul 27 02:40:49.224: INFO: Pod "pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227": Phase="Pending", Reason="", readiness=false. Elapsed: 11.096018ms + Jul 27 02:40:49.224: INFO: The phase of Pod pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:40:51.235: INFO: Pod "pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227": Phase="Running", Reason="", readiness=true. Elapsed: 2.022045272s + Jul 27 02:40:51.235: INFO: The phase of Pod pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227 is Running (Ready = true) + Jul 27 02:40:51.235: INFO: Pod "pod-projected-secrets-afd8e306-dc45-482c-b3c2-7598e1224227" satisfied condition "running and ready" + STEP: Deleting secret s-test-opt-del-8a66ae26-95ba-448d-a74a-a401fd5f325f 07/27/23 02:40:51.306 + STEP: Updating secret s-test-opt-upd-81803056-e2b1-4763-bd6d-4c0a59e6a8e1 07/27/23 02:40:51.32 + STEP: Creating secret with name s-test-opt-create-88c4456f-1fad-43da-87a1-7deeb060b634 07/27/23 02:40:51.333 + STEP: waiting to observe update in volume 07/27/23 02:40:51.346 + [AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 - Jun 12 22:02:06.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + Jul 27 02:40:53.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-2148" for this suite. 06/12/23 22:02:06.225 + STEP: Destroying namespace "projected-1731" for this suite. 
07/27/23 02:40:53.441 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-node] NoExecuteTaintManager Single Pod [Serial] - removing taint cancels eviction [Disruptive] [Conformance] - test/e2e/node/taints.go:293 -[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 +[BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:02:06.274 -Jun 12 22:02:06.274: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename taint-single-pod 06/12/23 22:02:06.278 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:02:06.335 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:02:06.373 -[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] +STEP: Creating a kubernetes client 07/27/23 02:40:53.47 +Jul 27 02:40:53.470: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 02:40:53.471 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:53.518 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:53.53 +[BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] - test/e2e/node/taints.go:170 -Jun 12 22:02:06.404: INFO: Waiting up to 1m0s for all nodes to be ready -Jun 12 22:03:06.707: INFO: Waiting for terminating namespaces to be deleted... -[It] removing taint cancels eviction [Disruptive] [Conformance] - test/e2e/node/taints.go:293 -Jun 12 22:03:06.736: INFO: Starting informer... -STEP: Starting pod... 06/12/23 22:03:06.736 -Jun 12 22:03:07.014: INFO: Pod is running on 10.138.75.70. Tainting Node -STEP: Trying to apply a taint on the Node 06/12/23 22:03:07.014 -STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 06/12/23 22:03:07.119 -STEP: Waiting short time to make sure Pod is queued for deletion 06/12/23 22:03:07.14 -Jun 12 22:03:07.140: INFO: Pod wasn't evicted. Proceeding -Jun 12 22:03:07.140: INFO: Removing taint from Node -STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 06/12/23 22:03:07.271 -STEP: Waiting some time to make sure that toleration time passed. 06/12/23 22:03:07.34 -Jun 12 22:04:22.341: INFO: Pod wasn't evicted. Test successful -[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 +STEP: Creating secret with name secret-test-68b19121-da16-4d92-a36d-c62dadf0b7d2 07/27/23 02:40:53.543 +STEP: Creating a pod to test consume secrets 07/27/23 02:40:53.658 +Jul 27 02:40:53.686: INFO: Waiting up to 5m0s for pod "pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef" in namespace "secrets-6243" to be "Succeeded or Failed" +Jul 27 02:40:53.695: INFO: Pod "pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef": Phase="Pending", Reason="", readiness=false. Elapsed: 9.374175ms +Jul 27 02:40:55.707: INFO: Pod "pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020856049s +Jul 27 02:40:57.707: INFO: Pod "pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021270417s +STEP: Saw pod success 07/27/23 02:40:57.707 +Jul 27 02:40:57.708: INFO: Pod "pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef" satisfied condition "Succeeded or Failed" +Jul 27 02:40:57.725: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef container secret-volume-test: +STEP: delete the pod 07/27/23 02:40:57.744 +Jul 27 02:40:57.768: INFO: Waiting for pod pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef to disappear +Jul 27 02:40:57.777: INFO: Pod pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef no longer exists +[AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 22:04:22.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] +Jul 27 02:40:57.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] +[DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] +[DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "taint-single-pod-3074" for this suite. 06/12/23 22:04:22.391 +STEP: Destroying namespace "secrets-6243" for this suite. 07/27/23 02:40:57.792 ------------------------------ -• [SLOW TEST] [136.135 seconds] -[sig-node] NoExecuteTaintManager Single Pod [Serial] -test/e2e/node/framework.go:23 - removing taint cancels eviction [Disruptive] [Conformance] - test/e2e/node/taints.go:293 +• [4.345 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + [BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:02:06.274 - Jun 12 22:02:06.274: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename taint-single-pod 06/12/23 22:02:06.278 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:02:06.335 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:02:06.373 - [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + STEP: Creating a kubernetes client 07/27/23 02:40:53.47 + Jul 27 02:40:53.470: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 02:40:53.471 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:53.518 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:53.53 + [BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] - test/e2e/node/taints.go:170 - Jun 12 22:02:06.404: INFO: Waiting up to 1m0s for all nodes to be ready - Jun 12 22:03:06.707: INFO: Waiting for terminating namespaces to be deleted... 
- [It] removing taint cancels eviction [Disruptive] [Conformance] - test/e2e/node/taints.go:293 - Jun 12 22:03:06.736: INFO: Starting informer... - STEP: Starting pod... 06/12/23 22:03:06.736 - Jun 12 22:03:07.014: INFO: Pod is running on 10.138.75.70. Tainting Node - STEP: Trying to apply a taint on the Node 06/12/23 22:03:07.014 - STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 06/12/23 22:03:07.119 - STEP: Waiting short time to make sure Pod is queued for deletion 06/12/23 22:03:07.14 - Jun 12 22:03:07.140: INFO: Pod wasn't evicted. Proceeding - Jun 12 22:03:07.140: INFO: Removing taint from Node - STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 06/12/23 22:03:07.271 - STEP: Waiting some time to make sure that toleration time passed. 06/12/23 22:03:07.34 - Jun 12 22:04:22.341: INFO: Pod wasn't evicted. Test successful - [AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 + STEP: Creating secret with name secret-test-68b19121-da16-4d92-a36d-c62dadf0b7d2 07/27/23 02:40:53.543 + STEP: Creating a pod to test consume secrets 07/27/23 02:40:53.658 + Jul 27 02:40:53.686: INFO: Waiting up to 5m0s for pod "pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef" in namespace "secrets-6243" to be "Succeeded or Failed" + Jul 27 02:40:53.695: INFO: Pod "pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef": Phase="Pending", Reason="", readiness=false. Elapsed: 9.374175ms + Jul 27 02:40:55.707: INFO: Pod "pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020856049s + Jul 27 02:40:57.707: INFO: Pod "pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021270417s + STEP: Saw pod success 07/27/23 02:40:57.707 + Jul 27 02:40:57.708: INFO: Pod "pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef" satisfied condition "Succeeded or Failed" + Jul 27 02:40:57.725: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef container secret-volume-test: + STEP: delete the pod 07/27/23 02:40:57.744 + Jul 27 02:40:57.768: INFO: Waiting for pod pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef to disappear + Jul 27 02:40:57.777: INFO: Pod pod-secrets-68955e44-809c-496c-94ee-b0e14d8a84ef no longer exists + [AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 22:04:22.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + Jul 27 02:40:57.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + [DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + [DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "taint-single-pod-3074" for this suite. 06/12/23 22:04:22.391 + STEP: Destroying namespace "secrets-6243" for this suite. 
07/27/23 02:40:57.792 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Downward API - should provide pod UID as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:267 -[BeforeEach] [sig-node] Downward API +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. [Conformance] + test/e2e/apimachinery/resource_quota.go:690 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:04:22.414 -Jun 12 22:04:22.431: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 22:04:22.434 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:04:22.481 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:04:22.496 -[BeforeEach] [sig-node] Downward API +STEP: Creating a kubernetes client 07/27/23 02:40:57.816 +Jul 27 02:40:57.816: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 02:40:57.817 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:57.86 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:57.871 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[It] should provide pod UID as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:267 -STEP: Creating a pod to test downward api env vars 06/12/23 22:04:22.51 -Jun 12 22:04:22.599: INFO: Waiting up to 5m0s for pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c" in namespace "downward-api-912" to be "Succeeded or Failed" -Jun 12 22:04:22.612: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.623347ms -Jun 12 22:04:24.624: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025445586s -Jun 12 22:04:26.630: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030830508s -Jun 12 22:04:28.639: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0401232s -Jun 12 22:04:30.625: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02624757s -STEP: Saw pod success 06/12/23 22:04:30.625 -Jun 12 22:04:30.626: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c" satisfied condition "Succeeded or Failed" -Jun 12 22:04:30.684: INFO: Trying to get logs from node 10.138.75.70 pod downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c container dapi-container: -STEP: delete the pod 06/12/23 22:04:30.742 -Jun 12 22:04:30.777: INFO: Waiting for pod downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c to disappear -Jun 12 22:04:30.788: INFO: Pod downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c no longer exists -[AfterEach] [sig-node] Downward API +[It] should verify ResourceQuota with terminating scopes. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:690 +STEP: Creating a ResourceQuota with terminating scope 07/27/23 02:40:57.884 +STEP: Ensuring ResourceQuota status is calculated 07/27/23 02:40:57.9 +STEP: Creating a ResourceQuota with not terminating scope 07/27/23 02:40:59.922 +STEP: Ensuring ResourceQuota status is calculated 07/27/23 02:40:59.94 +STEP: Creating a long running pod 07/27/23 02:41:01.95 +STEP: Ensuring resource quota with not terminating scope captures the pod usage 07/27/23 02:41:01.988 +STEP: Ensuring resource quota with terminating scope ignored the pod usage 07/27/23 02:41:04.002 +STEP: Deleting the pod 07/27/23 02:41:06.013 +STEP: Ensuring resource quota status released the pod usage 07/27/23 02:41:06.035 +STEP: Creating a terminating pod 07/27/23 02:41:08.046 +STEP: Ensuring resource quota with terminating scope captures the pod usage 07/27/23 02:41:08.076 +STEP: Ensuring resource quota with not terminating scope ignored the pod usage 07/27/23 02:41:10.1 +STEP: Deleting the pod 07/27/23 02:41:12.111 +STEP: Ensuring resource quota status released the pod usage 07/27/23 02:41:12.13 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 22:04:30.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Downward API +Jul 27 02:41:14.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Downward API +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Downward API +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-912" for this suite. 06/12/23 22:04:30.854 +STEP: Destroying namespace "resourcequota-9368" for this suite. 07/27/23 02:41:14.198 ------------------------------ -• [SLOW TEST] [8.471 seconds] -[sig-node] Downward API -test/e2e/common/node/framework.go:23 - should provide pod UID as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:267 +• [SLOW TEST] [16.406 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with terminating scopes. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:690 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Downward API + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:04:22.414 - Jun 12 22:04:22.431: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 22:04:22.434 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:04:22.481 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:04:22.496 - [BeforeEach] [sig-node] Downward API + STEP: Creating a kubernetes client 07/27/23 02:40:57.816 + Jul 27 02:40:57.816: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 02:40:57.817 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:40:57.86 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:40:57.871 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [It] should provide pod UID as env vars [NodeConformance] [Conformance] - test/e2e/common/node/downwardapi.go:267 - STEP: Creating a pod to test downward api env vars 06/12/23 22:04:22.51 - Jun 12 22:04:22.599: INFO: Waiting up to 5m0s for pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c" in namespace "downward-api-912" to be "Succeeded or Failed" - Jun 12 22:04:22.612: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.623347ms - Jun 12 22:04:24.624: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025445586s - Jun 12 22:04:26.630: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030830508s - Jun 12 22:04:28.639: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0401232s - Jun 12 22:04:30.625: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02624757s - STEP: Saw pod success 06/12/23 22:04:30.625 - Jun 12 22:04:30.626: INFO: Pod "downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c" satisfied condition "Succeeded or Failed" - Jun 12 22:04:30.684: INFO: Trying to get logs from node 10.138.75.70 pod downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c container dapi-container: - STEP: delete the pod 06/12/23 22:04:30.742 - Jun 12 22:04:30.777: INFO: Waiting for pod downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c to disappear - Jun 12 22:04:30.788: INFO: Pod downward-api-552f62b5-ee36-45a2-9f97-21ac31bd837c no longer exists - [AfterEach] [sig-node] Downward API + [It] should verify ResourceQuota with terminating scopes. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:690 + STEP: Creating a ResourceQuota with terminating scope 07/27/23 02:40:57.884 + STEP: Ensuring ResourceQuota status is calculated 07/27/23 02:40:57.9 + STEP: Creating a ResourceQuota with not terminating scope 07/27/23 02:40:59.922 + STEP: Ensuring ResourceQuota status is calculated 07/27/23 02:40:59.94 + STEP: Creating a long running pod 07/27/23 02:41:01.95 + STEP: Ensuring resource quota with not terminating scope captures the pod usage 07/27/23 02:41:01.988 + STEP: Ensuring resource quota with terminating scope ignored the pod usage 07/27/23 02:41:04.002 + STEP: Deleting the pod 07/27/23 02:41:06.013 + STEP: Ensuring resource quota status released the pod usage 07/27/23 02:41:06.035 + STEP: Creating a terminating pod 07/27/23 02:41:08.046 + STEP: Ensuring resource quota with terminating scope captures the pod usage 07/27/23 02:41:08.076 + STEP: Ensuring resource quota with not terminating scope ignored the pod usage 07/27/23 02:41:10.1 + STEP: Deleting the pod 07/27/23 02:41:12.111 + STEP: Ensuring resource quota status released the pod usage 07/27/23 02:41:12.13 + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 22:04:30.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Downward API + Jul 27 02:41:14.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Downward API + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Downward API + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-912" for this suite. 06/12/23 22:04:30.854 + STEP: Destroying namespace "resourcequota-9368" for this suite. 
07/27/23 02:41:14.198 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-cli] Kubectl client Kubectl version - should check is all data is printed [Conformance] - test/e2e/kubectl/kubectl.go:1685 -[BeforeEach] [sig-cli] Kubectl client +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 +[BeforeEach] [sig-node] Security Context set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:04:30.89 -Jun 12 22:04:30.890: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 22:04:30.892 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:04:30.968 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:04:30.983 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:41:14.222 +Jul 27 02:41:14.222: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename security-context-test 07/27/23 02:41:14.223 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:14.294 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:14.34 +[BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should check is all data is printed [Conformance] - test/e2e/kubectl/kubectl.go:1685 -Jun 12 22:04:31.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-2321 version' -Jun 12 22:04:31.185: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.\n" -Jun 12 22:04:31.185: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.3\", GitCommit:\"9e644106593f3f4aa98f8a84b23db5fa378900bd\", GitTreeState:\"clean\", BuildDate:\"2023-03-15T13:40:17Z\", GoVersion:\"go1.19.7\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.7\nServer Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.3+b404935\", GitCommit:\"9ded806a7d5deed41a20f680cf89dae58bbb5697\", GitTreeState:\"clean\", BuildDate:\"2023-04-19T02:20:48Z\", GoVersion:\"go1.19.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" -[AfterEach] [sig-cli] Kubectl client +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 +Jul 27 02:41:14.401: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d2f5d58d-ba3a-48af-a622-83699564f4dc" in namespace "security-context-test-2147" to be "Succeeded or Failed" +Jul 27 02:41:14.414: INFO: Pod "busybox-user-65534-d2f5d58d-ba3a-48af-a622-83699564f4dc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.60015ms +Jul 27 02:41:16.427: INFO: Pod "busybox-user-65534-d2f5d58d-ba3a-48af-a622-83699564f4dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026149905s +Jul 27 02:41:18.427: INFO: Pod "busybox-user-65534-d2f5d58d-ba3a-48af-a622-83699564f4dc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02626229s +Jul 27 02:41:18.427: INFO: Pod "busybox-user-65534-d2f5d58d-ba3a-48af-a622-83699564f4dc" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 -Jun 12 22:04:31.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:41:18.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-2321" for this suite. 06/12/23 22:04:31.202 +STEP: Destroying namespace "security-context-test-2147" for this suite. 07/27/23 02:41:18.441 ------------------------------ -• [0.333 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl version - test/e2e/kubectl/kubectl.go:1679 - should check is all data is printed [Conformance] - test/e2e/kubectl/kubectl.go:1685 +• [4.242 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a container with runAsUser + test/e2e/common/node/security_context.go:309 + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-node] Security Context set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:04:30.89 - Jun 12 22:04:30.890: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 22:04:30.892 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:04:30.968 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:04:30.983 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:41:14.222 + Jul 27 02:41:14.222: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename security-context-test 07/27/23 02:41:14.223 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:14.294 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:14.34 + [BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should check is all data is printed [Conformance] - test/e2e/kubectl/kubectl.go:1685 - Jun 12 22:04:31.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-2321 version' - Jun 12 22:04:31.185: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. 
Use --output=yaml|json to get the full version.\n" - Jun 12 22:04:31.185: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.3\", GitCommit:\"9e644106593f3f4aa98f8a84b23db5fa378900bd\", GitTreeState:\"clean\", BuildDate:\"2023-03-15T13:40:17Z\", GoVersion:\"go1.19.7\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.7\nServer Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.3+b404935\", GitCommit:\"9ded806a7d5deed41a20f680cf89dae58bbb5697\", GitTreeState:\"clean\", BuildDate:\"2023-04-19T02:20:48Z\", GoVersion:\"go1.19.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" - [AfterEach] [sig-cli] Kubectl client + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 + Jul 27 02:41:14.401: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d2f5d58d-ba3a-48af-a622-83699564f4dc" in namespace "security-context-test-2147" to be "Succeeded or Failed" + Jul 27 02:41:14.414: INFO: Pod "busybox-user-65534-d2f5d58d-ba3a-48af-a622-83699564f4dc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.60015ms + Jul 27 02:41:16.427: INFO: Pod "busybox-user-65534-d2f5d58d-ba3a-48af-a622-83699564f4dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026149905s + Jul 27 02:41:18.427: INFO: Pod "busybox-user-65534-d2f5d58d-ba3a-48af-a622-83699564f4dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02626229s + Jul 27 02:41:18.427: INFO: Pod "busybox-user-65534-d2f5d58d-ba3a-48af-a622-83699564f4dc" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 - Jun 12 22:04:31.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:41:18.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-2321" for this suite. 06/12/23 22:04:31.202 + STEP: Destroying namespace "security-context-test-2147" for this suite. 
07/27/23 02:41:18.441 << End Captured GinkgoWriter Output ------------------------------ -SS +S ------------------------------ -[sig-storage] Projected downwardAPI - should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:68 -[BeforeEach] [sig-storage] Projected downwardAPI +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 +[BeforeEach] version v1 set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:04:31.224 -Jun 12 22:04:31.224: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:04:31.229 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:04:31.281 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:04:31.293 -[BeforeEach] [sig-storage] Projected downwardAPI +STEP: Creating a kubernetes client 07/27/23 02:41:18.465 +Jul 27 02:41:18.465: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename proxy 07/27/23 02:41:18.466 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:18.509 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:18.521 +[BeforeEach] version v1 test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 -[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:68 -STEP: Creating a pod to test downward API volume plugin 06/12/23 22:04:31.309 -Jun 12 22:04:31.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f" in namespace "projected-8417" to be "Succeeded or Failed" -Jun 12 22:04:31.359: INFO: Pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.00719ms -Jun 12 22:04:33.374: INFO: Pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033539287s -Jun 12 22:04:35.371: INFO: Pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029730639s -Jun 12 22:04:37.427: INFO: Pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.086103672s -STEP: Saw pod success 06/12/23 22:04:37.427 -Jun 12 22:04:37.428: INFO: Pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f" satisfied condition "Succeeded or Failed" -Jun 12 22:04:37.439: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f container client-container: -STEP: delete the pod 06/12/23 22:04:37.464 -Jun 12 22:04:37.496: INFO: Waiting for pod downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f to disappear -Jun 12 22:04:37.506: INFO: Pod downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f no longer exists -[AfterEach] [sig-storage] Projected downwardAPI +[It] should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 +STEP: starting an echo server on multiple ports 07/27/23 02:41:18.591 +STEP: creating replication controller proxy-service-f2l4n in namespace proxy-1734 07/27/23 02:41:18.591 +I0727 02:41:18.618091 20 runners.go:193] Created replication controller with name: proxy-service-f2l4n, namespace: proxy-1734, replica count: 1 +I0727 02:41:19.669303 20 runners.go:193] proxy-service-f2l4n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0727 02:41:20.670080 20 runners.go:193] proxy-service-f2l4n Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jul 27 02:41:20.683: INFO: setup took 2.149799889s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts 07/27/23 02:41:20.683 +Jul 27 02:41:20.707: INFO: (0) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 24.296739ms) +Jul 27 02:41:20.713: INFO: (0) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 29.793899ms) +Jul 27 02:41:20.713: INFO: (0) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 30.083755ms) +Jul 27 02:41:20.715: INFO: (0) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 31.306438ms) +Jul 27 02:41:20.715: INFO: (0) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 32.220144ms) +Jul 27 02:41:20.716: INFO: (0) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 32.475613ms) +Jul 27 02:41:20.716: INFO: (0) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 33.197179ms) +Jul 27 02:41:20.716: INFO: (0) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 33.064743ms) +Jul 27 02:41:20.718: INFO: (0) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 34.484625ms) +Jul 27 02:41:20.719: INFO: (0) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 36.033752ms) +Jul 27 02:41:20.720: INFO: (0) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 36.258568ms) +Jul 27 02:41:20.731: INFO: (0) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 21.869596ms) +Jul 27 02:41:20.755: INFO: (1) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 21.806748ms) +Jul 27 02:41:20.756: INFO: (1) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... 
(200; 22.221223ms) +Jul 27 02:41:20.756: INFO: (1) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test (200; 22.534551ms) +Jul 27 02:41:20.756: INFO: (1) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 22.633546ms) +Jul 27 02:41:20.756: INFO: (1) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 23.082065ms) +Jul 27 02:41:20.762: INFO: (1) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 28.872398ms) +Jul 27 02:41:20.764: INFO: (1) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 30.876735ms) +Jul 27 02:41:20.764: INFO: (1) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 30.895087ms) +Jul 27 02:41:20.764: INFO: (1) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 31.200572ms) +Jul 27 02:41:20.765: INFO: (1) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 31.707813ms) +Jul 27 02:41:20.784: INFO: (2) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 18.926483ms) +Jul 27 02:41:20.795: INFO: (2) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 30.239401ms) +Jul 27 02:41:20.795: INFO: (2) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 30.217117ms) +Jul 27 02:41:20.795: INFO: (2) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 30.330128ms) +Jul 27 02:41:20.796: INFO: (2) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 22.306438ms) +Jul 27 02:41:20.827: INFO: (3) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 22.432593ms) +Jul 27 02:41:20.827: INFO: (3) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 22.193845ms) +Jul 27 02:41:20.828: INFO: (3) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 22.680961ms) +Jul 27 02:41:20.828: INFO: (3) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 22.733748ms) +Jul 27 02:41:20.828: INFO: (3) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 20.821132ms) +Jul 27 02:41:20.859: INFO: (4) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 22.98459ms) +Jul 27 02:41:20.860: INFO: (4) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 23.950536ms) +Jul 27 02:41:20.860: INFO: (4) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 23.953914ms) +Jul 27 02:41:20.860: INFO: (4) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test (200; 19.553598ms) +Jul 27 02:41:20.898: INFO: (5) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 19.773319ms) +Jul 27 02:41:20.898: INFO: (5) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 19.735983ms) +Jul 27 02:41:20.898: INFO: (5) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... 
(200; 19.819787ms) +Jul 27 02:41:20.898: INFO: (5) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 20.547857ms) +Jul 27 02:41:20.902: INFO: (5) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 23.841179ms) +Jul 27 02:41:20.904: INFO: (5) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 26.564356ms) +Jul 27 02:41:20.905: INFO: (5) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 27.116243ms) +Jul 27 02:41:20.905: INFO: (5) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 27.35158ms) +Jul 27 02:41:20.906: INFO: (5) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 27.743523ms) +Jul 27 02:41:20.922: INFO: (6) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 16.449278ms) +Jul 27 02:41:20.924: INFO: (6) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 18.669632ms) +Jul 27 02:41:20.924: INFO: (6) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 20.035915ms) +Jul 27 02:41:20.927: INFO: (6) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 20.520978ms) +Jul 27 02:41:20.927: INFO: (6) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 20.828015ms) +Jul 27 02:41:20.927: INFO: (6) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 20.989383ms) +Jul 27 02:41:20.931: INFO: (6) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 25.068618ms) +Jul 27 02:41:20.931: INFO: (6) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 24.81439ms) +Jul 27 02:41:20.933: INFO: (6) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 27.223785ms) +Jul 27 02:41:20.933: INFO: (6) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 27.662467ms) +Jul 27 02:41:20.934: INFO: (6) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 27.962511ms) +Jul 27 02:41:20.934: INFO: (6) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 27.90829ms) +Jul 27 02:41:20.954: INFO: (7) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 19.462807ms) +Jul 27 02:41:20.954: INFO: (7) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 19.317754ms) +Jul 27 02:41:20.955: INFO: (7) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 20.944425ms) +Jul 27 02:41:20.956: INFO: (7) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 21.25717ms) +Jul 27 02:41:20.956: INFO: (7) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 21.677566ms) +Jul 27 02:41:20.956: INFO: (7) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 21.772468ms) +Jul 27 02:41:20.956: INFO: (7) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 15.728726ms) +Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... 
(200; 22.956233ms) +Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test (200; 23.113325ms) +Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 23.157433ms) +Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 23.644483ms) +Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 23.684489ms) +Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 23.624471ms) +Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 23.664132ms) +Jul 27 02:41:20.988: INFO: (8) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 24.200156ms) +Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 39.233908ms) +Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 39.355351ms) +Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 39.326578ms) +Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 39.358654ms) +Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 39.307206ms) +Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 39.363929ms) +Jul 27 02:41:21.038: INFO: (9) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 34.44041ms) +Jul 27 02:41:21.044: INFO: (9) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 39.660696ms) +Jul 27 02:41:21.045: INFO: (9) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 40.280369ms) +Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 40.479957ms) +Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 42.395641ms) +Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 40.934961ms) +Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 41.959015ms) +Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 42.926405ms) +Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 47.155786ms) +Jul 27 02:41:21.118: INFO: (10) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 46.593709ms) +Jul 27 02:41:21.119: INFO: (10) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 47.043886ms) +Jul 27 02:41:21.119: INFO: (10) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... 
(200; 47.575994ms) +Jul 27 02:41:21.120: INFO: (10) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 48.278041ms) +Jul 27 02:41:21.121: INFO: (10) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 49.357307ms) +Jul 27 02:41:21.121: INFO: (10) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 46.44914ms) +Jul 27 02:41:21.212: INFO: (11) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 48.265936ms) +Jul 27 02:41:21.212: INFO: (11) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 47.788311ms) +Jul 27 02:41:21.212: INFO: (11) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 48.589932ms) +Jul 27 02:41:21.213: INFO: (11) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 48.786566ms) +Jul 27 02:41:21.213: INFO: (11) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 49.964235ms) +Jul 27 02:41:21.226: INFO: (11) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 62.099062ms) +Jul 27 02:41:21.226: INFO: (11) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 61.930748ms) +Jul 27 02:41:21.253: INFO: (11) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 88.97409ms) +Jul 27 02:41:21.253: INFO: (11) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 89.69931ms) +Jul 27 02:41:21.253: INFO: (11) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 89.498677ms) +Jul 27 02:41:21.256: INFO: (11) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 91.923484ms) +Jul 27 02:41:21.302: INFO: (12) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 46.962993ms) +Jul 27 02:41:21.305: INFO: (12) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 47.490143ms) +Jul 27 02:41:21.306: INFO: (12) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... 
(200; 49.320353ms) +Jul 27 02:41:21.307: INFO: (12) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 50.629486ms) +Jul 27 02:41:21.309: INFO: (12) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 52.149474ms) +Jul 27 02:41:21.312: INFO: (12) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 55.371477ms) +Jul 27 02:41:21.312: INFO: (12) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 54.990736ms) +Jul 27 02:41:21.316: INFO: (12) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 58.672179ms) +Jul 27 02:41:21.342: INFO: (12) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 85.213517ms) +Jul 27 02:41:21.346: INFO: (12) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 89.848843ms) +Jul 27 02:41:21.346: INFO: (12) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 89.297773ms) +Jul 27 02:41:21.347: INFO: (12) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 89.763849ms) +Jul 27 02:41:21.391: INFO: (13) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 43.821787ms) +Jul 27 02:41:21.394: INFO: (13) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 45.481337ms) +Jul 27 02:41:21.398: INFO: (13) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 50.623607ms) +Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 50.745815ms) +Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... 
(200; 51.326688ms) +Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 51.663964ms) +Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 51.783639ms) +Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 51.182131ms) +Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 51.053462ms) +Jul 27 02:41:21.402: INFO: (13) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 54.160678ms) +Jul 27 02:41:21.406: INFO: (13) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 57.819569ms) +Jul 27 02:41:21.426: INFO: (13) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 77.850973ms) +Jul 27 02:41:21.433: INFO: (13) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 84.85128ms) +Jul 27 02:41:21.433: INFO: (13) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 85.485435ms) +Jul 27 02:41:21.434: INFO: (13) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 86.385316ms) +Jul 27 02:41:21.481: INFO: (14) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 46.313431ms) +Jul 27 02:41:21.483: INFO: (14) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 48.147425ms) +Jul 27 02:41:21.483: INFO: (14) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 48.898764ms) +Jul 27 02:41:21.483: INFO: (14) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 48.807364ms) +Jul 27 02:41:21.483: INFO: (14) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 48.455953ms) +Jul 27 02:41:21.484: INFO: (14) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 49.442366ms) +Jul 27 02:41:21.484: INFO: (14) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test (200; 49.658205ms) +Jul 27 02:41:21.487: INFO: (14) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 52.772328ms) +Jul 27 02:41:21.487: INFO: (14) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... 
(200; 52.093694ms) +Jul 27 02:41:21.500: INFO: (14) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 65.383953ms) +Jul 27 02:41:21.500: INFO: (14) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 65.191124ms) +Jul 27 02:41:21.527: INFO: (14) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 92.563098ms) +Jul 27 02:41:21.531: INFO: (14) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 96.062106ms) +Jul 27 02:41:21.531: INFO: (14) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 96.589654ms) +Jul 27 02:41:21.531: INFO: (14) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 96.589811ms) +Jul 27 02:41:21.589: INFO: (15) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 58.380317ms) +Jul 27 02:41:21.590: INFO: (15) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 58.376699ms) +Jul 27 02:41:21.590: INFO: (15) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 58.647587ms) +Jul 27 02:41:21.590: INFO: (15) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 58.7016ms) +Jul 27 02:41:21.590: INFO: (15) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 59.372745ms) +Jul 27 02:41:21.591: INFO: (15) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 59.488853ms) +Jul 27 02:41:21.591: INFO: (15) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 60.381986ms) +Jul 27 02:41:21.599: INFO: (15) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 67.556137ms) +Jul 27 02:41:21.606: INFO: (15) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 74.631266ms) +Jul 27 02:41:21.630: INFO: (15) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 98.950892ms) +Jul 27 02:41:21.631: INFO: (15) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 99.452087ms) +Jul 27 02:41:21.631: INFO: (15) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 99.549987ms) +Jul 27 02:41:21.631: INFO: (15) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 99.761174ms) +Jul 27 02:41:21.647: INFO: (16) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 15.674002ms) +Jul 27 02:41:21.650: INFO: (16) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 18.65559ms) +Jul 27 02:41:21.650: INFO: (16) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 19.083115ms) +Jul 27 02:41:21.650: INFO: (16) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... 
(200; 19.393621ms) +Jul 27 02:41:21.651: INFO: (16) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 20.228961ms) +Jul 27 02:41:21.652: INFO: (16) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 20.487356ms) +Jul 27 02:41:21.652: INFO: (16) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 20.828851ms) +Jul 27 02:41:21.653: INFO: (16) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 21.229413ms) +Jul 27 02:41:21.653: INFO: (16) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 21.293919ms) +Jul 27 02:41:21.653: INFO: (16) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 21.781821ms) +Jul 27 02:41:21.653: INFO: (16) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test (200; 17.593091ms) +Jul 27 02:41:21.676: INFO: (17) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 17.69454ms) +Jul 27 02:41:21.677: INFO: (17) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 19.26123ms) +Jul 27 02:41:21.678: INFO: (17) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 20.29493ms) +Jul 27 02:41:21.678: INFO: (17) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 20.362433ms) +Jul 27 02:41:21.679: INFO: (17) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 21.034106ms) +Jul 27 02:41:21.679: INFO: (17) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 21.23863ms) +Jul 27 02:41:21.680: INFO: (17) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 21.43358ms) +Jul 27 02:41:21.680: INFO: (17) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 21.409744ms) +Jul 27 02:41:21.683: INFO: (17) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 24.587158ms) +Jul 27 02:41:21.684: INFO: (17) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 25.75596ms) +Jul 27 02:41:21.685: INFO: (17) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 26.899631ms) +Jul 27 02:41:21.685: INFO: (17) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 27.021122ms) +Jul 27 02:41:21.685: INFO: (17) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 27.174013ms) +Jul 27 02:41:21.686: INFO: (17) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 27.375001ms) +Jul 27 02:41:21.708: INFO: (18) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 22.067939ms) +Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... 
(200; 23.350879ms) +Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 23.153762ms) +Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 23.253716ms) +Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 23.34906ms) +Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 23.379041ms) +Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 23.431722ms) +Jul 27 02:41:21.710: INFO: (18) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 23.873768ms) +Jul 27 02:41:21.710: INFO: (18) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 18.73101ms) +Jul 27 02:41:21.740: INFO: (19) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 24.51348ms) +Jul 27 02:41:21.740: INFO: (19) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 23.840531ms) +Jul 27 02:41:21.740: INFO: (19) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 24.510631ms) +Jul 27 02:41:21.740: INFO: (19) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 24.303232ms) +Jul 27 02:41:21.741: INFO: (19) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 24.191305ms) +Jul 27 02:41:21.741: INFO: (19) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 24.87729ms) +Jul 27 02:41:21.741: INFO: (19) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... 
(200; 24.450802ms) +Jul 27 02:41:21.741: INFO: (19) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 25.530046ms) +Jul 27 02:41:21.744: INFO: (19) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 28.663557ms) +Jul 27 02:41:21.744: INFO: (19) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 28.563385ms) +Jul 27 02:41:21.747: INFO: (19) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 30.698791ms) +Jul 27 02:41:21.747: INFO: (19) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 30.965284ms) +Jul 27 02:41:21.747: INFO: (19) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 31.460336ms) +Jul 27 02:41:21.747: INFO: (19) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 30.890465ms) +STEP: deleting ReplicationController proxy-service-f2l4n in namespace proxy-1734, will wait for the garbage collector to delete the pods 07/27/23 02:41:21.747 +Jul 27 02:41:21.836: INFO: Deleting ReplicationController proxy-service-f2l4n took: 23.844387ms +Jul 27 02:41:21.937: INFO: Terminating ReplicationController proxy-service-f2l4n pods took: 100.63913ms +[AfterEach] version v1 test/e2e/framework/node/init/init.go:32 -Jun 12 22:04:37.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +Jul 27 02:41:23.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] version v1 test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] version v1 dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] version v1 tear down framework | framework.go:193 -STEP: Destroying namespace "projected-8417" for this suite. 06/12/23 22:04:37.522 +STEP: Destroying namespace "proxy-1734" for this suite. 
07/27/23 02:41:23.257 ------------------------------ -• [SLOW TEST] [6.314 seconds] -[sig-storage] Projected downwardAPI -test/e2e/common/storage/framework.go:23 - should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:68 +• [4.817 seconds] +[sig-network] Proxy +test/e2e/network/common/framework.go:23 + version v1 + test/e2e/network/proxy.go:74 + should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected downwardAPI + [BeforeEach] version v1 set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:04:31.224 - Jun 12 22:04:31.224: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:04:31.229 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:04:31.281 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:04:31.293 - [BeforeEach] [sig-storage] Projected downwardAPI + STEP: Creating a kubernetes client 07/27/23 02:41:18.465 + Jul 27 02:41:18.465: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename proxy 07/27/23 02:41:18.466 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:18.509 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:18.521 + [BeforeEach] version v1 test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 - [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:68 - STEP: Creating a pod to test downward API volume plugin 06/12/23 22:04:31.309 - Jun 12 22:04:31.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f" in namespace "projected-8417" to be "Succeeded or Failed" - Jun 12 22:04:31.359: INFO: Pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.00719ms - Jun 12 22:04:33.374: INFO: Pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033539287s - Jun 12 22:04:35.371: INFO: Pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029730639s - Jun 12 22:04:37.427: INFO: Pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.086103672s - STEP: Saw pod success 06/12/23 22:04:37.427 - Jun 12 22:04:37.428: INFO: Pod "downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f" satisfied condition "Succeeded or Failed" - Jun 12 22:04:37.439: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f container client-container: - STEP: delete the pod 06/12/23 22:04:37.464 - Jun 12 22:04:37.496: INFO: Waiting for pod downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f to disappear - Jun 12 22:04:37.506: INFO: Pod downwardapi-volume-50b64a01-23e7-4a09-aac5-f9db1914195f no longer exists - [AfterEach] [sig-storage] Projected downwardAPI + [It] should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 + STEP: starting an echo server on multiple ports 07/27/23 02:41:18.591 + STEP: creating replication controller proxy-service-f2l4n in namespace proxy-1734 07/27/23 02:41:18.591 + I0727 02:41:18.618091 20 runners.go:193] Created replication controller with name: proxy-service-f2l4n, namespace: proxy-1734, replica count: 1 + I0727 02:41:19.669303 20 runners.go:193] proxy-service-f2l4n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + I0727 02:41:20.670080 20 runners.go:193] proxy-service-f2l4n Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jul 27 02:41:20.683: INFO: setup took 2.149799889s, starting test cases + STEP: running 16 cases, 20 attempts per case, 320 total attempts 07/27/23 02:41:20.683 + Jul 27 02:41:20.707: INFO: (0) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 24.296739ms) + Jul 27 02:41:20.713: INFO: (0) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 29.793899ms) + Jul 27 02:41:20.713: INFO: (0) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 30.083755ms) + Jul 27 02:41:20.715: INFO: (0) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 31.306438ms) + Jul 27 02:41:20.715: INFO: (0) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 32.220144ms) + Jul 27 02:41:20.716: INFO: (0) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 32.475613ms) + Jul 27 02:41:20.716: INFO: (0) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 33.197179ms) + Jul 27 02:41:20.716: INFO: (0) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 33.064743ms) + Jul 27 02:41:20.718: INFO: (0) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 34.484625ms) + Jul 27 02:41:20.719: INFO: (0) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 36.033752ms) + Jul 27 02:41:20.720: INFO: (0) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 36.258568ms) + Jul 27 02:41:20.731: INFO: (0) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 21.869596ms) + Jul 27 02:41:20.755: INFO: (1) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 21.806748ms) + Jul 27 02:41:20.756: INFO: (1) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... 
(200; 22.221223ms) + Jul 27 02:41:20.756: INFO: (1) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test (200; 22.534551ms) + Jul 27 02:41:20.756: INFO: (1) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 22.633546ms) + Jul 27 02:41:20.756: INFO: (1) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 23.082065ms) + Jul 27 02:41:20.762: INFO: (1) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 28.872398ms) + Jul 27 02:41:20.764: INFO: (1) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 30.876735ms) + Jul 27 02:41:20.764: INFO: (1) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 30.895087ms) + Jul 27 02:41:20.764: INFO: (1) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 31.200572ms) + Jul 27 02:41:20.765: INFO: (1) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 31.707813ms) + Jul 27 02:41:20.784: INFO: (2) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 18.926483ms) + Jul 27 02:41:20.795: INFO: (2) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 30.239401ms) + Jul 27 02:41:20.795: INFO: (2) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 30.217117ms) + Jul 27 02:41:20.795: INFO: (2) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 30.330128ms) + Jul 27 02:41:20.796: INFO: (2) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 22.306438ms) + Jul 27 02:41:20.827: INFO: (3) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 22.432593ms) + Jul 27 02:41:20.827: INFO: (3) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 22.193845ms) + Jul 27 02:41:20.828: INFO: (3) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 22.680961ms) + Jul 27 02:41:20.828: INFO: (3) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 22.733748ms) + Jul 27 02:41:20.828: INFO: (3) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 20.821132ms) + Jul 27 02:41:20.859: INFO: (4) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 22.98459ms) + Jul 27 02:41:20.860: INFO: (4) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 23.950536ms) + Jul 27 02:41:20.860: INFO: (4) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 23.953914ms) + Jul 27 02:41:20.860: INFO: (4) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test (200; 19.553598ms) + Jul 27 02:41:20.898: INFO: (5) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 19.773319ms) + Jul 27 02:41:20.898: INFO: (5) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 19.735983ms) + Jul 27 02:41:20.898: INFO: (5) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... 
(200; 19.819787ms) + Jul 27 02:41:20.898: INFO: (5) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 20.547857ms) + Jul 27 02:41:20.902: INFO: (5) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 23.841179ms) + Jul 27 02:41:20.904: INFO: (5) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 26.564356ms) + Jul 27 02:41:20.905: INFO: (5) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 27.116243ms) + Jul 27 02:41:20.905: INFO: (5) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 27.35158ms) + Jul 27 02:41:20.906: INFO: (5) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 27.743523ms) + Jul 27 02:41:20.922: INFO: (6) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 16.449278ms) + Jul 27 02:41:20.924: INFO: (6) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 18.669632ms) + Jul 27 02:41:20.924: INFO: (6) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 20.035915ms) + Jul 27 02:41:20.927: INFO: (6) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 20.520978ms) + Jul 27 02:41:20.927: INFO: (6) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 20.828015ms) + Jul 27 02:41:20.927: INFO: (6) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 20.989383ms) + Jul 27 02:41:20.931: INFO: (6) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 25.068618ms) + Jul 27 02:41:20.931: INFO: (6) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 24.81439ms) + Jul 27 02:41:20.933: INFO: (6) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 27.223785ms) + Jul 27 02:41:20.933: INFO: (6) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 27.662467ms) + Jul 27 02:41:20.934: INFO: (6) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 27.962511ms) + Jul 27 02:41:20.934: INFO: (6) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 27.90829ms) + Jul 27 02:41:20.954: INFO: (7) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 19.462807ms) + Jul 27 02:41:20.954: INFO: (7) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 19.317754ms) + Jul 27 02:41:20.955: INFO: (7) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 20.944425ms) + Jul 27 02:41:20.956: INFO: (7) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 21.25717ms) + Jul 27 02:41:20.956: INFO: (7) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 21.677566ms) + Jul 27 02:41:20.956: INFO: (7) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 21.772468ms) + Jul 27 02:41:20.956: INFO: (7) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 15.728726ms) + Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... 
(200; 22.956233ms) + Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test (200; 23.113325ms) + Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 23.157433ms) + Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 23.644483ms) + Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 23.684489ms) + Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 23.624471ms) + Jul 27 02:41:20.987: INFO: (8) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 23.664132ms) + Jul 27 02:41:20.988: INFO: (8) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 24.200156ms) + Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 39.233908ms) + Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 39.355351ms) + Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 39.326578ms) + Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 39.358654ms) + Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 39.307206ms) + Jul 27 02:41:21.003: INFO: (8) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 39.363929ms) + Jul 27 02:41:21.038: INFO: (9) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 34.44041ms) + Jul 27 02:41:21.044: INFO: (9) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 39.660696ms) + Jul 27 02:41:21.045: INFO: (9) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 40.280369ms) + Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 40.479957ms) + Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 42.395641ms) + Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 40.934961ms) + Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 41.959015ms) + Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 42.926405ms) + Jul 27 02:41:21.046: INFO: (9) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 47.155786ms) + Jul 27 02:41:21.118: INFO: (10) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 46.593709ms) + Jul 27 02:41:21.119: INFO: (10) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 47.043886ms) + Jul 27 02:41:21.119: INFO: (10) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... 
(200; 47.575994ms) + Jul 27 02:41:21.120: INFO: (10) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 48.278041ms) + Jul 27 02:41:21.121: INFO: (10) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 49.357307ms) + Jul 27 02:41:21.121: INFO: (10) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 46.44914ms) + Jul 27 02:41:21.212: INFO: (11) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 48.265936ms) + Jul 27 02:41:21.212: INFO: (11) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 47.788311ms) + Jul 27 02:41:21.212: INFO: (11) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 48.589932ms) + Jul 27 02:41:21.213: INFO: (11) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 48.786566ms) + Jul 27 02:41:21.213: INFO: (11) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 49.964235ms) + Jul 27 02:41:21.226: INFO: (11) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 62.099062ms) + Jul 27 02:41:21.226: INFO: (11) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 61.930748ms) + Jul 27 02:41:21.253: INFO: (11) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 88.97409ms) + Jul 27 02:41:21.253: INFO: (11) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 89.69931ms) + Jul 27 02:41:21.253: INFO: (11) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 89.498677ms) + Jul 27 02:41:21.256: INFO: (11) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 91.923484ms) + Jul 27 02:41:21.302: INFO: (12) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 46.962993ms) + Jul 27 02:41:21.305: INFO: (12) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 47.490143ms) + Jul 27 02:41:21.306: INFO: (12) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... 
(200; 49.320353ms) + Jul 27 02:41:21.307: INFO: (12) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 50.629486ms) + Jul 27 02:41:21.309: INFO: (12) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 52.149474ms) + Jul 27 02:41:21.312: INFO: (12) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 55.371477ms) + Jul 27 02:41:21.312: INFO: (12) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 54.990736ms) + Jul 27 02:41:21.316: INFO: (12) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 58.672179ms) + Jul 27 02:41:21.342: INFO: (12) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 85.213517ms) + Jul 27 02:41:21.346: INFO: (12) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 89.848843ms) + Jul 27 02:41:21.346: INFO: (12) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 89.297773ms) + Jul 27 02:41:21.347: INFO: (12) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 89.763849ms) + Jul 27 02:41:21.391: INFO: (13) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 43.821787ms) + Jul 27 02:41:21.394: INFO: (13) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 45.481337ms) + Jul 27 02:41:21.398: INFO: (13) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 50.623607ms) + Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 50.745815ms) + Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... 
(200; 51.326688ms) + Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 51.663964ms) + Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 51.783639ms) + Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 51.182131ms) + Jul 27 02:41:21.399: INFO: (13) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 51.053462ms) + Jul 27 02:41:21.402: INFO: (13) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 54.160678ms) + Jul 27 02:41:21.406: INFO: (13) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 57.819569ms) + Jul 27 02:41:21.426: INFO: (13) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 77.850973ms) + Jul 27 02:41:21.433: INFO: (13) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 84.85128ms) + Jul 27 02:41:21.433: INFO: (13) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 85.485435ms) + Jul 27 02:41:21.434: INFO: (13) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 86.385316ms) + Jul 27 02:41:21.481: INFO: (14) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 46.313431ms) + Jul 27 02:41:21.483: INFO: (14) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 48.147425ms) + Jul 27 02:41:21.483: INFO: (14) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 48.898764ms) + Jul 27 02:41:21.483: INFO: (14) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 48.807364ms) + Jul 27 02:41:21.483: INFO: (14) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 48.455953ms) + Jul 27 02:41:21.484: INFO: (14) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 49.442366ms) + Jul 27 02:41:21.484: INFO: (14) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test (200; 49.658205ms) + Jul 27 02:41:21.487: INFO: (14) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 52.772328ms) + Jul 27 02:41:21.487: INFO: (14) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... 
(200; 52.093694ms) + Jul 27 02:41:21.500: INFO: (14) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 65.383953ms) + Jul 27 02:41:21.500: INFO: (14) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 65.191124ms) + Jul 27 02:41:21.527: INFO: (14) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 92.563098ms) + Jul 27 02:41:21.531: INFO: (14) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 96.062106ms) + Jul 27 02:41:21.531: INFO: (14) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 96.589654ms) + Jul 27 02:41:21.531: INFO: (14) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 96.589811ms) + Jul 27 02:41:21.589: INFO: (15) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 58.380317ms) + Jul 27 02:41:21.590: INFO: (15) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 58.376699ms) + Jul 27 02:41:21.590: INFO: (15) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 58.647587ms) + Jul 27 02:41:21.590: INFO: (15) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 58.7016ms) + Jul 27 02:41:21.590: INFO: (15) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 59.372745ms) + Jul 27 02:41:21.591: INFO: (15) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 59.488853ms) + Jul 27 02:41:21.591: INFO: (15) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... (200; 60.381986ms) + Jul 27 02:41:21.599: INFO: (15) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 67.556137ms) + Jul 27 02:41:21.606: INFO: (15) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 74.631266ms) + Jul 27 02:41:21.630: INFO: (15) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 98.950892ms) + Jul 27 02:41:21.631: INFO: (15) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 99.452087ms) + Jul 27 02:41:21.631: INFO: (15) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 99.549987ms) + Jul 27 02:41:21.631: INFO: (15) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 99.761174ms) + Jul 27 02:41:21.647: INFO: (16) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 15.674002ms) + Jul 27 02:41:21.650: INFO: (16) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 18.65559ms) + Jul 27 02:41:21.650: INFO: (16) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 19.083115ms) + Jul 27 02:41:21.650: INFO: (16) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... 
(200; 19.393621ms) + Jul 27 02:41:21.651: INFO: (16) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 20.228961ms) + Jul 27 02:41:21.652: INFO: (16) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 20.487356ms) + Jul 27 02:41:21.652: INFO: (16) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 20.828851ms) + Jul 27 02:41:21.653: INFO: (16) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 21.229413ms) + Jul 27 02:41:21.653: INFO: (16) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 21.293919ms) + Jul 27 02:41:21.653: INFO: (16) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 21.781821ms) + Jul 27 02:41:21.653: INFO: (16) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test (200; 17.593091ms) + Jul 27 02:41:21.676: INFO: (17) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 17.69454ms) + Jul 27 02:41:21.677: INFO: (17) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 19.26123ms) + Jul 27 02:41:21.678: INFO: (17) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 20.29493ms) + Jul 27 02:41:21.678: INFO: (17) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 20.362433ms) + Jul 27 02:41:21.679: INFO: (17) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 21.034106ms) + Jul 27 02:41:21.679: INFO: (17) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... (200; 21.23863ms) + Jul 27 02:41:21.680: INFO: (17) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 21.43358ms) + Jul 27 02:41:21.680: INFO: (17) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 21.409744ms) + Jul 27 02:41:21.683: INFO: (17) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 24.587158ms) + Jul 27 02:41:21.684: INFO: (17) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 25.75596ms) + Jul 27 02:41:21.685: INFO: (17) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 26.899631ms) + Jul 27 02:41:21.685: INFO: (17) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 27.021122ms) + Jul 27 02:41:21.685: INFO: (17) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 27.174013ms) + Jul 27 02:41:21.686: INFO: (17) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 27.375001ms) + Jul 27 02:41:21.708: INFO: (18) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 22.067939ms) + Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:1080/proxy/: ... 
(200; 23.350879ms) + Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 23.153762ms) + Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 23.253716ms) + Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 23.34906ms) + Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 23.379041ms) + Jul 27 02:41:21.709: INFO: (18) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 23.431722ms) + Jul 27 02:41:21.710: INFO: (18) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:1080/proxy/: test<... (200; 23.873768ms) + Jul 27 02:41:21.710: INFO: (18) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: test<... (200; 18.73101ms) + Jul 27 02:41:21.740: INFO: (19) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 24.51348ms) + Jul 27 02:41:21.740: INFO: (19) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:160/proxy/: foo (200; 23.840531ms) + Jul 27 02:41:21.740: INFO: (19) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 24.510631ms) + Jul 27 02:41:21.740: INFO: (19) /api/v1/namespaces/proxy-1734/pods/proxy-service-f2l4n-pfktr/proxy/: test (200; 24.303232ms) + Jul 27 02:41:21.741: INFO: (19) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:462/proxy/: tls qux (200; 24.191305ms) + Jul 27 02:41:21.741: INFO: (19) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:460/proxy/: tls baz (200; 24.87729ms) + Jul 27 02:41:21.741: INFO: (19) /api/v1/namespaces/proxy-1734/pods/https:proxy-service-f2l4n-pfktr:443/proxy/: ... 
(200; 24.450802ms) + Jul 27 02:41:21.741: INFO: (19) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname2/proxy/: bar (200; 25.530046ms) + Jul 27 02:41:21.744: INFO: (19) /api/v1/namespaces/proxy-1734/pods/http:proxy-service-f2l4n-pfktr:162/proxy/: bar (200; 28.663557ms) + Jul 27 02:41:21.744: INFO: (19) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname2/proxy/: tls qux (200; 28.563385ms) + Jul 27 02:41:21.747: INFO: (19) /api/v1/namespaces/proxy-1734/services/https:proxy-service-f2l4n:tlsportname1/proxy/: tls baz (200; 30.698791ms) + Jul 27 02:41:21.747: INFO: (19) /api/v1/namespaces/proxy-1734/services/proxy-service-f2l4n:portname1/proxy/: foo (200; 30.965284ms) + Jul 27 02:41:21.747: INFO: (19) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname2/proxy/: bar (200; 31.460336ms) + Jul 27 02:41:21.747: INFO: (19) /api/v1/namespaces/proxy-1734/services/http:proxy-service-f2l4n:portname1/proxy/: foo (200; 30.890465ms) + STEP: deleting ReplicationController proxy-service-f2l4n in namespace proxy-1734, will wait for the garbage collector to delete the pods 07/27/23 02:41:21.747 + Jul 27 02:41:21.836: INFO: Deleting ReplicationController proxy-service-f2l4n took: 23.844387ms + Jul 27 02:41:21.937: INFO: Terminating ReplicationController proxy-service-f2l4n pods took: 100.63913ms + [AfterEach] version v1 test/e2e/framework/node/init/init.go:32 - Jun 12 22:04:37.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + Jul 27 02:41:23.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] version v1 test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] version v1 dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] version v1 tear down framework | framework.go:193 - STEP: Destroying namespace "projected-8417" for this suite. 06/12/23 22:04:37.522 + STEP: Destroying namespace "proxy-1734" for this suite. 
07/27/23 02:41:23.257 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] - works for multiple CRDs of different groups [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:276 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 +[BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:04:37.544 -Jun 12 22:04:37.544: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 22:04:37.547 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:04:37.59 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:04:37.613 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:41:23.284 +Jul 27 02:41:23.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:41:23.284 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:23.331 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:23.344 +[BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 -[It] works for multiple CRDs of different groups [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:276 -STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation 06/12/23 22:04:37.636 -Jun 12 22:04:37.638: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 22:04:44.836: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 +Jul 27 02:41:23.384: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node +STEP: Creating configMap with name cm-test-opt-del-8ff6ec78-65d6-4172-b558-08f11b7dd735 07/27/23 02:41:23.384 +STEP: Creating configMap with name cm-test-opt-upd-0216ed02-d24c-4b27-9694-5c739df2ca31 07/27/23 02:41:23.404 +STEP: Creating the pod 07/27/23 02:41:23.425 +Jul 27 02:41:23.457: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2" in namespace "projected-4904" to be "running and ready" +Jul 27 02:41:23.474: INFO: Pod "pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.786675ms +Jul 27 02:41:23.475: INFO: The phase of Pod pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:41:25.486: INFO: Pod "pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.029754359s +Jul 27 02:41:25.486: INFO: The phase of Pod pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2 is Running (Ready = true) +Jul 27 02:41:25.486: INFO: Pod "pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2" satisfied condition "running and ready" +STEP: Deleting configmap cm-test-opt-del-8ff6ec78-65d6-4172-b558-08f11b7dd735 07/27/23 02:41:25.554 +STEP: Updating configmap cm-test-opt-upd-0216ed02-d24c-4b27-9694-5c739df2ca31 07/27/23 02:41:25.582 +STEP: Creating configMap with name cm-test-opt-create-d4c83e51-b9ed-49ad-876e-47337839ffb7 07/27/23 02:41:25.6 +STEP: waiting to observe update in volume 07/27/23 02:41:25.618 +[AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 -Jun 12 22:05:14.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +Jul 27 02:41:27.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 -STEP: Destroying namespace "crd-publish-openapi-2734" for this suite. 06/12/23 22:05:14.187 +STEP: Destroying namespace "projected-4904" for this suite. 07/27/23 02:41:27.719 ------------------------------ -• [SLOW TEST] [36.682 seconds] -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - works for multiple CRDs of different groups [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:276 +• [4.461 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:04:37.544 - Jun 12 22:04:37.544: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 22:04:37.547 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:04:37.59 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:04:37.613 - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:41:23.284 + Jul 27 02:41:23.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:41:23.284 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:23.331 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:23.344 + [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 - [It] works for multiple CRDs of different groups [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:276 - STEP: CRs in different groups 
(two CRDs) show up in OpenAPI documentation 06/12/23 22:04:37.636 - Jun 12 22:04:37.638: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 22:04:44.836: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 + Jul 27 02:41:23.384: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node + STEP: Creating configMap with name cm-test-opt-del-8ff6ec78-65d6-4172-b558-08f11b7dd735 07/27/23 02:41:23.384 + STEP: Creating configMap with name cm-test-opt-upd-0216ed02-d24c-4b27-9694-5c739df2ca31 07/27/23 02:41:23.404 + STEP: Creating the pod 07/27/23 02:41:23.425 + Jul 27 02:41:23.457: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2" in namespace "projected-4904" to be "running and ready" + Jul 27 02:41:23.474: INFO: Pod "pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.786675ms + Jul 27 02:41:23.475: INFO: The phase of Pod pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:41:25.486: INFO: Pod "pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2": Phase="Running", Reason="", readiness=true. Elapsed: 2.029754359s + Jul 27 02:41:25.486: INFO: The phase of Pod pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2 is Running (Ready = true) + Jul 27 02:41:25.486: INFO: Pod "pod-projected-configmaps-50abd5c8-eea9-49bf-94bc-76196b7afeb2" satisfied condition "running and ready" + STEP: Deleting configmap cm-test-opt-del-8ff6ec78-65d6-4172-b558-08f11b7dd735 07/27/23 02:41:25.554 + STEP: Updating configmap cm-test-opt-upd-0216ed02-d24c-4b27-9694-5c739df2ca31 07/27/23 02:41:25.582 + STEP: Creating configMap with name cm-test-opt-create-d4c83e51-b9ed-49ad-876e-47337839ffb7 07/27/23 02:41:25.6 + STEP: waiting to observe update in volume 07/27/23 02:41:25.618 + [AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 - Jun 12 22:05:14.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + Jul 27 02:41:27.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 - STEP: Destroying namespace "crd-publish-openapi-2734" for this suite. 06/12/23 22:05:14.187 + STEP: Destroying namespace "projected-4904" for this suite. 
07/27/23 02:41:27.719 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSS ------------------------------ -[sig-node] Secrets - should fail to create secret due to empty secret key [Conformance] - test/e2e/common/node/secrets.go:140 -[BeforeEach] [sig-node] Secrets +[sig-node] Containers + should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 +[BeforeEach] [sig-node] Containers set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:05:14.292 -Jun 12 22:05:14.292: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 22:05:14.295 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:14.402 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:14.469 -[BeforeEach] [sig-node] Secrets +STEP: Creating a kubernetes client 07/27/23 02:41:27.745 +Jul 27 02:41:27.745: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename containers 07/27/23 02:41:27.746 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:27.788 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:27.8 +[BeforeEach] [sig-node] Containers test/e2e/framework/metrics/init/init.go:31 -[It] should fail to create secret due to empty secret key [Conformance] - test/e2e/common/node/secrets.go:140 -STEP: Creating projection with secret that has name secret-emptykey-test-f845c789-22b0-41fb-a8b9-707c2bdcd21b 06/12/23 22:05:14.549 -[AfterEach] [sig-node] Secrets +[It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 +STEP: Creating a pod to test override arguments 07/27/23 02:41:27.817 +W0727 02:41:27.843681 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:41:27.843: INFO: Waiting up to 5m0s for pod "client-containers-e6644d63-243c-43db-a33e-6709ead04e95" in namespace "containers-7662" to be "Succeeded or Failed" +Jul 27 02:41:27.855: INFO: Pod "client-containers-e6644d63-243c-43db-a33e-6709ead04e95": Phase="Pending", Reason="", readiness=false. Elapsed: 11.330608ms +Jul 27 02:41:29.867: INFO: Pod "client-containers-e6644d63-243c-43db-a33e-6709ead04e95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023158836s +Jul 27 02:41:31.882: INFO: Pod "client-containers-e6644d63-243c-43db-a33e-6709ead04e95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038341216s +STEP: Saw pod success 07/27/23 02:41:31.882 +Jul 27 02:41:31.882: INFO: Pod "client-containers-e6644d63-243c-43db-a33e-6709ead04e95" satisfied condition "Succeeded or Failed" +Jul 27 02:41:31.891: INFO: Trying to get logs from node 10.245.128.19 pod client-containers-e6644d63-243c-43db-a33e-6709ead04e95 container agnhost-container: +STEP: delete the pod 07/27/23 02:41:31.91 +Jul 27 02:41:31.931: INFO: Waiting for pod client-containers-e6644d63-243c-43db-a33e-6709ead04e95 to disappear +Jul 27 02:41:31.941: INFO: Pod client-containers-e6644d63-243c-43db-a33e-6709ead04e95 no longer exists +[AfterEach] [sig-node] Containers test/e2e/framework/node/init/init.go:32 -Jun 12 22:05:14.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Secrets +Jul 27 02:41:31.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Secrets +[DeferCleanup (Each)] [sig-node] Containers dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Secrets +[DeferCleanup (Each)] [sig-node] Containers tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-6639" for this suite. 06/12/23 22:05:14.582 +STEP: Destroying namespace "containers-7662" for this suite. 07/27/23 02:41:31.958 ------------------------------ -• [0.399 seconds] -[sig-node] Secrets +• [4.238 seconds] +[sig-node] Containers test/e2e/common/node/framework.go:23 - should fail to create secret due to empty secret key [Conformance] - test/e2e/common/node/secrets.go:140 + should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Secrets + [BeforeEach] [sig-node] Containers set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:05:14.292 - Jun 12 22:05:14.292: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 22:05:14.295 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:14.402 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:14.469 - [BeforeEach] [sig-node] Secrets + STEP: Creating a kubernetes client 07/27/23 02:41:27.745 + Jul 27 02:41:27.745: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename containers 07/27/23 02:41:27.746 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:27.788 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:27.8 + [BeforeEach] [sig-node] Containers test/e2e/framework/metrics/init/init.go:31 - [It] should fail to create secret due to empty secret key [Conformance] - test/e2e/common/node/secrets.go:140 - STEP: Creating projection with secret that has name secret-emptykey-test-f845c789-22b0-41fb-a8b9-707c2bdcd21b 06/12/23 22:05:14.549 - [AfterEach] [sig-node] Secrets + [It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 + STEP: Creating a pod to test override arguments 07/27/23 02:41:27.817 + W0727 02:41:27.843681 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "agnhost-container" must set 
securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:41:27.843: INFO: Waiting up to 5m0s for pod "client-containers-e6644d63-243c-43db-a33e-6709ead04e95" in namespace "containers-7662" to be "Succeeded or Failed" + Jul 27 02:41:27.855: INFO: Pod "client-containers-e6644d63-243c-43db-a33e-6709ead04e95": Phase="Pending", Reason="", readiness=false. Elapsed: 11.330608ms + Jul 27 02:41:29.867: INFO: Pod "client-containers-e6644d63-243c-43db-a33e-6709ead04e95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023158836s + Jul 27 02:41:31.882: INFO: Pod "client-containers-e6644d63-243c-43db-a33e-6709ead04e95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038341216s + STEP: Saw pod success 07/27/23 02:41:31.882 + Jul 27 02:41:31.882: INFO: Pod "client-containers-e6644d63-243c-43db-a33e-6709ead04e95" satisfied condition "Succeeded or Failed" + Jul 27 02:41:31.891: INFO: Trying to get logs from node 10.245.128.19 pod client-containers-e6644d63-243c-43db-a33e-6709ead04e95 container agnhost-container: + STEP: delete the pod 07/27/23 02:41:31.91 + Jul 27 02:41:31.931: INFO: Waiting for pod client-containers-e6644d63-243c-43db-a33e-6709ead04e95 to disappear + Jul 27 02:41:31.941: INFO: Pod client-containers-e6644d63-243c-43db-a33e-6709ead04e95 no longer exists + [AfterEach] [sig-node] Containers test/e2e/framework/node/init/init.go:32 - Jun 12 22:05:14.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Secrets + Jul 27 02:41:31.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Secrets + [DeferCleanup (Each)] [sig-node] Containers dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Secrets + [DeferCleanup (Each)] [sig-node] Containers tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-6639" for this suite. 06/12/23 22:05:14.582 + STEP: Destroying namespace "containers-7662" for this suite. 
07/27/23 02:41:31.958 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Downward API volume - should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:261 -[BeforeEach] [sig-storage] Downward API volume +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 +[BeforeEach] [sig-node] Downward API set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:05:14.701 -Jun 12 22:05:14.701: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 22:05:14.705 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:14.766 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:14.799 -[BeforeEach] [sig-storage] Downward API volume +STEP: Creating a kubernetes client 07/27/23 02:41:31.984 +Jul 27 02:41:31.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:41:31.985 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:32.073 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:32.086 +[BeforeEach] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 -[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:261 -STEP: Creating a pod to test downward API volume plugin 06/12/23 22:05:14.842 -Jun 12 22:05:14.873: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814" in namespace "downward-api-4443" to be "Succeeded or Failed" -Jun 12 22:05:14.944: INFO: Pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814": Phase="Pending", Reason="", readiness=false. Elapsed: 70.446349ms -Jun 12 22:05:16.958: INFO: Pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084063375s -Jun 12 22:05:18.958: INFO: Pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084837531s -Jun 12 22:05:20.959: INFO: Pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.085166233s -STEP: Saw pod success 06/12/23 22:05:20.959 -Jun 12 22:05:20.959: INFO: Pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814" satisfied condition "Succeeded or Failed" -Jun 12 22:05:20.987: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814 container client-container: -STEP: delete the pod 06/12/23 22:05:21.141 -Jun 12 22:05:21.191: INFO: Waiting for pod downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814 to disappear -Jun 12 22:05:21.203: INFO: Pod downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814 no longer exists -[AfterEach] [sig-storage] Downward API volume +[It] should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 +STEP: Creating a pod to test downward api env vars 07/27/23 02:41:32.098 +W0727 02:41:32.130196 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "dapi-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "dapi-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "dapi-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "dapi-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:41:32.130: INFO: Waiting up to 5m0s for pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451" in namespace "downward-api-6155" to be "Succeeded or Failed" +Jul 27 02:41:32.140: INFO: Pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451": Phase="Pending", Reason="", readiness=false. Elapsed: 10.218496ms +Jul 27 02:41:34.151: INFO: Pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021138097s +Jul 27 02:41:36.152: INFO: Pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022398072s +Jul 27 02:41:38.151: INFO: Pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021158565s +STEP: Saw pod success 07/27/23 02:41:38.151 +Jul 27 02:41:38.151: INFO: Pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451" satisfied condition "Succeeded or Failed" +Jul 27 02:41:38.161: INFO: Trying to get logs from node 10.245.128.19 pod downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451 container dapi-container: +STEP: delete the pod 07/27/23 02:41:38.187 +Jul 27 02:41:38.214: INFO: Waiting for pod downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451 to disappear +Jul 27 02:41:38.225: INFO: Pod downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451 no longer exists +[AfterEach] [sig-node] Downward API test/e2e/framework/node/init/init.go:32 -Jun 12 22:05:21.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Downward API volume +Jul 27 02:41:38.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-node] Downward API dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-node] Downward API tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-4443" for this suite. 
06/12/23 22:05:21.223 +STEP: Destroying namespace "downward-api-6155" for this suite. 07/27/23 02:41:38.239 ------------------------------ -• [SLOW TEST] [6.546 seconds] -[sig-storage] Downward API volume -test/e2e/common/storage/framework.go:23 - should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:261 +• [SLOW TEST] [6.280 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Downward API volume + [BeforeEach] [sig-node] Downward API set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:05:14.701 - Jun 12 22:05:14.701: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 22:05:14.705 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:14.766 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:14.799 - [BeforeEach] [sig-storage] Downward API volume + STEP: Creating a kubernetes client 07/27/23 02:41:31.984 + Jul 27 02:41:31.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:41:31.985 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:32.073 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:32.086 + [BeforeEach] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 - [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:261 - STEP: Creating a pod to test downward API volume plugin 06/12/23 22:05:14.842 - Jun 12 22:05:14.873: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814" in namespace "downward-api-4443" to be "Succeeded or Failed" - Jun 12 22:05:14.944: INFO: Pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814": Phase="Pending", Reason="", readiness=false. Elapsed: 70.446349ms - Jun 12 22:05:16.958: INFO: Pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084063375s - Jun 12 22:05:18.958: INFO: Pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084837531s - Jun 12 22:05:20.959: INFO: Pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.085166233s - STEP: Saw pod success 06/12/23 22:05:20.959 - Jun 12 22:05:20.959: INFO: Pod "downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814" satisfied condition "Succeeded or Failed" - Jun 12 22:05:20.987: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814 container client-container: - STEP: delete the pod 06/12/23 22:05:21.141 - Jun 12 22:05:21.191: INFO: Waiting for pod downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814 to disappear - Jun 12 22:05:21.203: INFO: Pod downwardapi-volume-7bac6b36-632f-4ba9-a797-5ef934c9b814 no longer exists - [AfterEach] [sig-storage] Downward API volume + [It] should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 + STEP: Creating a pod to test downward api env vars 07/27/23 02:41:32.098 + W0727 02:41:32.130196 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "dapi-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "dapi-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "dapi-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "dapi-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:41:32.130: INFO: Waiting up to 5m0s for pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451" in namespace "downward-api-6155" to be "Succeeded or Failed" + Jul 27 02:41:32.140: INFO: Pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451": Phase="Pending", Reason="", readiness=false. Elapsed: 10.218496ms + Jul 27 02:41:34.151: INFO: Pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021138097s + Jul 27 02:41:36.152: INFO: Pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022398072s + Jul 27 02:41:38.151: INFO: Pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021158565s + STEP: Saw pod success 07/27/23 02:41:38.151 + Jul 27 02:41:38.151: INFO: Pod "downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451" satisfied condition "Succeeded or Failed" + Jul 27 02:41:38.161: INFO: Trying to get logs from node 10.245.128.19 pod downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451 container dapi-container: + STEP: delete the pod 07/27/23 02:41:38.187 + Jul 27 02:41:38.214: INFO: Waiting for pod downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451 to disappear + Jul 27 02:41:38.225: INFO: Pod downward-api-b4e8bfc8-47ad-44d7-9f97-c96d50dff451 no longer exists + [AfterEach] [sig-node] Downward API test/e2e/framework/node/init/init.go:32 - Jun 12 22:05:21.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Downward API volume + Jul 27 02:41:38.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-node] Downward API dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-node] Downward API tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-4443" for this suite. 
06/12/23 22:05:21.223 + STEP: Destroying namespace "downward-api-6155" for this suite. 07/27/23 02:41:38.239 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +SSSSS ------------------------------ -[sig-node] Pods Extended Pods Set QOS Class - should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] - test/e2e/node/pods.go:161 -[BeforeEach] [sig-node] Pods Extended +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:448 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:05:21.251 -Jun 12 22:05:21.252: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 22:05:21.254 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:21.301 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:21.312 -[BeforeEach] [sig-node] Pods Extended +STEP: Creating a kubernetes client 07/27/23 02:41:38.264 +Jul 27 02:41:38.264: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 02:41:38.265 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:38.307 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:38.319 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] Pods Set QOS Class - test/e2e/node/pods.go:152 -[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] - test/e2e/node/pods.go:161 -STEP: creating the pod 06/12/23 22:05:21.322 -STEP: submitting the pod to kubernetes 06/12/23 22:05:21.322 -STEP: verifying QOS class is set on the pod 06/12/23 22:05:21.355 -[AfterEach] [sig-node] Pods Extended +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:448 +STEP: Counting existing ResourceQuota 07/27/23 02:41:38.331 +STEP: Creating a ResourceQuota 07/27/23 02:41:43.341 +STEP: Ensuring resource quota status is calculated 07/27/23 02:41:43.357 +STEP: Creating a ReplicaSet 07/27/23 02:41:45.378 +STEP: Ensuring resource quota status captures replicaset creation 07/27/23 02:41:45.456 +STEP: Deleting a ReplicaSet 07/27/23 02:41:47.467 +STEP: Ensuring resource quota status released usage 07/27/23 02:41:47.481 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 22:05:21.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Pods Extended +Jul 27 02:41:49.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Pods Extended +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Pods Extended +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "pods-5281" for this suite. 06/12/23 22:05:21.391 +STEP: Destroying namespace "resourcequota-7071" for this suite. 
07/27/23 02:41:49.508 ------------------------------ -• [0.167 seconds] -[sig-node] Pods Extended -test/e2e/node/framework.go:23 - Pods Set QOS Class - test/e2e/node/pods.go:150 - should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] - test/e2e/node/pods.go:161 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Pods Extended - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:05:21.251 - Jun 12 22:05:21.252: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 22:05:21.254 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:21.301 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:21.312 - [BeforeEach] [sig-node] Pods Extended - test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] Pods Set QOS Class - test/e2e/node/pods.go:152 - [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] - test/e2e/node/pods.go:161 - STEP: creating the pod 06/12/23 22:05:21.322 - STEP: submitting the pod to kubernetes 06/12/23 22:05:21.322 - STEP: verifying QOS class is set on the pod 06/12/23 22:05:21.355 - [AfterEach] [sig-node] Pods Extended - test/e2e/framework/node/init/init.go:32 - Jun 12 22:05:21.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Pods Extended - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Pods Extended - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Pods Extended - tear down framework | framework.go:193 - STEP: Destroying namespace "pods-5281" for this suite. 06/12/23 22:05:21.391 - << End Captured GinkgoWriter Output ------------------------------- -SSSSSSSSSSSSSSS ------------------------------- -[sig-node] Pods - should be submitted and removed [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:226 -[BeforeEach] [sig-node] Pods - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:05:21.421 -Jun 12 22:05:21.422: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 22:05:21.426 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:21.479 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:21.496 -[BeforeEach] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 -[It] should be submitted and removed [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:226 -STEP: creating the pod 06/12/23 22:05:21.511 -STEP: setting up watch 06/12/23 22:05:21.511 -STEP: submitting the pod to kubernetes 06/12/23 22:05:21.626 -STEP: verifying the pod is in kubernetes 06/12/23 22:05:21.669 -STEP: verifying pod creation was observed 06/12/23 22:05:21.706 -Jun 12 22:05:21.706: INFO: Waiting up to 5m0s for pod "pod-submit-remove-204608b2-c0e9-4d3a-b59a-114365fd1205" in namespace "pods-4594" to be "running" -Jun 12 22:05:21.727: INFO: Pod "pod-submit-remove-204608b2-c0e9-4d3a-b59a-114365fd1205": Phase="Pending", Reason="", readiness=false. Elapsed: 20.991704ms -Jun 12 22:05:23.749: INFO: Pod "pod-submit-remove-204608b2-c0e9-4d3a-b59a-114365fd1205": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.042854639s -Jun 12 22:05:25.760: INFO: Pod "pod-submit-remove-204608b2-c0e9-4d3a-b59a-114365fd1205": Phase="Running", Reason="", readiness=true. Elapsed: 4.054273619s -Jun 12 22:05:25.760: INFO: Pod "pod-submit-remove-204608b2-c0e9-4d3a-b59a-114365fd1205" satisfied condition "running" -STEP: deleting the pod gracefully 06/12/23 22:05:25.797 -STEP: verifying pod deletion was observed 06/12/23 22:05:25.826 -[AfterEach] [sig-node] Pods - test/e2e/framework/node/init/init.go:32 -Jun 12 22:05:28.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Pods - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Pods - tear down framework | framework.go:193 -STEP: Destroying namespace "pods-4594" for this suite. 06/12/23 22:05:28.876 ------------------------------- -• [SLOW TEST] [7.479 seconds] -[sig-node] Pods -test/e2e/common/node/framework.go:23 - should be submitted and removed [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:226 +• [SLOW TEST] [11.266 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:448 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Pods + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:05:21.421 - Jun 12 22:05:21.422: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 22:05:21.426 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:21.479 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:21.496 - [BeforeEach] [sig-node] Pods + STEP: Creating a kubernetes client 07/27/23 02:41:38.264 + Jul 27 02:41:38.264: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 02:41:38.265 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:38.307 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:38.319 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:194 - [It] should be submitted and removed [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:226 - STEP: creating the pod 06/12/23 22:05:21.511 - STEP: setting up watch 06/12/23 22:05:21.511 - STEP: submitting the pod to kubernetes 06/12/23 22:05:21.626 - STEP: verifying the pod is in kubernetes 06/12/23 22:05:21.669 - STEP: verifying pod creation was observed 06/12/23 22:05:21.706 - Jun 12 22:05:21.706: INFO: Waiting up to 5m0s for pod "pod-submit-remove-204608b2-c0e9-4d3a-b59a-114365fd1205" in namespace "pods-4594" to be "running" - Jun 12 22:05:21.727: INFO: Pod "pod-submit-remove-204608b2-c0e9-4d3a-b59a-114365fd1205": Phase="Pending", Reason="", readiness=false. Elapsed: 20.991704ms - Jun 12 22:05:23.749: INFO: Pod "pod-submit-remove-204608b2-c0e9-4d3a-b59a-114365fd1205": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042854639s - Jun 12 22:05:25.760: INFO: Pod "pod-submit-remove-204608b2-c0e9-4d3a-b59a-114365fd1205": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.054273619s - Jun 12 22:05:25.760: INFO: Pod "pod-submit-remove-204608b2-c0e9-4d3a-b59a-114365fd1205" satisfied condition "running" - STEP: deleting the pod gracefully 06/12/23 22:05:25.797 - STEP: verifying pod deletion was observed 06/12/23 22:05:25.826 - [AfterEach] [sig-node] Pods + [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:448 + STEP: Counting existing ResourceQuota 07/27/23 02:41:38.331 + STEP: Creating a ResourceQuota 07/27/23 02:41:43.341 + STEP: Ensuring resource quota status is calculated 07/27/23 02:41:43.357 + STEP: Creating a ReplicaSet 07/27/23 02:41:45.378 + STEP: Ensuring resource quota status captures replicaset creation 07/27/23 02:41:45.456 + STEP: Deleting a ReplicaSet 07/27/23 02:41:47.467 + STEP: Ensuring resource quota status released usage 07/27/23 02:41:47.481 + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 22:05:28.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Pods + Jul 27 02:41:49.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Pods + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "pods-4594" for this suite. 06/12/23 22:05:28.876 + STEP: Destroying namespace "resourcequota-7071" for this suite. 07/27/23 02:41:49.508 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods - should support remote command execution over websockets [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:536 + should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 [BeforeEach] [sig-node] Pods set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:05:28.91 -Jun 12 22:05:28.910: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename pods 06/12/23 22:05:28.912 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:28.979 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:28.997 +STEP: Creating a kubernetes client 07/27/23 02:41:49.531 +Jul 27 02:41:49.531: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 02:41:49.532 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:49.579 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:49.591 [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:194 -[It] should support remote command execution over websockets [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:536 -Jun 12 22:05:29.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: creating the pod 06/12/23 22:05:29.015 -STEP: submitting the pod to kubernetes 06/12/23 22:05:29.015 -Jun 12 22:05:29.054: INFO: Waiting up to 5m0s for pod "pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8" in namespace "pods-4668" to be "running and ready" -Jun 12 
22:05:29.073: INFO: Pod "pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.338064ms -Jun 12 22:05:29.073: INFO: The phase of Pod pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:05:31.090: INFO: Pod "pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035871939s -Jun 12 22:05:31.090: INFO: The phase of Pod pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:05:33.087: INFO: Pod "pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8": Phase="Running", Reason="", readiness=true. Elapsed: 4.032506081s -Jun 12 22:05:33.087: INFO: The phase of Pod pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8 is Running (Ready = true) -Jun 12 22:05:33.087: INFO: Pod "pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8" satisfied condition "running and ready" +[It] should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 +STEP: Create a pod 07/27/23 02:41:49.604 +Jul 27 02:41:49.630: INFO: Waiting up to 5m0s for pod "pod-gk5m9" in namespace "pods-2306" to be "running" +Jul 27 02:41:49.643: INFO: Pod "pod-gk5m9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.327951ms +Jul 27 02:41:51.654: INFO: Pod "pod-gk5m9": Phase="Running", Reason="", readiness=true. Elapsed: 2.023003917s +Jul 27 02:41:51.654: INFO: Pod "pod-gk5m9" satisfied condition "running" +STEP: patching /status 07/27/23 02:41:51.654 +Jul 27 02:41:51.673: INFO: Status Message: "Patched by e2e test" and Reason: "E2E" [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 -Jun 12 22:05:33.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:41:51.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 -STEP: Destroying namespace "pods-4668" for this suite. 06/12/23 22:05:33.461 +STEP: Destroying namespace "pods-2306" for this suite. 
07/27/23 02:41:51.688 ------------------------------ -• [4.574 seconds] +• [2.186 seconds] [sig-node] Pods test/e2e/common/node/framework.go:23 - should support remote command execution over websockets [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:536 + should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-node] Pods set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:05:28.91 - Jun 12 22:05:28.910: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename pods 06/12/23 22:05:28.912 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:28.979 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:28.997 + STEP: Creating a kubernetes client 07/27/23 02:41:49.531 + Jul 27 02:41:49.531: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 02:41:49.532 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:49.579 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:49.591 [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:194 - [It] should support remote command execution over websockets [NodeConformance] [Conformance] - test/e2e/common/node/pods.go:536 - Jun 12 22:05:29.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: creating the pod 06/12/23 22:05:29.015 - STEP: submitting the pod to kubernetes 06/12/23 22:05:29.015 - Jun 12 22:05:29.054: INFO: Waiting up to 5m0s for pod "pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8" in namespace "pods-4668" to be "running and ready" - Jun 12 22:05:29.073: INFO: Pod "pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.338064ms - Jun 12 22:05:29.073: INFO: The phase of Pod pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:05:31.090: INFO: Pod "pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035871939s - Jun 12 22:05:31.090: INFO: The phase of Pod pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:05:33.087: INFO: Pod "pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8": Phase="Running", Reason="", readiness=true. Elapsed: 4.032506081s - Jun 12 22:05:33.087: INFO: The phase of Pod pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8 is Running (Ready = true) - Jun 12 22:05:33.087: INFO: Pod "pod-exec-websocket-a31c392e-4a33-44df-be90-f0417d81eeb8" satisfied condition "running and ready" + [It] should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 + STEP: Create a pod 07/27/23 02:41:49.604 + Jul 27 02:41:49.630: INFO: Waiting up to 5m0s for pod "pod-gk5m9" in namespace "pods-2306" to be "running" + Jul 27 02:41:49.643: INFO: Pod "pod-gk5m9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.327951ms + Jul 27 02:41:51.654: INFO: Pod "pod-gk5m9": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.023003917s + Jul 27 02:41:51.654: INFO: Pod "pod-gk5m9" satisfied condition "running" + STEP: patching /status 07/27/23 02:41:51.654 + Jul 27 02:41:51.673: INFO: Status Message: "Patched by e2e test" and Reason: "E2E" [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 - Jun 12 22:05:33.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:41:51.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 - STEP: Destroying namespace "pods-4668" for this suite. 06/12/23 22:05:33.461 + STEP: Destroying namespace "pods-2306" for this suite. 07/27/23 02:41:51.688 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSS +SSSSSS ------------------------------ -[sig-node] Security Context when creating containers with AllowPrivilegeEscalation - should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:609 -[BeforeEach] [sig-node] Security Context +[sig-node] Kubelet when scheduling an agnhost Pod with hostAliases + should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 +[BeforeEach] [sig-node] Kubelet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:05:33.486 -Jun 12 22:05:33.486: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename security-context-test 06/12/23 22:05:33.491 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:33.546 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:33.561 -[BeforeEach] [sig-node] Security Context +STEP: Creating a kubernetes client 07/27/23 02:41:51.717 +Jul 27 02:41:51.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubelet-test 07/27/23 02:41:51.718 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:51.758 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:51.771 +[BeforeEach] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Security Context - test/e2e/common/node/security_context.go:50 -[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:609 -Jun 12 22:05:33.608: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1" in namespace "security-context-test-6803" to be "Succeeded or Failed" -Jun 12 22:05:33.623: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.257478ms -Jun 12 22:05:35.636: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028423381s -Jun 12 22:05:37.637: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028547723s -Jun 12 22:05:39.639: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.031259859s -Jun 12 22:05:41.641: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033169095s -Jun 12 22:05:43.639: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.03063381s -Jun 12 22:05:45.637: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02917555s -Jun 12 22:05:47.643: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.034974212s -Jun 12 22:05:49.635: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.02730609s -Jun 12 22:05:49.636: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1" satisfied condition "Succeeded or Failed" -[AfterEach] [sig-node] Security Context +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 +STEP: Waiting for pod completion 07/27/23 02:41:51.813 +Jul 27 02:41:51.813: INFO: Waiting up to 3m0s for pod "agnhost-host-aliasesf4549932-0569-49da-ba42-a4a35a17ee7f" in namespace "kubelet-test-2838" to be "completed" +Jul 27 02:41:51.824: INFO: Pod "agnhost-host-aliasesf4549932-0569-49da-ba42-a4a35a17ee7f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.827872ms +Jul 27 02:41:53.835: INFO: Pod "agnhost-host-aliasesf4549932-0569-49da-ba42-a4a35a17ee7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022255876s +Jul 27 02:41:55.834: INFO: Pod "agnhost-host-aliasesf4549932-0569-49da-ba42-a4a35a17ee7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021055293s +Jul 27 02:41:55.834: INFO: Pod "agnhost-host-aliasesf4549932-0569-49da-ba42-a4a35a17ee7f" satisfied condition "completed" +[AfterEach] [sig-node] Kubelet test/e2e/framework/node/init/init.go:32 -Jun 12 22:05:49.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Security Context +Jul 27 02:41:55.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Security Context +[DeferCleanup (Each)] [sig-node] Kubelet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Security Context +[DeferCleanup (Each)] [sig-node] Kubelet tear down framework | framework.go:193 -STEP: Destroying namespace "security-context-test-6803" for this suite. 06/12/23 22:05:49.68 +STEP: Destroying namespace "kubelet-test-2838" for this suite. 
07/27/23 02:41:55.869 ------------------------------ -• [SLOW TEST] [16.214 seconds] -[sig-node] Security Context +• [4.175 seconds] +[sig-node] Kubelet test/e2e/common/node/framework.go:23 - when creating containers with AllowPrivilegeEscalation - test/e2e/common/node/security_context.go:555 - should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:609 + when scheduling an agnhost Pod with hostAliases + test/e2e/common/node/kubelet.go:140 + should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Security Context + [BeforeEach] [sig-node] Kubelet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:05:33.486 - Jun 12 22:05:33.486: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename security-context-test 06/12/23 22:05:33.491 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:33.546 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:33.561 - [BeforeEach] [sig-node] Security Context + STEP: Creating a kubernetes client 07/27/23 02:41:51.717 + Jul 27 02:41:51.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubelet-test 07/27/23 02:41:51.718 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:51.758 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:51.771 + [BeforeEach] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Security Context - test/e2e/common/node/security_context.go:50 - [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/node/security_context.go:609 - Jun 12 22:05:33.608: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1" in namespace "security-context-test-6803" to be "Succeeded or Failed" - Jun 12 22:05:33.623: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.257478ms - Jun 12 22:05:35.636: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028423381s - Jun 12 22:05:37.637: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028547723s - Jun 12 22:05:39.639: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031259859s - Jun 12 22:05:41.641: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033169095s - Jun 12 22:05:43.639: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.03063381s - Jun 12 22:05:45.637: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02917555s - Jun 12 22:05:47.643: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.034974212s - Jun 12 22:05:49.635: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.02730609s - Jun 12 22:05:49.636: INFO: Pod "alpine-nnp-false-c1a9f4bd-dcca-4824-a37c-e8f624adeaf1" satisfied condition "Succeeded or Failed" - [AfterEach] [sig-node] Security Context + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 + STEP: Waiting for pod completion 07/27/23 02:41:51.813 + Jul 27 02:41:51.813: INFO: Waiting up to 3m0s for pod "agnhost-host-aliasesf4549932-0569-49da-ba42-a4a35a17ee7f" in namespace "kubelet-test-2838" to be "completed" + Jul 27 02:41:51.824: INFO: Pod "agnhost-host-aliasesf4549932-0569-49da-ba42-a4a35a17ee7f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.827872ms + Jul 27 02:41:53.835: INFO: Pod "agnhost-host-aliasesf4549932-0569-49da-ba42-a4a35a17ee7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022255876s + Jul 27 02:41:55.834: INFO: Pod "agnhost-host-aliasesf4549932-0569-49da-ba42-a4a35a17ee7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021055293s + Jul 27 02:41:55.834: INFO: Pod "agnhost-host-aliasesf4549932-0569-49da-ba42-a4a35a17ee7f" satisfied condition "completed" + [AfterEach] [sig-node] Kubelet test/e2e/framework/node/init/init.go:32 - Jun 12 22:05:49.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Security Context + Jul 27 02:41:55.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Security Context + [DeferCleanup (Each)] [sig-node] Kubelet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Security Context + [DeferCleanup (Each)] [sig-node] Kubelet tear down framework | framework.go:193 - STEP: Destroying namespace "security-context-test-6803" for this suite. 06/12/23 22:05:49.68 + STEP: Destroying namespace "kubelet-test-2838" for this suite. 
07/27/23 02:41:55.869 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSS +SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should be able to deny custom resource creation, update and deletion [Conformance] - test/e2e/apimachinery/webhook.go:221 + should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:05:49.702 -Jun 12 22:05:49.703: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 22:05:49.706 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:49.764 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:49.783 +STEP: Creating a kubernetes client 07/27/23 02:41:55.893 +Jul 27 02:41:55.893: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 02:41:55.894 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:55.936 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:55.947 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 22:05:49.851 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 22:05:50.819 -STEP: Deploying the webhook pod 06/12/23 22:05:50.852 -STEP: Wait for the deployment to be ready 06/12/23 22:05:50.89 -Jun 12 22:05:50.914: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set -Jun 12 22:05:52.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 5, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 5, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 5, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 5, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 22:05:54.963 -STEP: Verifying the service has paired with the endpoint 06/12/23 22:05:55.023 -Jun 12 22:05:56.048: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should be able to deny custom resource creation, update and deletion [Conformance] - test/e2e/apimachinery/webhook.go:221 -Jun 12 22:05:56.062: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Registering the custom resource webhook via the AdmissionRegistration API 06/12/23 22:05:56.123 -STEP: Creating a custom resource that should be denied by the webhook 06/12/23 22:05:56.185 -STEP: Creating a custom resource whose deletion would be denied by the webhook 06/12/23 22:05:58.265 
-STEP: Updating the custom resource with disallowed data should be denied 06/12/23 22:05:58.285 -STEP: Deleting the custom resource should be denied 06/12/23 22:05:58.378 -STEP: Remove the offending key and value from the custom resource data 06/12/23 22:05:58.44 -STEP: Deleting the updated custom resource should be successful 06/12/23 22:05:58.51 +STEP: Setting up server cert 07/27/23 02:41:56.022 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:41:56.181 +STEP: Deploying the webhook pod 07/27/23 02:41:56.218 +STEP: Wait for the deployment to be ready 07/27/23 02:41:56.249 +Jul 27 02:41:56.271: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +Jul 27 02:41:58.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 2, 41, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 41, 56, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 41, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 41, 56, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service 07/27/23 02:42:00.317 +STEP: Verifying the service has paired with the endpoint 07/27/23 02:42:00.355 +Jul 27 02:42:01.356: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 +STEP: fetching the /apis discovery document 07/27/23 02:42:01.367 +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document 07/27/23 02:42:01.374 +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document 07/27/23 02:42:01.374 +STEP: fetching the /apis/admissionregistration.k8s.io discovery document 07/27/23 02:42:01.374 +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document 07/27/23 02:42:01.382 +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 07/27/23 02:42:01.382 +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 07/27/23 02:42:01.388 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:05:59.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:42:01.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:105 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] @@ -31056,49 +30067,48 @@ Jun 12 22:05:59.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-2736" 
for this suite. 06/12/23 22:05:59.31 -STEP: Destroying namespace "webhook-2736-markers" for this suite. 06/12/23 22:05:59.422 +STEP: Destroying namespace "webhook-9735" for this suite. 07/27/23 02:42:01.535 +STEP: Destroying namespace "webhook-9735-markers" for this suite. 07/27/23 02:42:01.562 ------------------------------ -• [SLOW TEST] [9.764 seconds] +• [SLOW TEST] [5.693 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/framework.go:23 - should be able to deny custom resource creation, update and deletion [Conformance] - test/e2e/apimachinery/webhook.go:221 + should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:05:49.702 - Jun 12 22:05:49.703: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 22:05:49.706 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:49.764 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:49.783 + STEP: Creating a kubernetes client 07/27/23 02:41:55.893 + Jul 27 02:41:55.893: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 02:41:55.894 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:41:55.936 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:41:55.947 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 22:05:49.851 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 22:05:50.819 - STEP: Deploying the webhook pod 06/12/23 22:05:50.852 - STEP: Wait for the deployment to be ready 06/12/23 22:05:50.89 - Jun 12 22:05:50.914: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set - Jun 12 22:05:52.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 5, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 5, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 5, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 5, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 22:05:54.963 - STEP: Verifying the service has paired with the endpoint 06/12/23 22:05:55.023 - Jun 12 22:05:56.048: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should be able to deny custom resource creation, update and deletion [Conformance] - test/e2e/apimachinery/webhook.go:221 - Jun 12 22:05:56.062: INFO: 
>>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Registering the custom resource webhook via the AdmissionRegistration API 06/12/23 22:05:56.123 - STEP: Creating a custom resource that should be denied by the webhook 06/12/23 22:05:56.185 - STEP: Creating a custom resource whose deletion would be denied by the webhook 06/12/23 22:05:58.265 - STEP: Updating the custom resource with disallowed data should be denied 06/12/23 22:05:58.285 - STEP: Deleting the custom resource should be denied 06/12/23 22:05:58.378 - STEP: Remove the offending key and value from the custom resource data 06/12/23 22:05:58.44 - STEP: Deleting the updated custom resource should be successful 06/12/23 22:05:58.51 + STEP: Setting up server cert 07/27/23 02:41:56.022 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:41:56.181 + STEP: Deploying the webhook pod 07/27/23 02:41:56.218 + STEP: Wait for the deployment to be ready 07/27/23 02:41:56.249 + Jul 27 02:41:56.271: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + Jul 27 02:41:58.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 2, 41, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 41, 56, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 41, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 41, 56, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} + STEP: Deploying the webhook service 07/27/23 02:42:00.317 + STEP: Verifying the service has paired with the endpoint 07/27/23 02:42:00.355 + Jul 27 02:42:01.356: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 + STEP: fetching the /apis discovery document 07/27/23 02:42:01.367 + STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document 07/27/23 02:42:01.374 + STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document 07/27/23 02:42:01.374 + STEP: fetching the /apis/admissionregistration.k8s.io discovery document 07/27/23 02:42:01.374 + STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document 07/27/23 02:42:01.382 + STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 07/27/23 02:42:01.382 + STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 07/27/23 02:42:01.388 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:05:59.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:42:01.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:105 [DeferCleanup (Each)] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] @@ -31107,9906 +30117,10298 @@ test/e2e/apimachinery/framework.go:23 dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-2736" for this suite. 06/12/23 22:05:59.31 - STEP: Destroying namespace "webhook-2736-markers" for this suite. 06/12/23 22:05:59.422 + STEP: Destroying namespace "webhook-9735" for this suite. 07/27/23 02:42:01.535 + STEP: Destroying namespace "webhook-9735-markers" for this suite. 07/27/23 02:42:01.562 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSS ------------------------------ -[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] - Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] - test/e2e/apps/statefulset.go:697 -[BeforeEach] [sig-apps] StatefulSet +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 +[BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:05:59.484 -Jun 12 22:05:59.485: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename statefulset 06/12/23 22:05:59.486 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:59.582 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:59.664 -[BeforeEach] [sig-apps] StatefulSet +STEP: Creating a kubernetes client 07/27/23 02:42:01.588 +Jul 27 02:42:01.588: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replicaset 07/27/23 02:42:01.589 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:42:01.654 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:42:01.667 +[BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 -[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 -STEP: Creating service test in namespace statefulset-8615 06/12/23 22:05:59.729 -[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] - test/e2e/apps/statefulset.go:697 -STEP: Creating stateful set ss in namespace statefulset-8615 06/12/23 22:05:59.794 -STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8615 06/12/23 22:05:59.849 -Jun 12 22:05:59.911: INFO: Found 0 stateful pods, waiting for 1 -Jun 12 22:06:09.944: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true -STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 06/12/23 22:06:09.944 -Jun 12 22:06:09.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' -Jun 12 22:06:10.453: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" -Jun 12 22:06:10.453: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" -Jun 12 22:06:10.453: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' - -Jun 12 22:06:10.490: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true -Jun 12 22:06:20.506: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false -Jun 12 22:06:20.506: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 22:06:20.566: INFO: POD NODE PHASE GRACE CONDITIONS -Jun 12 22:06:20.566: INFO: ss-0 10.138.75.70 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC }] -Jun 12 22:06:20.566: INFO: -Jun 12 22:06:20.566: INFO: StatefulSet ss has not reached scale 3, at 1 -Jun 12 22:06:21.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986343265s -Jun 12 22:06:22.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969100872s -Jun 12 22:06:23.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.920118497s -Jun 12 22:06:24.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.904470159s -Jun 12 22:06:25.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.888111726s -Jun 12 22:06:26.699: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.871504867s -Jun 12 22:06:27.736: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.853233221s -Jun 12 22:06:28.758: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.815550752s -Jun 12 22:06:29.858: INFO: Verifying statefulset ss doesn't scale past 3 for another 771.283194ms -STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8615 06/12/23 22:06:30.86 -Jun 12 22:06:30.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' -Jun 12 22:06:31.538: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" -Jun 12 22:06:31.538: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" -Jun 12 22:06:31.538: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - -Jun 12 22:06:31.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' -Jun 12 22:06:32.244: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" -Jun 12 22:06:32.245: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" -Jun 12 22:06:32.245: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - -Jun 12 22:06:32.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' -Jun 12 22:06:33.275: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't 
rename '/tmp/index.html': No such file or directory\n+ true\n" -Jun 12 22:06:33.275: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" -Jun 12 22:06:33.275: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - -Jun 12 22:06:33.296: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 22:06:33.296: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 22:06:33.296: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true -STEP: Scale down will not halt with unhealthy stateful pod 06/12/23 22:06:33.296 -Jun 12 22:06:33.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' -Jun 12 22:06:34.246: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" -Jun 12 22:06:34.246: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" -Jun 12 22:06:34.246: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - -Jun 12 22:06:34.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' -Jun 12 22:06:35.506: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" -Jun 12 22:06:35.506: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" -Jun 12 22:06:35.506: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - -Jun 12 22:06:35.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' -Jun 12 22:06:36.612: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" -Jun 12 22:06:36.614: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" -Jun 12 22:06:36.614: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - -Jun 12 22:06:36.614: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 22:06:36.628: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 -Jun 12 22:06:46.660: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false -Jun 12 22:06:46.660: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false -Jun 12 22:06:46.660: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false -Jun 12 22:06:46.708: INFO: POD NODE PHASE GRACE CONDITIONS -Jun 12 22:06:46.708: INFO: ss-0 10.138.75.70 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC }] -Jun 12 
22:06:46.708: INFO: ss-1 10.138.75.112 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] -Jun 12 22:06:46.708: INFO: ss-2 10.138.75.116 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] -Jun 12 22:06:46.708: INFO: -Jun 12 22:06:46.708: INFO: StatefulSet ss has not reached scale 0, at 3 -Jun 12 22:06:47.727: INFO: POD NODE PHASE GRACE CONDITIONS -Jun 12 22:06:47.727: INFO: ss-0 10.138.75.70 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC }] -Jun 12 22:06:47.727: INFO: ss-1 10.138.75.112 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] -Jun 12 22:06:47.728: INFO: ss-2 10.138.75.116 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] -Jun 12 22:06:47.728: INFO: -Jun 12 22:06:47.728: INFO: StatefulSet ss has not reached scale 0, at 3 -Jun 12 22:06:48.776: INFO: POD NODE PHASE GRACE CONDITIONS -Jun 12 22:06:48.777: INFO: ss-1 10.138.75.112 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] -Jun 12 22:06:48.777: INFO: ss-2 10.138.75.116 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] -Jun 12 22:06:48.777: INFO: -Jun 12 22:06:48.777: INFO: StatefulSet ss has not reached scale 0, at 2 -Jun 12 22:06:49.790: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.915621625s -Jun 12 22:06:50.802: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.902503485s -Jun 12 22:06:51.815: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.889307658s -Jun 12 22:06:52.836: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.876646185s -Jun 12 22:06:53.853: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.855605973s -Jun 12 22:06:54.874: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.838016656s -Jun 12 22:06:55.888: INFO: Verifying statefulset ss doesn't scale past 0 for another 818.286345ms -STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8615 06/12/23 22:06:56.889 -Jun 12 22:06:56.902: INFO: Scaling statefulset ss to 0 -Jun 12 22:06:56.942: INFO: Waiting for statefulset status.replicas updated to 0 -[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 -Jun 12 22:06:56.964: INFO: Deleting all statefulset in ns statefulset-8615 -Jun 12 22:06:56.976: INFO: Scaling statefulset ss to 0 -Jun 12 22:06:57.022: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 22:06:57.035: INFO: Deleting statefulset ss -[AfterEach] [sig-apps] StatefulSet +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 +STEP: Given a Pod with a 'name' label pod-adoption-release is created 07/27/23 02:42:01.681 +Jul 27 02:42:02.718: INFO: Waiting up to 5m0s for pod "pod-adoption-release" in namespace "replicaset-4801" to be "running and ready" +Jul 27 02:42:02.727: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. Elapsed: 9.446401ms +Jul 27 02:42:02.727: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:42:04.739: INFO: Pod "pod-adoption-release": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.021573709s +Jul 27 02:42:04.739: INFO: The phase of Pod pod-adoption-release is Running (Ready = true) +Jul 27 02:42:04.739: INFO: Pod "pod-adoption-release" satisfied condition "running and ready" +STEP: When a replicaset with a matching selector is created 07/27/23 02:42:04.749 +STEP: Then the orphan pod is adopted 07/27/23 02:42:04.764 +STEP: When the matched label of one of its pods change 07/27/23 02:42:05.785 +Jul 27 02:42:05.795: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released 07/27/23 02:42:05.826 +[AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 -Jun 12 22:06:57.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] StatefulSet +Jul 27 02:42:06.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 -STEP: Destroying namespace "statefulset-8615" for this suite. 06/12/23 22:06:57.102 +STEP: Destroying namespace "replicaset-4801" for this suite. 07/27/23 02:42:06.863 ------------------------------ -• [SLOW TEST] [57.642 seconds] -[sig-apps] StatefulSet +• [SLOW TEST] [5.299 seconds] +[sig-apps] ReplicaSet test/e2e/apps/framework.go:23 - Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:103 - Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] - test/e2e/apps/statefulset.go:697 + should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] StatefulSet + [BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:05:59.484 - Jun 12 22:05:59.485: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename statefulset 06/12/23 22:05:59.486 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:05:59.582 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:05:59.664 - [BeforeEach] [sig-apps] StatefulSet + STEP: Creating a kubernetes client 07/27/23 02:42:01.588 + Jul 27 02:42:01.588: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replicaset 07/27/23 02:42:01.589 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:42:01.654 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:42:01.667 + [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 - [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 - STEP: Creating service test in namespace statefulset-8615 06/12/23 22:05:59.729 - [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] - test/e2e/apps/statefulset.go:697 - STEP: Creating stateful set ss in namespace statefulset-8615 06/12/23 22:05:59.794 - STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8615 06/12/23 22:05:59.849 - Jun 12 22:05:59.911: 
INFO: Found 0 stateful pods, waiting for 1 - Jun 12 22:06:09.944: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true - STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 06/12/23 22:06:09.944 - Jun 12 22:06:09.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' - Jun 12 22:06:10.453: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" - Jun 12 22:06:10.453: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" - Jun 12 22:06:10.453: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - - Jun 12 22:06:10.490: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true - Jun 12 22:06:20.506: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false - Jun 12 22:06:20.506: INFO: Waiting for statefulset status.replicas updated to 0 - Jun 12 22:06:20.566: INFO: POD NODE PHASE GRACE CONDITIONS - Jun 12 22:06:20.566: INFO: ss-0 10.138.75.70 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC }] - Jun 12 22:06:20.566: INFO: - Jun 12 22:06:20.566: INFO: StatefulSet ss has not reached scale 3, at 1 - Jun 12 22:06:21.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986343265s - Jun 12 22:06:22.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969100872s - Jun 12 22:06:23.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.920118497s - Jun 12 22:06:24.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.904470159s - Jun 12 22:06:25.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.888111726s - Jun 12 22:06:26.699: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.871504867s - Jun 12 22:06:27.736: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.853233221s - Jun 12 22:06:28.758: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.815550752s - Jun 12 22:06:29.858: INFO: Verifying statefulset ss doesn't scale past 3 for another 771.283194ms - STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8615 06/12/23 22:06:30.86 - Jun 12 22:06:30.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' - Jun 12 22:06:31.538: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" - Jun 12 22:06:31.538: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" - Jun 12 22:06:31.538: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - - Jun 12 22:06:31.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 
--namespace=statefulset-8615 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' - Jun 12 22:06:32.244: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" - Jun 12 22:06:32.245: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" - Jun 12 22:06:32.245: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - - Jun 12 22:06:32.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' - Jun 12 22:06:33.275: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" - Jun 12 22:06:33.275: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" - Jun 12 22:06:33.275: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' - - Jun 12 22:06:33.296: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 22:06:33.296: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 22:06:33.296: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true - STEP: Scale down will not halt with unhealthy stateful pod 06/12/23 22:06:33.296 - Jun 12 22:06:33.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' - Jun 12 22:06:34.246: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" - Jun 12 22:06:34.246: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" - Jun 12 22:06:34.246: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - - Jun 12 22:06:34.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' - Jun 12 22:06:35.506: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" - Jun 12 22:06:35.506: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" - Jun 12 22:06:35.506: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - - Jun 12 22:06:35.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=statefulset-8615 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' - Jun 12 22:06:36.612: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" - Jun 12 22:06:36.614: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" - Jun 12 22:06:36.614: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' - - Jun 12 22:06:36.614: INFO: Waiting for statefulset status.replicas updated to 0 - Jun 12 22:06:36.628: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 - Jun 12 22:06:46.660: INFO: Waiting 
for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false - Jun 12 22:06:46.660: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false - Jun 12 22:06:46.660: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false - Jun 12 22:06:46.708: INFO: POD NODE PHASE GRACE CONDITIONS - Jun 12 22:06:46.708: INFO: ss-0 10.138.75.70 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC }] - Jun 12 22:06:46.708: INFO: ss-1 10.138.75.112 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] - Jun 12 22:06:46.708: INFO: ss-2 10.138.75.116 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] - Jun 12 22:06:46.708: INFO: - Jun 12 22:06:46.708: INFO: StatefulSet ss has not reached scale 0, at 3 - Jun 12 22:06:47.727: INFO: POD NODE PHASE GRACE CONDITIONS - Jun 12 22:06:47.727: INFO: ss-0 10.138.75.70 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:05:59 +0000 UTC }] - Jun 12 22:06:47.727: INFO: ss-1 10.138.75.112 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] - Jun 12 22:06:47.728: INFO: ss-2 10.138.75.116 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC 
}] - Jun 12 22:06:47.728: INFO: - Jun 12 22:06:47.728: INFO: StatefulSet ss has not reached scale 0, at 3 - Jun 12 22:06:48.776: INFO: POD NODE PHASE GRACE CONDITIONS - Jun 12 22:06:48.777: INFO: ss-1 10.138.75.112 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] - Jun 12 22:06:48.777: INFO: ss-2 10.138.75.116 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-12 22:06:20 +0000 UTC }] - Jun 12 22:06:48.777: INFO: - Jun 12 22:06:48.777: INFO: StatefulSet ss has not reached scale 0, at 2 - Jun 12 22:06:49.790: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.915621625s - Jun 12 22:06:50.802: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.902503485s - Jun 12 22:06:51.815: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.889307658s - Jun 12 22:06:52.836: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.876646185s - Jun 12 22:06:53.853: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.855605973s - Jun 12 22:06:54.874: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.838016656s - Jun 12 22:06:55.888: INFO: Verifying statefulset ss doesn't scale past 0 for another 818.286345ms - STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8615 06/12/23 22:06:56.889 - Jun 12 22:06:56.902: INFO: Scaling statefulset ss to 0 - Jun 12 22:06:56.942: INFO: Waiting for statefulset status.replicas updated to 0 - [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 - Jun 12 22:06:56.964: INFO: Deleting all statefulset in ns statefulset-8615 - Jun 12 22:06:56.976: INFO: Scaling statefulset ss to 0 - Jun 12 22:06:57.022: INFO: Waiting for statefulset status.replicas updated to 0 - Jun 12 22:06:57.035: INFO: Deleting statefulset ss - [AfterEach] [sig-apps] StatefulSet + [It] should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 + STEP: Given a Pod with a 'name' label pod-adoption-release is created 07/27/23 02:42:01.681 + Jul 27 02:42:02.718: INFO: Waiting up to 5m0s for pod "pod-adoption-release" in namespace "replicaset-4801" to be "running and ready" + Jul 27 02:42:02.727: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. Elapsed: 9.446401ms + Jul 27 02:42:02.727: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:42:04.739: INFO: Pod "pod-adoption-release": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.021573709s + Jul 27 02:42:04.739: INFO: The phase of Pod pod-adoption-release is Running (Ready = true) + Jul 27 02:42:04.739: INFO: Pod "pod-adoption-release" satisfied condition "running and ready" + STEP: When a replicaset with a matching selector is created 07/27/23 02:42:04.749 + STEP: Then the orphan pod is adopted 07/27/23 02:42:04.764 + STEP: When the matched label of one of its pods change 07/27/23 02:42:05.785 + Jul 27 02:42:05.795: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 + STEP: Then the pod is released 07/27/23 02:42:05.826 + [AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 - Jun 12 22:06:57.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] StatefulSet + Jul 27 02:42:06.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 - STEP: Destroying namespace "statefulset-8615" for this suite. 06/12/23 22:06:57.102 + STEP: Destroying namespace "replicaset-4801" for this suite. 07/27/23 02:42:06.863 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-auth] ServiceAccounts - should mount an API token into pods [Conformance] - test/e2e/auth/service_accounts.go:78 -[BeforeEach] [sig-auth] ServiceAccounts +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 +[BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:06:57.13 -Jun 12 22:06:57.131: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename svcaccounts 06/12/23 22:06:57.134 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:06:57.193 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:06:57.207 -[BeforeEach] [sig-auth] ServiceAccounts +STEP: Creating a kubernetes client 07/27/23 02:42:06.887 +Jul 27 02:42:06.887: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename deployment 07/27/23 02:42:06.888 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:42:06.937 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:42:06.95 +[BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 -[It] should mount an API token into pods [Conformance] - test/e2e/auth/service_accounts.go:78 -Jun 12 22:06:57.274: INFO: Waiting up to 5m0s for pod "pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59" in namespace "svcaccounts-9341" to be "running" -Jun 12 22:06:57.290: INFO: Pod "pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59": Phase="Pending", Reason="", readiness=false. Elapsed: 15.589326ms -Jun 12 22:06:59.303: INFO: Pod "pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028851238s -Jun 12 22:07:01.309: INFO: Pod "pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.034053534s -Jun 12 22:07:01.309: INFO: Pod "pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59" satisfied condition "running" -STEP: reading a file in the container 06/12/23 22:07:01.309 -Jun 12 22:07:01.309: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9341 pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' -STEP: reading a file in the container 06/12/23 22:07:02.169 -Jun 12 22:07:02.170: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9341 pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' -STEP: reading a file in the container 06/12/23 22:07:03.224 -Jun 12 22:07:03.225: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9341 pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' -Jun 12 22:07:03.980: INFO: Got root ca configmap in namespace "svcaccounts-9341" -[AfterEach] [sig-auth] ServiceAccounts +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 +Jul 27 02:42:06.964: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Jul 27 02:42:06.989: INFO: Pod name sample-pod: Found 0 pods out of 1 +Jul 27 02:42:12.003: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 07/27/23 02:42:12.003 +Jul 27 02:42:12.004: INFO: Creating deployment "test-rolling-update-deployment" +Jul 27 02:42:12.017: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Jul 27 02:42:12.037: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Jul 27 02:42:14.057: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Jul 27 02:42:14.073: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jul 27 02:42:14.101: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-43 0331e6af-67d5-461f-86a3-e3ed29dcb4c7 114796 1 2023-07-27 02:42:12 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-07-27 02:42:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:42:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004a143c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-07-27 02:42:12 +0000 UTC,LastTransitionTime:2023-07-27 02:42:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-7549d9f46d" has successfully progressed.,LastUpdateTime:2023-07-27 02:42:13 +0000 UTC,LastTransitionTime:2023-07-27 02:42:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Jul 27 02:42:14.111: INFO: New ReplicaSet "test-rolling-update-deployment-7549d9f46d" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-7549d9f46d deployment-43 bed5c664-6909-45c4-9c74-517a33577495 114786 1 2023-07-27 02:42:12 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 0331e6af-67d5-461f-86a3-e3ed29dcb4c7 0xc004a148a7 0xc004a148a8}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:42:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0331e6af-67d5-461f-86a3-e3ed29dcb4c7\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:42:13 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 7549d9f46d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004a14958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jul 27 02:42:14.111: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Jul 27 02:42:14.111: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-43 6476beb1-6399-4714-b4e8-8af24c17861b 114795 2 2023-07-27 02:42:06 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 0331e6af-67d5-461f-86a3-e3ed29dcb4c7 0xc004a14777 0xc004a14778}] [] [{e2e.test Update apps/v1 2023-07-27 02:42:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:42:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0331e6af-67d5-461f-86a3-e3ed29dcb4c7\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:42:13 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004a14838 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jul 27 02:42:14.120: INFO: Pod "test-rolling-update-deployment-7549d9f46d-dbt8v" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-7549d9f46d-dbt8v test-rolling-update-deployment-7549d9f46d- deployment-43 1dec89f3-5c03-469d-824d-62662596a0c3 114785 0 2023-07-27 02:42:12 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[cni.projectcalico.org/containerID:3e522721fb2d6bd131eaec8142d802dff66e59def98ae0706fcc9c6dc5b0aefc cni.projectcalico.org/podIP:172.17.225.60/32 cni.projectcalico.org/podIPs:172.17.225.60/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.60" + ], + "default": true, + "dns": {} +}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-rolling-update-deployment-7549d9f46d bed5c664-6909-45c4-9c74-517a33577495 0xc004a14de7 0xc004a14de8}] [] [{calico Update v1 2023-07-27 02:42:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-07-27 02:42:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bed5c664-6909-45c4-9c74-517a33577495\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-07-27 02:42:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:42:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.60\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mmxnt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmxnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c61,c10,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChan
gePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-lqb8b,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:42:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:42:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:42:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:42:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.60,StartTime:2023-07-27 02:42:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:42:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://c700252ff0b2d97054da089478e5e1e3c550348048d62e1857305356b6091dcb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 -Jun 12 22:07:03.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +Jul 27 02:42:14.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +[DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +[DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 -STEP: Destroying namespace "svcaccounts-9341" for this suite. 06/12/23 22:07:04.007 +STEP: Destroying namespace "deployment-43" for this suite. 
07/27/23 02:42:14.134 ------------------------------ -• [SLOW TEST] [6.899 seconds] -[sig-auth] ServiceAccounts -test/e2e/auth/framework.go:23 - should mount an API token into pods [Conformance] - test/e2e/auth/service_accounts.go:78 +• [SLOW TEST] [7.271 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-auth] ServiceAccounts + [BeforeEach] [sig-apps] Deployment set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:06:57.13 - Jun 12 22:06:57.131: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename svcaccounts 06/12/23 22:06:57.134 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:06:57.193 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:06:57.207 - [BeforeEach] [sig-auth] ServiceAccounts + STEP: Creating a kubernetes client 07/27/23 02:42:06.887 + Jul 27 02:42:06.887: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename deployment 07/27/23 02:42:06.888 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:42:06.937 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:42:06.95 + [BeforeEach] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:31 - [It] should mount an API token into pods [Conformance] - test/e2e/auth/service_accounts.go:78 - Jun 12 22:06:57.274: INFO: Waiting up to 5m0s for pod "pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59" in namespace "svcaccounts-9341" to be "running" - Jun 12 22:06:57.290: INFO: Pod "pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59": Phase="Pending", Reason="", readiness=false. Elapsed: 15.589326ms - Jun 12 22:06:59.303: INFO: Pod "pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028851238s - Jun 12 22:07:01.309: INFO: Pod "pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.034053534s - Jun 12 22:07:01.309: INFO: Pod "pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59" satisfied condition "running" - STEP: reading a file in the container 06/12/23 22:07:01.309 - Jun 12 22:07:01.309: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9341 pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' - STEP: reading a file in the container 06/12/23 22:07:02.169 - Jun 12 22:07:02.170: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9341 pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' - STEP: reading a file in the container 06/12/23 22:07:03.224 - Jun 12 22:07:03.225: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9341 pod-service-account-7e708c11-1735-494b-ab82-dfae3d807a59 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' - Jun 12 22:07:03.980: INFO: Got root ca configmap in namespace "svcaccounts-9341" - [AfterEach] [sig-auth] ServiceAccounts + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 + Jul 27 02:42:06.964: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) + Jul 27 02:42:06.989: INFO: Pod name sample-pod: Found 0 pods out of 1 + Jul 27 02:42:12.003: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 07/27/23 02:42:12.003 + Jul 27 02:42:12.004: INFO: Creating deployment "test-rolling-update-deployment" + Jul 27 02:42:12.017: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has + Jul 27 02:42:12.037: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created + Jul 27 02:42:14.057: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected + Jul 27 02:42:14.073: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jul 27 02:42:14.101: INFO: Deployment "test-rolling-update-deployment": + &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-43 0331e6af-67d5-461f-86a3-e3ed29dcb4c7 114796 1 2023-07-27 02:42:12 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-07-27 02:42:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:42:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004a143c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-07-27 02:42:12 +0000 UTC,LastTransitionTime:2023-07-27 02:42:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-7549d9f46d" has successfully progressed.,LastUpdateTime:2023-07-27 02:42:13 +0000 UTC,LastTransitionTime:2023-07-27 02:42:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Jul 27 02:42:14.111: INFO: New ReplicaSet "test-rolling-update-deployment-7549d9f46d" of Deployment "test-rolling-update-deployment": + &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-7549d9f46d deployment-43 bed5c664-6909-45c4-9c74-517a33577495 114786 1 2023-07-27 02:42:12 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 0331e6af-67d5-461f-86a3-e3ed29dcb4c7 0xc004a148a7 0xc004a148a8}] [] [{kube-controller-manager Update apps/v1 2023-07-27 02:42:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0331e6af-67d5-461f-86a3-e3ed29dcb4c7\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:42:13 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 7549d9f46d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004a14958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Jul 27 02:42:14.111: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": + Jul 27 02:42:14.111: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-43 6476beb1-6399-4714-b4e8-8af24c17861b 114795 2 2023-07-27 02:42:06 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 0331e6af-67d5-461f-86a3-e3ed29dcb4c7 0xc004a14777 0xc004a14778}] [] [{e2e.test Update apps/v1 2023-07-27 02:42:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:42:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0331e6af-67d5-461f-86a3-e3ed29dcb4c7\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-07-27 02:42:13 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004a14838 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jul 27 02:42:14.120: INFO: Pod "test-rolling-update-deployment-7549d9f46d-dbt8v" is available: + &Pod{ObjectMeta:{test-rolling-update-deployment-7549d9f46d-dbt8v test-rolling-update-deployment-7549d9f46d- deployment-43 1dec89f3-5c03-469d-824d-62662596a0c3 114785 0 2023-07-27 02:42:12 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[cni.projectcalico.org/containerID:3e522721fb2d6bd131eaec8142d802dff66e59def98ae0706fcc9c6dc5b0aefc cni.projectcalico.org/podIP:172.17.225.60/32 cni.projectcalico.org/podIPs:172.17.225.60/32 k8s.v1.cni.cncf.io/network-status:[{ + "name": "k8s-pod-network", + "ips": [ + "172.17.225.60" + ], + "default": true, + "dns": {} + }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-rolling-update-deployment-7549d9f46d bed5c664-6909-45c4-9c74-517a33577495 0xc004a14de7 0xc004a14de8}] [] [{calico Update v1 2023-07-27 02:42:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-07-27 02:42:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bed5c664-6909-45c4-9c74-517a33577495\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-07-27 02:42:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-07-27 02:42:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.17.225.60\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mmxnt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmxnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.245.128.19,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c61,c10,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChan
gePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-lqb8b,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:42:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:42:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:42:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-07-27 02:42:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.245.128.19,PodIP:172.17.225.60,StartTime:2023-07-27 02:42:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-07-27 02:42:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://c700252ff0b2d97054da089478e5e1e3c550348048d62e1857305356b6091dcb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.17.225.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment test/e2e/framework/node/init/init.go:32 - Jun 12 22:07:03.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + Jul 27 02:42:14.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + [DeferCleanup (Each)] [sig-apps] Deployment dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + [DeferCleanup (Each)] [sig-apps] Deployment tear down framework | framework.go:193 - STEP: Destroying namespace "svcaccounts-9341" for this suite. 06/12/23 22:07:04.007 + STEP: Destroying namespace "deployment-43" for this suite. 
07/27/23 02:42:14.134 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSS ------------------------------ -[sig-node] NoExecuteTaintManager Multiple Pods [Serial] - evicts pods with minTolerationSeconds [Disruptive] [Conformance] - test/e2e/node/taints.go:455 -[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:61 +[BeforeEach] [sig-scheduling] LimitRange set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:07:04.031 -Jun 12 22:07:04.031: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename taint-multiple-pods 06/12/23 22:07:04.034 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:07:04.096 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:07:04.112 -[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] +STEP: Creating a kubernetes client 07/27/23 02:42:14.159 +Jul 27 02:42:14.159: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename limitrange 07/27/23 02:42:14.16 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:42:14.199 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:42:14.211 +[BeforeEach] [sig-scheduling] LimitRange test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] - test/e2e/node/taints.go:383 -Jun 12 22:07:04.134: INFO: Waiting up to 1m0s for all nodes to be ready -Jun 12 22:08:04.342: INFO: Waiting for terminating namespaces to be deleted... -[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] - test/e2e/node/taints.go:455 -Jun 12 22:08:04.369: INFO: Starting informer... -STEP: Starting pods... 06/12/23 22:08:04.369 -Jun 12 22:08:04.629: INFO: Pod1 is running on 10.138.75.70. Tainting Node -Jun 12 22:08:04.865: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-3989" to be "running" -Jun 12 22:08:04.879: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.659023ms -Jun 12 22:08:06.894: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028884407s -Jun 12 22:08:08.896: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 4.030626779s -Jun 12 22:08:08.896: INFO: Pod "taint-eviction-b1" satisfied condition "running" -Jun 12 22:08:08.896: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-3989" to be "running" -Jun 12 22:08:08.911: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 15.181975ms -Jun 12 22:08:08.911: INFO: Pod "taint-eviction-b2" satisfied condition "running" -Jun 12 22:08:08.911: INFO: Pod2 is running on 10.138.75.70. Tainting Node -STEP: Trying to apply a taint on the Node 06/12/23 22:08:08.911 -STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 06/12/23 22:08:08.975 -STEP: Waiting for Pod1 and Pod2 to be deleted 06/12/23 22:08:08.986 -Jun 12 22:08:18.420: INFO: Noticed Pod "taint-eviction-b1" gets evicted. -Jun 12 22:08:35.610: INFO: Noticed Pod "taint-eviction-b2" gets evicted. 
-STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 06/12/23 22:08:35.69 -[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:61 +STEP: Creating a LimitRange 07/27/23 02:42:14.224 +STEP: Setting up watch 07/27/23 02:42:14.224 +STEP: Submitting a LimitRange 07/27/23 02:42:14.346 +STEP: Verifying LimitRange creation was observed 07/27/23 02:42:14.364 +STEP: Fetching the LimitRange to ensure it has proper values 07/27/23 02:42:14.364 +Jul 27 02:42:14.378: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Jul 27 02:42:14.378: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements 07/27/23 02:42:14.378 +STEP: Ensuring Pod has resource requirements applied from LimitRange 07/27/23 02:42:14.395 +Jul 27 02:42:14.406: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Jul 27 02:42:14.406: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements 07/27/23 02:42:14.406 +STEP: Ensuring Pod has merged resource requirements applied from LimitRange 07/27/23 02:42:14.421 +Jul 27 02:42:14.445: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Jul 27 02:42:14.445: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources 07/27/23 02:42:14.445 +STEP: Failing to create a Pod with more than max resources 07/27/23 02:42:14.453 +STEP: Updating a LimitRange 07/27/23 02:42:14.461 +STEP: Verifying LimitRange updating is effective 07/27/23 02:42:14.478 +STEP: Creating a Pod with less than former min resources 07/27/23 02:42:16.492 +STEP: Failing to create a Pod with more than max resources 07/27/23 02:42:16.51 +STEP: Deleting a LimitRange 07/27/23 02:42:16.519 +STEP: Verifying the LimitRange was deleted 07/27/23 02:42:16.537 +Jul 27 02:42:21.552: INFO: 
limitRange is already deleted +STEP: Creating a Pod with more than former max resources 07/27/23 02:42:21.552 +[AfterEach] [sig-scheduling] LimitRange test/e2e/framework/node/init/init.go:32 -Jun 12 22:08:35.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] +Jul 27 02:42:21.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-scheduling] LimitRange test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] +[DeferCleanup (Each)] [sig-scheduling] LimitRange dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] +[DeferCleanup (Each)] [sig-scheduling] LimitRange tear down framework | framework.go:193 -STEP: Destroying namespace "taint-multiple-pods-3989" for this suite. 06/12/23 22:08:35.833 +STEP: Destroying namespace "limitrange-9546" for this suite. 07/27/23 02:42:21.593 ------------------------------ -• [SLOW TEST] [91.822 seconds] -[sig-node] NoExecuteTaintManager Multiple Pods [Serial] -test/e2e/node/framework.go:23 - evicts pods with minTolerationSeconds [Disruptive] [Conformance] - test/e2e/node/taints.go:455 +• [SLOW TEST] [7.458 seconds] +[sig-scheduling] LimitRange +test/e2e/scheduling/framework.go:40 + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:61 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + [BeforeEach] [sig-scheduling] LimitRange set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:07:04.031 - Jun 12 22:07:04.031: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename taint-multiple-pods 06/12/23 22:07:04.034 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:07:04.096 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:07:04.112 - [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + STEP: Creating a kubernetes client 07/27/23 02:42:14.159 + Jul 27 02:42:14.159: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename limitrange 07/27/23 02:42:14.16 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:42:14.199 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:42:14.211 + [BeforeEach] [sig-scheduling] LimitRange test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] - test/e2e/node/taints.go:383 - Jun 12 22:07:04.134: INFO: Waiting up to 1m0s for all nodes to be ready - Jun 12 22:08:04.342: INFO: Waiting for terminating namespaces to be deleted... - [It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] - test/e2e/node/taints.go:455 - Jun 12 22:08:04.369: INFO: Starting informer... - STEP: Starting pods... 06/12/23 22:08:04.369 - Jun 12 22:08:04.629: INFO: Pod1 is running on 10.138.75.70. Tainting Node - Jun 12 22:08:04.865: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-3989" to be "running" - Jun 12 22:08:04.879: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.659023ms - Jun 12 22:08:06.894: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028884407s - Jun 12 22:08:08.896: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 4.030626779s - Jun 12 22:08:08.896: INFO: Pod "taint-eviction-b1" satisfied condition "running" - Jun 12 22:08:08.896: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-3989" to be "running" - Jun 12 22:08:08.911: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 15.181975ms - Jun 12 22:08:08.911: INFO: Pod "taint-eviction-b2" satisfied condition "running" - Jun 12 22:08:08.911: INFO: Pod2 is running on 10.138.75.70. Tainting Node - STEP: Trying to apply a taint on the Node 06/12/23 22:08:08.911 - STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 06/12/23 22:08:08.975 - STEP: Waiting for Pod1 and Pod2 to be deleted 06/12/23 22:08:08.986 - Jun 12 22:08:18.420: INFO: Noticed Pod "taint-eviction-b1" gets evicted. - Jun 12 22:08:35.610: INFO: Noticed Pod "taint-eviction-b2" gets evicted. - STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 06/12/23 22:08:35.69 - [AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:61 + STEP: Creating a LimitRange 07/27/23 02:42:14.224 + STEP: Setting up watch 07/27/23 02:42:14.224 + STEP: Submitting a LimitRange 07/27/23 02:42:14.346 + STEP: Verifying LimitRange creation was observed 07/27/23 02:42:14.364 + STEP: Fetching the LimitRange to ensure it has proper values 07/27/23 02:42:14.364 + Jul 27 02:42:14.378: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] + Jul 27 02:42:14.378: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Creating a Pod with no resource requirements 07/27/23 02:42:14.378 + STEP: Ensuring Pod has resource requirements applied from LimitRange 07/27/23 02:42:14.395 + Jul 27 02:42:14.406: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] + Jul 27 02:42:14.406: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Creating a Pod with partial resource requirements 07/27/23 02:42:14.406 + STEP: Ensuring Pod has merged resource requirements applied from LimitRange 07/27/23 02:42:14.421 + Jul 27 02:42:14.445: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m 
DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] + Jul 27 02:42:14.445: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Failing to create a Pod with less than min resources 07/27/23 02:42:14.445 + STEP: Failing to create a Pod with more than max resources 07/27/23 02:42:14.453 + STEP: Updating a LimitRange 07/27/23 02:42:14.461 + STEP: Verifying LimitRange updating is effective 07/27/23 02:42:14.478 + STEP: Creating a Pod with less than former min resources 07/27/23 02:42:16.492 + STEP: Failing to create a Pod with more than max resources 07/27/23 02:42:16.51 + STEP: Deleting a LimitRange 07/27/23 02:42:16.519 + STEP: Verifying the LimitRange was deleted 07/27/23 02:42:16.537 + Jul 27 02:42:21.552: INFO: limitRange is already deleted + STEP: Creating a Pod with more than former max resources 07/27/23 02:42:21.552 + [AfterEach] [sig-scheduling] LimitRange test/e2e/framework/node/init/init.go:32 - Jun 12 22:08:35.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + Jul 27 02:42:21.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-scheduling] LimitRange test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + [DeferCleanup (Each)] [sig-scheduling] LimitRange dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + [DeferCleanup (Each)] [sig-scheduling] LimitRange tear down framework | framework.go:193 - STEP: Destroying namespace "taint-multiple-pods-3989" for this suite. 06/12/23 22:08:35.833 + STEP: Destroying namespace "limitrange-9546" for this suite. 
07/27/23 02:42:21.593 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-cli] Kubectl client Guestbook application - should create and stop a working application [Conformance] - test/e2e/kubectl/kubectl.go:394 -[BeforeEach] [sig-cli] Kubectl client +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 +[BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:08:35.88 -Jun 12 22:08:35.880: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 22:08:35.884 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:08:35.945 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:08:35.961 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:42:21.618 +Jul 27 02:42:21.618: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:42:21.619 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:42:21.662 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:42:21.674 +[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should create and stop a working application [Conformance] - test/e2e/kubectl/kubectl.go:394 -STEP: creating all guestbook components 06/12/23 22:08:36.005 -Jun 12 22:08:36.005: INFO: apiVersion: v1 -kind: Service -metadata: - name: agnhost-replica - labels: - app: agnhost - role: replica - tier: backend -spec: - ports: - - port: 6379 - selector: - app: agnhost - role: replica - tier: backend - -Jun 12 22:08:36.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' -Jun 12 22:08:41.717: INFO: stderr: "" -Jun 12 22:08:41.717: INFO: stdout: "service/agnhost-replica created\n" -Jun 12 22:08:41.717: INFO: apiVersion: v1 -kind: Service -metadata: - name: agnhost-primary - labels: - app: agnhost - role: primary - tier: backend -spec: - ports: - - port: 6379 - targetPort: 6379 - selector: - app: agnhost - role: primary - tier: backend - -Jun 12 22:08:41.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' -Jun 12 22:08:46.293: INFO: stderr: "" -Jun 12 22:08:46.293: INFO: stdout: "service/agnhost-primary created\n" -Jun 12 22:08:46.293: INFO: apiVersion: v1 -kind: Service -metadata: - name: frontend - labels: - app: guestbook - tier: frontend -spec: - # if your cluster supports it, uncomment the following to automatically create - # an external load-balanced IP for the frontend service. 
- # type: LoadBalancer - ports: - - port: 80 - selector: - app: guestbook - tier: frontend - -Jun 12 22:08:46.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' -Jun 12 22:08:49.168: INFO: stderr: "" -Jun 12 22:08:49.168: INFO: stdout: "service/frontend created\n" -Jun 12 22:08:49.169: INFO: apiVersion: apps/v1 -kind: Deployment -metadata: - name: frontend -spec: - replicas: 3 - selector: - matchLabels: - app: guestbook - tier: frontend - template: - metadata: - labels: - app: guestbook - tier: frontend - spec: - containers: - - name: guestbook-frontend - image: registry.k8s.io/e2e-test-images/agnhost:2.43 - args: [ "guestbook", "--backend-port", "6379" ] - resources: - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 80 - -Jun 12 22:08:49.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' -Jun 12 22:08:54.286: INFO: stderr: "" -Jun 12 22:08:54.286: INFO: stdout: "deployment.apps/frontend created\n" -Jun 12 22:08:54.286: INFO: apiVersion: apps/v1 -kind: Deployment -metadata: - name: agnhost-primary -spec: - replicas: 1 - selector: - matchLabels: - app: agnhost - role: primary - tier: backend - template: - metadata: - labels: - app: agnhost - role: primary - tier: backend - spec: - containers: - - name: primary - image: registry.k8s.io/e2e-test-images/agnhost:2.43 - args: [ "guestbook", "--http-port", "6379" ] - resources: - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 6379 - -Jun 12 22:08:54.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' -Jun 12 22:08:56.645: INFO: stderr: "" -Jun 12 22:08:56.645: INFO: stdout: "deployment.apps/agnhost-primary created\n" -Jun 12 22:08:56.645: INFO: apiVersion: apps/v1 -kind: Deployment -metadata: - name: agnhost-replica -spec: - replicas: 2 - selector: - matchLabels: - app: agnhost - role: replica - tier: backend - template: - metadata: - labels: - app: agnhost - role: replica - tier: backend - spec: - containers: - - name: replica - image: registry.k8s.io/e2e-test-images/agnhost:2.43 - args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] - resources: - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 6379 - -Jun 12 22:08:56.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' -Jun 12 22:09:01.689: INFO: stderr: "" -Jun 12 22:09:01.689: INFO: stdout: "deployment.apps/agnhost-replica created\n" -STEP: validating guestbook app 06/12/23 22:09:01.689 -Jun 12 22:09:01.690: INFO: Waiting for all frontend pods to be Running. -Jun 12 22:09:06.741: INFO: Waiting for frontend to serve content. -Jun 12 22:09:06.775: INFO: Trying to add a new entry to the guestbook. -Jun 12 22:09:06.807: INFO: Verifying that added entry can be retrieved. -STEP: using delete to clean up resources 06/12/23 22:09:06.853 -Jun 12 22:09:06.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' -Jun 12 22:09:07.161: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" -Jun 12 22:09:07.161: INFO: stdout: "service \"agnhost-replica\" force deleted\n" -STEP: using delete to clean up resources 06/12/23 22:09:07.161 -Jun 12 22:09:07.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' -Jun 12 22:09:07.594: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" -Jun 12 22:09:07.594: INFO: stdout: "service \"agnhost-primary\" force deleted\n" -STEP: using delete to clean up resources 06/12/23 22:09:07.595 -Jun 12 22:09:07.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' -Jun 12 22:09:08.108: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" -Jun 12 22:09:08.108: INFO: stdout: "service \"frontend\" force deleted\n" -STEP: using delete to clean up resources 06/12/23 22:09:08.108 -Jun 12 22:09:08.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' -Jun 12 22:09:09.441: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" -Jun 12 22:09:09.441: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" -STEP: using delete to clean up resources 06/12/23 22:09:09.442 -Jun 12 22:09:09.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' -Jun 12 22:09:12.218: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" -Jun 12 22:09:12.218: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" -STEP: using delete to clean up resources 06/12/23 22:09:12.218 -Jun 12 22:09:12.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' -Jun 12 22:09:13.450: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" -Jun 12 22:09:13.450: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" -[AfterEach] [sig-cli] Kubectl client +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 +STEP: Creating a pod to test downward API volume plugin 07/27/23 02:42:21.688 +W0727 02:42:21.721818 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:42:21.721: INFO: Waiting up to 5m0s for pod "downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1" in namespace "downward-api-2047" to be "Succeeded or Failed" +Jul 27 02:42:21.742: INFO: Pod "downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.920315ms +Jul 27 02:42:23.755: INFO: Pod "downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033723998s +Jul 27 02:42:25.753: INFO: Pod "downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031399873s +STEP: Saw pod success 07/27/23 02:42:25.753 +Jul 27 02:42:25.753: INFO: Pod "downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1" satisfied condition "Succeeded or Failed" +Jul 27 02:42:25.763: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1 container client-container: +STEP: delete the pod 07/27/23 02:42:25.782 +Jul 27 02:42:25.807: INFO: Waiting for pod downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1 to disappear +Jul 27 02:42:25.817: INFO: Pod downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1 no longer exists +[AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 22:09:13.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:42:25.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-5080" for this suite. 06/12/23 22:09:13.522 +STEP: Destroying namespace "downward-api-2047" for this suite. 
07/27/23 02:42:25.835 ------------------------------ -• [SLOW TEST] [37.702 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Guestbook application - test/e2e/kubectl/kubectl.go:369 - should create and stop a working application [Conformance] - test/e2e/kubectl/kubectl.go:394 +• [4.242 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:08:35.88 - Jun 12 22:08:35.880: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 22:08:35.884 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:08:35.945 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:08:35.961 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:42:21.618 + Jul 27 02:42:21.618: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:42:21.619 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:42:21.662 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:42:21.674 + [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should create and stop a working application [Conformance] - test/e2e/kubectl/kubectl.go:394 - STEP: creating all guestbook components 06/12/23 22:08:36.005 - Jun 12 22:08:36.005: INFO: apiVersion: v1 - kind: Service - metadata: - name: agnhost-replica - labels: - app: agnhost - role: replica - tier: backend - spec: - ports: - - port: 6379 - selector: - app: agnhost - role: replica - tier: backend - - Jun 12 22:08:36.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' - Jun 12 22:08:41.717: INFO: stderr: "" - Jun 12 22:08:41.717: INFO: stdout: "service/agnhost-replica created\n" - Jun 12 22:08:41.717: INFO: apiVersion: v1 - kind: Service - metadata: - name: agnhost-primary - labels: - app: agnhost - role: primary - tier: backend - spec: - ports: - - port: 6379 - targetPort: 6379 - selector: - app: agnhost - role: primary - tier: backend + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 + STEP: Creating a pod to test downward API volume plugin 07/27/23 02:42:21.688 + W0727 02:42:21.721818 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:42:21.721: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1" in namespace "downward-api-2047" to be "Succeeded or Failed" + Jul 27 02:42:21.742: INFO: Pod "downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.920315ms + Jul 27 02:42:23.755: INFO: Pod "downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033723998s + Jul 27 02:42:25.753: INFO: Pod "downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031399873s + STEP: Saw pod success 07/27/23 02:42:25.753 + Jul 27 02:42:25.753: INFO: Pod "downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1" satisfied condition "Succeeded or Failed" + Jul 27 02:42:25.763: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1 container client-container: + STEP: delete the pod 07/27/23 02:42:25.782 + Jul 27 02:42:25.807: INFO: Waiting for pod downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1 to disappear + Jul 27 02:42:25.817: INFO: Pod downwardapi-volume-256561e7-4655-4e74-8877-c82b56ad2af1 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jul 27 02:42:25.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-2047" for this suite. 07/27/23 02:42:25.835 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:42:25.86 +Jul 27 02:42:25.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:42:25.861 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:42:25.903 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:42:25.915 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation 07/27/23 02:42:25.927 +Jul 27 02:42:25.928: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:42:33.723: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jul 27 02:43:01.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-612" for this suite. 07/27/23 02:43:01.798 +------------------------------ +• [SLOW TEST] [35.960 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 - Jun 12 22:08:41.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' - Jun 12 22:08:46.293: INFO: stderr: "" - Jun 12 22:08:46.293: INFO: stdout: "service/agnhost-primary created\n" - Jun 12 22:08:46.293: INFO: apiVersion: v1 - kind: Service - metadata: - name: frontend - labels: - app: guestbook - tier: frontend - spec: - # if your cluster supports it, uncomment the following to automatically create - # an external load-balanced IP for the frontend service. - # type: LoadBalancer - ports: - - port: 80 - selector: - app: guestbook - tier: frontend + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:42:25.86 + Jul 27 02:42:25.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:42:25.861 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:42:25.903 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:42:25.915 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 + STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation 07/27/23 02:42:25.927 + Jul 27 02:42:25.928: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:42:33.723: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jul 27 02:43:01.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-612" for this suite. 
07/27/23 02:43:01.798 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 +[BeforeEach] [sig-node] Containers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:43:01.821 +Jul 27 02:43:01.821: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename containers 07/27/23 02:43:01.822 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:01.875 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:01.885 +[BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 +STEP: Creating a pod to test override all 07/27/23 02:43:01.894 +Jul 27 02:43:01.928: INFO: Waiting up to 5m0s for pod "client-containers-e5a1492a-3c19-49ce-9533-13998604923a" in namespace "containers-1193" to be "Succeeded or Failed" +Jul 27 02:43:01.936: INFO: Pod "client-containers-e5a1492a-3c19-49ce-9533-13998604923a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.875718ms +Jul 27 02:43:03.950: INFO: Pod "client-containers-e5a1492a-3c19-49ce-9533-13998604923a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02187455s +Jul 27 02:43:05.947: INFO: Pod "client-containers-e5a1492a-3c19-49ce-9533-13998604923a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019094322s +STEP: Saw pod success 07/27/23 02:43:05.947 +Jul 27 02:43:05.947: INFO: Pod "client-containers-e5a1492a-3c19-49ce-9533-13998604923a" satisfied condition "Succeeded or Failed" +Jul 27 02:43:05.957: INFO: Trying to get logs from node 10.245.128.19 pod client-containers-e5a1492a-3c19-49ce-9533-13998604923a container agnhost-container: +STEP: delete the pod 07/27/23 02:43:06.001 +Jul 27 02:43:06.026: INFO: Waiting for pod client-containers-e5a1492a-3c19-49ce-9533-13998604923a to disappear +Jul 27 02:43:06.048: INFO: Pod client-containers-e5a1492a-3c19-49ce-9533-13998604923a no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 +Jul 27 02:43:06.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 +STEP: Destroying namespace "containers-1193" for this suite. 
07/27/23 02:43:06.063 +------------------------------ +• [4.279 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 - Jun 12 22:08:46.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' - Jun 12 22:08:49.168: INFO: stderr: "" - Jun 12 22:08:49.168: INFO: stdout: "service/frontend created\n" - Jun 12 22:08:49.169: INFO: apiVersion: apps/v1 - kind: Deployment - metadata: - name: frontend - spec: - replicas: 3 - selector: - matchLabels: - app: guestbook - tier: frontend - template: - metadata: - labels: - app: guestbook - tier: frontend - spec: - containers: - - name: guestbook-frontend - image: registry.k8s.io/e2e-test-images/agnhost:2.43 - args: [ "guestbook", "--backend-port", "6379" ] - resources: - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 80 + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:43:01.821 + Jul 27 02:43:01.821: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename containers 07/27/23 02:43:01.822 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:01.875 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:01.885 + [BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 + STEP: Creating a pod to test override all 07/27/23 02:43:01.894 + Jul 27 02:43:01.928: INFO: Waiting up to 5m0s for pod "client-containers-e5a1492a-3c19-49ce-9533-13998604923a" in namespace "containers-1193" to be "Succeeded or Failed" + Jul 27 02:43:01.936: INFO: Pod "client-containers-e5a1492a-3c19-49ce-9533-13998604923a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.875718ms + Jul 27 02:43:03.950: INFO: Pod "client-containers-e5a1492a-3c19-49ce-9533-13998604923a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02187455s + Jul 27 02:43:05.947: INFO: Pod "client-containers-e5a1492a-3c19-49ce-9533-13998604923a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019094322s + STEP: Saw pod success 07/27/23 02:43:05.947 + Jul 27 02:43:05.947: INFO: Pod "client-containers-e5a1492a-3c19-49ce-9533-13998604923a" satisfied condition "Succeeded or Failed" + Jul 27 02:43:05.957: INFO: Trying to get logs from node 10.245.128.19 pod client-containers-e5a1492a-3c19-49ce-9533-13998604923a container agnhost-container: + STEP: delete the pod 07/27/23 02:43:06.001 + Jul 27 02:43:06.026: INFO: Waiting for pod client-containers-e5a1492a-3c19-49ce-9533-13998604923a to disappear + Jul 27 02:43:06.048: INFO: Pod client-containers-e5a1492a-3c19-49ce-9533-13998604923a no longer exists + [AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 + Jul 27 02:43:06.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 + STEP: Destroying namespace "containers-1193" for this suite. 
07/27/23 02:43:06.063 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:43:06.1 +Jul 27 02:43:06.100: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 02:43:06.1 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:06.224 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:06.236 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 +STEP: Creating configMap with name configmap-test-volume-map-bb36248e-16c3-4c53-abe2-5d2bc22ad8d0 07/27/23 02:43:06.246 +STEP: Creating a pod to test consume configMaps 07/27/23 02:43:06.272 +Jul 27 02:43:06.312: INFO: Waiting up to 5m0s for pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6" in namespace "configmap-5557" to be "Succeeded or Failed" +Jul 27 02:43:06.322: INFO: Pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.363115ms +Jul 27 02:43:08.333: INFO: Pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0209995s +Jul 27 02:43:10.335: INFO: Pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022843248s +Jul 27 02:43:12.335: INFO: Pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022831139s +STEP: Saw pod success 07/27/23 02:43:12.335 +Jul 27 02:43:12.335: INFO: Pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6" satisfied condition "Succeeded or Failed" +Jul 27 02:43:12.345: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6 container agnhost-container: +STEP: delete the pod 07/27/23 02:43:12.368 +Jul 27 02:43:12.412: INFO: Waiting for pod pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6 to disappear +Jul 27 02:43:12.429: INFO: Pod pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jul 27 02:43:12.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-5557" for this suite. 
07/27/23 02:43:12.473 +------------------------------ +• [SLOW TEST] [6.411 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 - Jun 12 22:08:49.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' - Jun 12 22:08:54.286: INFO: stderr: "" - Jun 12 22:08:54.286: INFO: stdout: "deployment.apps/frontend created\n" - Jun 12 22:08:54.286: INFO: apiVersion: apps/v1 - kind: Deployment - metadata: - name: agnhost-primary - spec: - replicas: 1 - selector: - matchLabels: - app: agnhost - role: primary - tier: backend - template: - metadata: - labels: - app: agnhost - role: primary - tier: backend - spec: - containers: - - name: primary - image: registry.k8s.io/e2e-test-images/agnhost:2.43 - args: [ "guestbook", "--http-port", "6379" ] - resources: - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 6379 + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:43:06.1 + Jul 27 02:43:06.100: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 02:43:06.1 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:06.224 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:06.236 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 + STEP: Creating configMap with name configmap-test-volume-map-bb36248e-16c3-4c53-abe2-5d2bc22ad8d0 07/27/23 02:43:06.246 + STEP: Creating a pod to test consume configMaps 07/27/23 02:43:06.272 + Jul 27 02:43:06.312: INFO: Waiting up to 5m0s for pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6" in namespace "configmap-5557" to be "Succeeded or Failed" + Jul 27 02:43:06.322: INFO: Pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.363115ms + Jul 27 02:43:08.333: INFO: Pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0209995s + Jul 27 02:43:10.335: INFO: Pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022843248s + Jul 27 02:43:12.335: INFO: Pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.022831139s + STEP: Saw pod success 07/27/23 02:43:12.335 + Jul 27 02:43:12.335: INFO: Pod "pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6" satisfied condition "Succeeded or Failed" + Jul 27 02:43:12.345: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6 container agnhost-container: + STEP: delete the pod 07/27/23 02:43:12.368 + Jul 27 02:43:12.412: INFO: Waiting for pod pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6 to disappear + Jul 27 02:43:12.429: INFO: Pod pod-configmaps-220ac92f-54f0-4a32-9d7d-9bf5abd7e9a6 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jul 27 02:43:12.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-5557" for this suite. 07/27/23 02:43:12.473 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 +[BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:43:12.511 +Jul 27 02:43:12.512: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename endpointslice 07/27/23 02:43:12.512 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:12.573 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:12.587 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 +Jul 27 02:43:13.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 +STEP: Destroying namespace "endpointslice-6346" for this suite. 
07/27/23 02:43:13.067 +------------------------------ +• [0.632 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 - Jun 12 22:08:54.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' - Jun 12 22:08:56.645: INFO: stderr: "" - Jun 12 22:08:56.645: INFO: stdout: "deployment.apps/agnhost-primary created\n" - Jun 12 22:08:56.645: INFO: apiVersion: apps/v1 - kind: Deployment - metadata: - name: agnhost-replica - spec: - replicas: 2 - selector: - matchLabels: - app: agnhost - role: replica - tier: backend - template: - metadata: - labels: - app: agnhost - role: replica - tier: backend - spec: - containers: - - name: replica - image: registry.k8s.io/e2e-test-images/agnhost:2.43 - args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] - resources: - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 6379 + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:43:12.511 + Jul 27 02:43:12.512: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename endpointslice 07/27/23 02:43:12.512 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:12.573 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:12.587 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 + Jul 27 02:43:13.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 + STEP: Destroying namespace "endpointslice-6346" for this suite. 
07/27/23 02:43:13.067 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:43:13.144 +Jul 27 02:43:13.144: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:43:13.145 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:13.205 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:13.215 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 +Jul 27 02:43:13.226: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: kubectl validation (kubectl create and apply) allows request with known and required properties 07/27/23 02:43:20.824 +Jul 27 02:43:20.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 create -f -' +Jul 27 02:43:22.293: INFO: stderr: "" +Jul 27 02:43:22.293: INFO: stdout: "e2e-test-crd-publish-openapi-9475-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Jul 27 02:43:22.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 delete e2e-test-crd-publish-openapi-9475-crds test-foo' +Jul 27 02:43:22.486: INFO: stderr: "" +Jul 27 02:43:22.486: INFO: stdout: "e2e-test-crd-publish-openapi-9475-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Jul 27 02:43:22.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 apply -f -' +Jul 27 02:43:22.895: INFO: stderr: "" +Jul 27 02:43:22.895: INFO: stdout: "e2e-test-crd-publish-openapi-9475-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Jul 27 02:43:22.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 delete e2e-test-crd-publish-openapi-9475-crds test-foo' +Jul 27 02:43:23.078: INFO: stderr: "" +Jul 27 02:43:23.078: INFO: stdout: "e2e-test-crd-publish-openapi-9475-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values 07/27/23 02:43:23.078 +Jul 27 02:43:23.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 create -f -' +Jul 27 02:43:25.857: INFO: rc: 1 +STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema 07/27/23 02:43:25.857 +Jul 27 02:43:25.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 create -f -' +Jul 27 02:43:26.328: INFO: rc: 
1 +Jul 27 02:43:26.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 apply -f -' +Jul 27 02:43:26.753: INFO: rc: 1 +STEP: kubectl validation (kubectl create and apply) rejects request without required properties 07/27/23 02:43:26.753 +Jul 27 02:43:26.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 create -f -' +Jul 27 02:43:27.159: INFO: rc: 1 +Jul 27 02:43:27.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 apply -f -' +Jul 27 02:43:27.562: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties 07/27/23 02:43:27.562 +Jul 27 02:43:27.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 explain e2e-test-crd-publish-openapi-9475-crds' +Jul 27 02:43:27.949: INFO: stderr: "" +Jul 27 02:43:27.950: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9475-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively 07/27/23 02:43:27.95 +Jul 27 02:43:27.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 explain e2e-test-crd-publish-openapi-9475-crds.metadata' +Jul 27 02:43:28.339: INFO: stderr: "" +Jul 27 02:43:28.339: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9475-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Jul 27 02:43:28.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 explain e2e-test-crd-publish-openapi-9475-crds.spec' +Jul 27 02:43:31.023: INFO: stderr: "" +Jul 27 02:43:31.023: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9475-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Jul 27 02:43:31.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 explain e2e-test-crd-publish-openapi-9475-crds.spec.bars' +Jul 27 02:43:31.409: INFO: stderr: "" +Jul 27 02:43:31.409: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9475-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist 07/27/23 02:43:31.409 +Jul 27 02:43:31.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 explain e2e-test-crd-publish-openapi-9475-crds.spec.bars2' +Jul 27 02:43:31.783: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jul 27 02:43:39.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-6243" for this suite. 07/27/23 02:43:39.763 +------------------------------ +• [SLOW TEST] [26.639 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 - Jun 12 22:08:56.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 create -f -' - Jun 12 22:09:01.689: INFO: stderr: "" - Jun 12 22:09:01.689: INFO: stdout: "deployment.apps/agnhost-replica created\n" - STEP: validating guestbook app 06/12/23 22:09:01.689 - Jun 12 22:09:01.690: INFO: Waiting for all frontend pods to be Running. - Jun 12 22:09:06.741: INFO: Waiting for frontend to serve content. - Jun 12 22:09:06.775: INFO: Trying to add a new entry to the guestbook. - Jun 12 22:09:06.807: INFO: Verifying that added entry can be retrieved. - STEP: using delete to clean up resources 06/12/23 22:09:06.853 - Jun 12 22:09:06.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' - Jun 12 22:09:07.161: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" - Jun 12 22:09:07.161: INFO: stdout: "service \"agnhost-replica\" force deleted\n" - STEP: using delete to clean up resources 06/12/23 22:09:07.161 - Jun 12 22:09:07.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' - Jun 12 22:09:07.594: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" - Jun 12 22:09:07.594: INFO: stdout: "service \"agnhost-primary\" force deleted\n" - STEP: using delete to clean up resources 06/12/23 22:09:07.595 - Jun 12 22:09:07.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' - Jun 12 22:09:08.108: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" - Jun 12 22:09:08.108: INFO: stdout: "service \"frontend\" force deleted\n" - STEP: using delete to clean up resources 06/12/23 22:09:08.108 - Jun 12 22:09:08.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' - Jun 12 22:09:09.441: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" - Jun 12 22:09:09.441: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" - STEP: using delete to clean up resources 06/12/23 22:09:09.442 - Jun 12 22:09:09.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' - Jun 12 22:09:12.218: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" - Jun 12 22:09:12.218: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" - STEP: using delete to clean up resources 06/12/23 22:09:12.218 - Jun 12 22:09:12.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-5080 delete --grace-period=0 --force -f -' - Jun 12 22:09:13.450: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" - Jun 12 22:09:13.450: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" - [AfterEach] [sig-cli] Kubectl client + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 02:43:13.144 + Jul 27 02:43:13.144: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:43:13.145 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:13.205 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:13.215 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 + Jul 27 02:43:13.226: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: kubectl validation (kubectl create and apply) allows request with known and required properties 07/27/23 02:43:20.824 + Jul 27 02:43:20.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 create -f -' + Jul 27 02:43:22.293: INFO: stderr: "" + Jul 27 02:43:22.293: INFO: stdout: "e2e-test-crd-publish-openapi-9475-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" + Jul 27 02:43:22.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 delete e2e-test-crd-publish-openapi-9475-crds test-foo' + Jul 27 02:43:22.486: INFO: stderr: "" + Jul 27 02:43:22.486: INFO: stdout: "e2e-test-crd-publish-openapi-9475-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" + Jul 27 02:43:22.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 apply -f -' + Jul 27 02:43:22.895: INFO: stderr: "" + Jul 27 02:43:22.895: INFO: stdout: "e2e-test-crd-publish-openapi-9475-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" + Jul 27 02:43:22.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 delete e2e-test-crd-publish-openapi-9475-crds test-foo' + Jul 27 02:43:23.078: INFO: stderr: "" + Jul 27 02:43:23.078: INFO: stdout: "e2e-test-crd-publish-openapi-9475-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" + STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values 07/27/23 02:43:23.078 + Jul 27 02:43:23.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 create -f -' + Jul 27 02:43:25.857: INFO: rc: 1 + STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema 07/27/23 02:43:25.857 + Jul 27 02:43:25.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 create -f -' + Jul 27 02:43:26.328: INFO: rc: 1 + Jul 27 02:43:26.328: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 apply -f -' + Jul 27 02:43:26.753: INFO: rc: 1 + STEP: kubectl validation (kubectl create and apply) rejects request without required properties 07/27/23 02:43:26.753 + Jul 27 02:43:26.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 create -f -' + Jul 27 02:43:27.159: INFO: rc: 1 + Jul 27 02:43:27.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 --namespace=crd-publish-openapi-6243 apply -f -' + Jul 27 02:43:27.562: INFO: rc: 1 + STEP: kubectl explain works to explain CR properties 07/27/23 02:43:27.562 + Jul 27 02:43:27.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 explain e2e-test-crd-publish-openapi-9475-crds' + Jul 27 02:43:27.949: INFO: stderr: "" + Jul 27 02:43:27.950: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9475-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" + STEP: kubectl explain works to explain CR properties recursively 07/27/23 02:43:27.95 + Jul 27 02:43:27.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 explain e2e-test-crd-publish-openapi-9475-crds.metadata' + Jul 27 02:43:28.339: INFO: stderr: "" + Jul 27 02:43:28.339: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9475-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" + Jul 27 02:43:28.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 explain e2e-test-crd-publish-openapi-9475-crds.spec' + Jul 27 02:43:31.023: INFO: stderr: "" + Jul 27 02:43:31.023: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9475-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" + Jul 27 02:43:31.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 explain e2e-test-crd-publish-openapi-9475-crds.spec.bars' + Jul 27 02:43:31.409: INFO: stderr: "" + Jul 27 02:43:31.409: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9475-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" + STEP: kubectl explain works to return error when explain is called on property that doesn't exist 07/27/23 02:43:31.409 + Jul 27 02:43:31.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-6243 explain e2e-test-crd-publish-openapi-9475-crds.spec.bars2' + Jul 27 02:43:31.783: INFO: rc: 1 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:09:13.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:43:39.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-5080" for this suite. 06/12/23 22:09:13.522 + STEP: Destroying namespace "crd-publish-openapi-6243" for this suite. 
07/27/23 02:43:39.763 << End Captured GinkgoWriter Output ------------------------------ -SSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-cli] Kubectl client Kubectl run pod - should create a pod from an image when restart is Never [Conformance] - test/e2e/kubectl/kubectl.go:1713 -[BeforeEach] [sig-cli] Kubectl client +[sig-network] DNS + should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 +[BeforeEach] [sig-network] DNS set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:09:13.583 -Jun 12 22:09:13.583: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 22:09:13.588 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:09:13.737 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:09:13.769 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:43:39.785 +Jul 27 02:43:39.785: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename dns 07/27/23 02:43:39.786 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:39.831 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:39.841 +[BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[BeforeEach] Kubectl run pod - test/e2e/kubectl/kubectl.go:1700 -[It] should create a pod from an image when restart is Never [Conformance] - test/e2e/kubectl/kubectl.go:1713 -STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 06/12/23 22:09:13.801 -Jun 12 22:09:13.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3657 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4' -Jun 12 22:09:15.983: INFO: stderr: "" -Jun 12 22:09:15.983: INFO: stdout: "pod/e2e-test-httpd-pod created\n" -STEP: verifying the pod e2e-test-httpd-pod was created 06/12/23 22:09:15.983 -[AfterEach] Kubectl run pod - test/e2e/kubectl/kubectl.go:1704 -Jun 12 22:09:16.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3657 delete pods e2e-test-httpd-pod' -Jun 12 22:09:22.114: INFO: stderr: "" -Jun 12 22:09:22.114: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" -[AfterEach] [sig-cli] Kubectl client +[It] should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 +STEP: Creating a test headless service 07/27/23 02:43:39.856 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 178.127.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.127.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.127.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.127.178_tcp@PTR;sleep 1; done + 07/27/23 02:43:39.917 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 178.127.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.127.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.127.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.127.178_tcp@PTR;sleep 1; done + 07/27/23 02:43:39.917 +STEP: creating a pod to probe DNS 07/27/23 02:43:39.917 +STEP: submitting the pod to kubernetes 07/27/23 02:43:39.917 +Jul 27 02:43:39.952: INFO: Waiting up to 15m0s for pod "dns-test-cfaacfcf-3426-46c7-a640-b7f228383212" in namespace "dns-9451" to be "running" +Jul 27 02:43:39.966: INFO: Pod "dns-test-cfaacfcf-3426-46c7-a640-b7f228383212": Phase="Pending", Reason="", readiness=false. Elapsed: 13.15721ms +Jul 27 02:43:41.982: INFO: Pod "dns-test-cfaacfcf-3426-46c7-a640-b7f228383212": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.029397931s +Jul 27 02:43:41.982: INFO: Pod "dns-test-cfaacfcf-3426-46c7-a640-b7f228383212" satisfied condition "running" +STEP: retrieving the pod 07/27/23 02:43:41.982 +STEP: looking for the results for each expected name from probers 07/27/23 02:43:42.011 +Jul 27 02:43:42.074: INFO: Unable to read wheezy_udp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) +Jul 27 02:43:42.110: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) +Jul 27 02:43:42.126: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) +Jul 27 02:43:42.150: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) +Jul 27 02:43:42.262: INFO: Unable to read jessie_udp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) +Jul 27 02:43:42.279: INFO: Unable to read jessie_tcp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) +Jul 27 02:43:42.298: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) +Jul 27 02:43:42.315: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) +Jul 27 02:43:42.395: INFO: Lookups using dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212 failed for: [wheezy_udp@dns-test-service.dns-9451.svc.cluster.local wheezy_tcp@dns-test-service.dns-9451.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local jessie_udp@dns-test-service.dns-9451.svc.cluster.local jessie_tcp@dns-test-service.dns-9451.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local] + +Jul 27 02:43:47.711: INFO: DNS probes using dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212 succeeded + +STEP: deleting the pod 07/27/23 02:43:47.711 +STEP: deleting the test service 07/27/23 02:43:47.74 +STEP: deleting the test headless service 07/27/23 02:43:47.836 +[AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 -Jun 12 22:09:22.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 
02:43:47.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-3657" for this suite. 06/12/23 22:09:22.14 +STEP: Destroying namespace "dns-9451" for this suite. 07/27/23 02:43:47.912 ------------------------------ -• [SLOW TEST] [8.584 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl run pod - test/e2e/kubectl/kubectl.go:1697 - should create a pod from an image when restart is Never [Conformance] - test/e2e/kubectl/kubectl.go:1713 +• [SLOW TEST] [8.148 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-network] DNS set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:09:13.583 - Jun 12 22:09:13.583: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 22:09:13.588 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:09:13.737 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:09:13.769 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:43:39.785 + Jul 27 02:43:39.785: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename dns 07/27/23 02:43:39.786 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:39.831 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:39.841 + [BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [BeforeEach] Kubectl run pod - test/e2e/kubectl/kubectl.go:1700 - [It] should create a pod from an image when restart is Never [Conformance] - test/e2e/kubectl/kubectl.go:1713 - STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 06/12/23 22:09:13.801 - Jun 12 22:09:13.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3657 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4' - Jun 12 22:09:15.983: INFO: stderr: "" - Jun 12 22:09:15.983: INFO: stdout: "pod/e2e-test-httpd-pod created\n" - STEP: verifying the pod e2e-test-httpd-pod was created 06/12/23 22:09:15.983 - [AfterEach] Kubectl run pod - test/e2e/kubectl/kubectl.go:1704 - Jun 12 22:09:16.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3657 delete pods e2e-test-httpd-pod' - Jun 12 22:09:22.114: INFO: stderr: "" - Jun 12 22:09:22.114: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" - [AfterEach] [sig-cli] Kubectl client + [It] should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 + STEP: Creating a test headless service 07/27/23 02:43:39.856 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test 
-n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 178.127.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.127.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.127.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.127.178_tcp@PTR;sleep 1; done + 07/27/23 02:43:39.917 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 178.127.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.127.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.127.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.127.178_tcp@PTR;sleep 1; done + 07/27/23 02:43:39.917 + STEP: creating a pod to probe DNS 07/27/23 02:43:39.917 + STEP: submitting the pod to kubernetes 07/27/23 02:43:39.917 + Jul 27 02:43:39.952: INFO: Waiting up to 15m0s for pod "dns-test-cfaacfcf-3426-46c7-a640-b7f228383212" in namespace "dns-9451" to be "running" + Jul 27 02:43:39.966: INFO: Pod "dns-test-cfaacfcf-3426-46c7-a640-b7f228383212": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.15721ms + Jul 27 02:43:41.982: INFO: Pod "dns-test-cfaacfcf-3426-46c7-a640-b7f228383212": Phase="Running", Reason="", readiness=true. Elapsed: 2.029397931s + Jul 27 02:43:41.982: INFO: Pod "dns-test-cfaacfcf-3426-46c7-a640-b7f228383212" satisfied condition "running" + STEP: retrieving the pod 07/27/23 02:43:41.982 + STEP: looking for the results for each expected name from probers 07/27/23 02:43:42.011 + Jul 27 02:43:42.074: INFO: Unable to read wheezy_udp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) + Jul 27 02:43:42.110: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) + Jul 27 02:43:42.126: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) + Jul 27 02:43:42.150: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) + Jul 27 02:43:42.262: INFO: Unable to read jessie_udp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) + Jul 27 02:43:42.279: INFO: Unable to read jessie_tcp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) + Jul 27 02:43:42.298: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) + Jul 27 02:43:42.315: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212: the server could not find the requested resource (get pods dns-test-cfaacfcf-3426-46c7-a640-b7f228383212) + Jul 27 02:43:42.395: INFO: Lookups using dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212 failed for: [wheezy_udp@dns-test-service.dns-9451.svc.cluster.local wheezy_tcp@dns-test-service.dns-9451.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local jessie_udp@dns-test-service.dns-9451.svc.cluster.local jessie_tcp@dns-test-service.dns-9451.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local] + + Jul 27 02:43:47.711: INFO: DNS probes using dns-9451/dns-test-cfaacfcf-3426-46c7-a640-b7f228383212 succeeded + + STEP: deleting the pod 07/27/23 02:43:47.711 + STEP: deleting the test service 07/27/23 02:43:47.74 + STEP: deleting the test headless service 07/27/23 02:43:47.836 + [AfterEach] [sig-network] DNS 
test/e2e/framework/node/init/init.go:32 - Jun 12 22:09:22.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:43:47.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-3657" for this suite. 06/12/23 22:09:22.14 + STEP: Destroying namespace "dns-9451" for this suite. 07/27/23 02:43:47.912 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSS +SSSSSSSS ------------------------------ -[sig-cli] Kubectl client Kubectl cluster-info - should check if Kubernetes control plane services is included in cluster-info [Conformance] - test/e2e/kubectl/kubectl.go:1250 -[BeforeEach] [sig-cli] Kubectl client +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 +[BeforeEach] [sig-node] KubeletManagedEtcHosts set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:09:22.168 -Jun 12 22:09:22.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 22:09:22.174 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:09:22.24 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:09:22.251 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:43:47.933 +Jul 27 02:43:47.933: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts 07/27/23 02:43:47.934 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:47.986 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:47.998 +[BeforeEach] [sig-node] KubeletManagedEtcHosts test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] - test/e2e/kubectl/kubectl.go:1250 -STEP: validating cluster-info 06/12/23 22:09:22.277 -Jun 12 22:09:22.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-6891 cluster-info' -Jun 12 22:09:22.858: INFO: stderr: "" -Jun 12 22:09:22.858: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.21.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" -[AfterEach] [sig-cli] Kubectl client +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 +STEP: Setting up the test 07/27/23 02:43:48.01 +STEP: Creating hostNetwork=false pod 07/27/23 02:43:48.01 +Jul 27 02:43:48.076: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "e2e-kubelet-etc-hosts-2558" to be "running and ready" +Jul 27 02:43:48.093: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.656524ms +Jul 27 02:43:48.093: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:43:50.112: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.035799733s +Jul 27 02:43:50.112: INFO: The phase of Pod test-pod is Running (Ready = true) +Jul 27 02:43:50.112: INFO: Pod "test-pod" satisfied condition "running and ready" +STEP: Creating hostNetwork=true pod 07/27/23 02:43:50.123 +Jul 27 02:43:50.145: INFO: Waiting up to 5m0s for pod "test-host-network-pod" in namespace "e2e-kubelet-etc-hosts-2558" to be "running and ready" +Jul 27 02:43:50.156: INFO: Pod "test-host-network-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.941782ms +Jul 27 02:43:50.156: INFO: The phase of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:43:52.168: INFO: Pod "test-host-network-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.023439962s +Jul 27 02:43:52.168: INFO: The phase of Pod test-host-network-pod is Running (Ready = true) +Jul 27 02:43:52.168: INFO: Pod "test-host-network-pod" satisfied condition "running and ready" +STEP: Running the test 07/27/23 02:43:52.179 +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false 07/27/23 02:43:52.179 +Jul 27 02:43:52.179: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:43:52.179: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:43:52.180: INFO: ExecWithOptions: Clientset creation +Jul 27 02:43:52.180: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Jul 27 02:43:52.335: INFO: Exec stderr: "" +Jul 27 02:43:52.335: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:43:52.335: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:43:52.336: INFO: ExecWithOptions: Clientset creation +Jul 27 02:43:52.336: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Jul 27 02:43:52.477: INFO: Exec stderr: "" +Jul 27 02:43:52.477: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:43:52.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:43:52.477: INFO: ExecWithOptions: Clientset creation +Jul 27 02:43:52.477: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Jul 27 02:43:52.638: INFO: Exec stderr: "" +Jul 27 02:43:52.638: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false 
Quiet:false} +Jul 27 02:43:52.638: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:43:52.638: INFO: ExecWithOptions: Clientset creation +Jul 27 02:43:52.638: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Jul 27 02:43:52.787: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 07/27/23 02:43:52.787 +Jul 27 02:43:52.787: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:43:52.787: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:43:52.788: INFO: ExecWithOptions: Clientset creation +Jul 27 02:43:52.788: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) +Jul 27 02:43:52.903: INFO: Exec stderr: "" +Jul 27 02:43:52.903: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:43:52.903: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:43:52.904: INFO: ExecWithOptions: Clientset creation +Jul 27 02:43:52.904: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) +Jul 27 02:43:53.035: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true 07/27/23 02:43:53.035 +Jul 27 02:43:53.036: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:43:53.036: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:43:53.036: INFO: ExecWithOptions: Clientset creation +Jul 27 02:43:53.036: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Jul 27 02:43:53.192: INFO: Exec stderr: "" +Jul 27 02:43:53.192: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:43:53.192: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:43:53.193: INFO: ExecWithOptions: Clientset creation +Jul 27 02:43:53.193: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Jul 27 02:43:53.315: INFO: Exec stderr: "" +Jul 27 02:43:53.315: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-host-network-pod 
ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:43:53.315: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:43:53.315: INFO: ExecWithOptions: Clientset creation +Jul 27 02:43:53.315: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Jul 27 02:43:53.504: INFO: Exec stderr: "" +Jul 27 02:43:53.504: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:43:53.504: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:43:53.505: INFO: ExecWithOptions: Clientset creation +Jul 27 02:43:53.505: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Jul 27 02:43:53.655: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts test/e2e/framework/node/init/init.go:32 -Jun 12 22:09:22.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:43:53.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-6891" for this suite. 06/12/23 22:09:22.896 +STEP: Destroying namespace "e2e-kubelet-etc-hosts-2558" for this suite. 
07/27/23 02:43:53.676 ------------------------------ -• [0.755 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl cluster-info - test/e2e/kubectl/kubectl.go:1244 - should check if Kubernetes control plane services is included in cluster-info [Conformance] - test/e2e/kubectl/kubectl.go:1250 +• [SLOW TEST] [5.766 seconds] +[sig-node] KubeletManagedEtcHosts +test/e2e/common/node/framework.go:23 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-node] KubeletManagedEtcHosts set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:09:22.168 - Jun 12 22:09:22.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 22:09:22.174 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:09:22.24 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:09:22.251 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:43:47.933 + Jul 27 02:43:47.933: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts 07/27/23 02:43:47.934 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:47.986 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:47.998 + [BeforeEach] [sig-node] KubeletManagedEtcHosts test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] - test/e2e/kubectl/kubectl.go:1250 - STEP: validating cluster-info 06/12/23 22:09:22.277 - Jun 12 22:09:22.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-6891 cluster-info' - Jun 12 22:09:22.858: INFO: stderr: "" - Jun 12 22:09:22.858: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.21.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" - [AfterEach] [sig-cli] Kubectl client + [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 + STEP: Setting up the test 07/27/23 02:43:48.01 + STEP: Creating hostNetwork=false pod 07/27/23 02:43:48.01 + Jul 27 02:43:48.076: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "e2e-kubelet-etc-hosts-2558" to be "running and ready" + Jul 27 02:43:48.093: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 16.656524ms + Jul 27 02:43:48.093: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:43:50.112: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.035799733s + Jul 27 02:43:50.112: INFO: The phase of Pod test-pod is Running (Ready = true) + Jul 27 02:43:50.112: INFO: Pod "test-pod" satisfied condition "running and ready" + STEP: Creating hostNetwork=true pod 07/27/23 02:43:50.123 + Jul 27 02:43:50.145: INFO: Waiting up to 5m0s for pod "test-host-network-pod" in namespace "e2e-kubelet-etc-hosts-2558" to be "running and ready" + Jul 27 02:43:50.156: INFO: Pod "test-host-network-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.941782ms + Jul 27 02:43:50.156: INFO: The phase of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:43:52.168: INFO: Pod "test-host-network-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.023439962s + Jul 27 02:43:52.168: INFO: The phase of Pod test-host-network-pod is Running (Ready = true) + Jul 27 02:43:52.168: INFO: Pod "test-host-network-pod" satisfied condition "running and ready" + STEP: Running the test 07/27/23 02:43:52.179 + STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false 07/27/23 02:43:52.179 + Jul 27 02:43:52.179: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:43:52.179: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:43:52.180: INFO: ExecWithOptions: Clientset creation + Jul 27 02:43:52.180: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Jul 27 02:43:52.335: INFO: Exec stderr: "" + Jul 27 02:43:52.335: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:43:52.335: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:43:52.336: INFO: ExecWithOptions: Clientset creation + Jul 27 02:43:52.336: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Jul 27 02:43:52.477: INFO: Exec stderr: "" + Jul 27 02:43:52.477: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:43:52.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:43:52.477: INFO: ExecWithOptions: Clientset creation + Jul 27 02:43:52.477: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Jul 27 02:43:52.638: INFO: Exec stderr: "" + Jul 27 02:43:52.638: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:43:52.638: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:43:52.638: INFO: ExecWithOptions: Clientset creation + Jul 27 02:43:52.638: INFO: ExecWithOptions: 
execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Jul 27 02:43:52.787: INFO: Exec stderr: "" + STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 07/27/23 02:43:52.787 + Jul 27 02:43:52.787: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:43:52.787: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:43:52.788: INFO: ExecWithOptions: Clientset creation + Jul 27 02:43:52.788: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) + Jul 27 02:43:52.903: INFO: Exec stderr: "" + Jul 27 02:43:52.903: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:43:52.903: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:43:52.904: INFO: ExecWithOptions: Clientset creation + Jul 27 02:43:52.904: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) + Jul 27 02:43:53.035: INFO: Exec stderr: "" + STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true 07/27/23 02:43:53.035 + Jul 27 02:43:53.036: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:43:53.036: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:43:53.036: INFO: ExecWithOptions: Clientset creation + Jul 27 02:43:53.036: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Jul 27 02:43:53.192: INFO: Exec stderr: "" + Jul 27 02:43:53.192: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:43:53.192: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:43:53.193: INFO: ExecWithOptions: Clientset creation + Jul 27 02:43:53.193: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Jul 27 02:43:53.315: INFO: Exec stderr: "" + Jul 27 02:43:53.315: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:43:53.315: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 
+ Jul 27 02:43:53.315: INFO: ExecWithOptions: Clientset creation + Jul 27 02:43:53.315: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Jul 27 02:43:53.504: INFO: Exec stderr: "" + Jul 27 02:43:53.504: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2558 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:43:53.504: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:43:53.505: INFO: ExecWithOptions: Clientset creation + Jul 27 02:43:53.505: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2558/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Jul 27 02:43:53.655: INFO: Exec stderr: "" + [AfterEach] [sig-node] KubeletManagedEtcHosts test/e2e/framework/node/init/init.go:32 - Jun 12 22:09:22.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:43:53.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-6891" for this suite. 06/12/23 22:09:22.896 + STEP: Destroying namespace "e2e-kubelet-etc-hosts-2558" for this suite. 
07/27/23 02:43:53.676 << End Captured GinkgoWriter Output ------------------------------ -S ------------------------------- -[sig-api-machinery] Aggregator - Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] - test/e2e/apimachinery/aggregator.go:100 -[BeforeEach] [sig-api-machinery] Aggregator +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 +[BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:09:22.924 -Jun 12 22:09:22.924: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename aggregator 06/12/23 22:09:22.927 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:09:23.049 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:09:23.074 -[BeforeEach] [sig-api-machinery] Aggregator +STEP: Creating a kubernetes client 07/27/23 02:43:53.699 +Jul 27 02:43:53.699: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:43:53.7 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:53.746 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:53.756 +[BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] Aggregator - test/e2e/apimachinery/aggregator.go:78 -Jun 12 22:09:23.092: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] - test/e2e/apimachinery/aggregator.go:100 -STEP: Registering the sample API server. 
06/12/23 22:09:23.095 -Jun 12 22:09:25.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:27.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:29.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:31.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:33.921: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:35.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:37.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:39.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:41.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:43.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:45.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:47.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:49.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:51.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:54.392: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:56.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:58.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:09:59.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:01.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:04.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:05.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, 
time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:07.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:09.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:11.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:13.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, 
time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:15.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:17.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:19.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:21.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:23.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:25.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:27.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:29.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:31.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:10:34.335: INFO: Waited 239.26983ms for the sample-apiserver to be ready to handle requests. -I0612 22:10:35.730205 23 request.go:690] Waited for 1.013057155s due to client-side throttling, not priority and fairness, request: GET:https://172.21.0.1:443/apis/operators.coreos.com/v1 -STEP: Read Status for v1alpha1.wardle.example.com 06/12/23 22:10:35.861 -STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' 06/12/23 22:10:35.878 -STEP: List APIServices 06/12/23 22:10:35.915 -Jun 12 22:10:36.004: INFO: Found v1alpha1.wardle.example.com in APIServiceList -[AfterEach] [sig-api-machinery] Aggregator - test/e2e/apimachinery/aggregator.go:68 -[AfterEach] [sig-api-machinery] Aggregator +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 +STEP: Creating projection with secret that has name projected-secret-test-a4a5b4ad-d9c8-4d2d-8a0e-d9e6edda4ede 07/27/23 02:43:53.769 +STEP: Creating a pod to test consume secrets 07/27/23 02:43:53.789 +Jul 27 02:43:53.814: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa" in namespace "projected-6053" to be "Succeeded or Failed" +Jul 27 02:43:53.826: INFO: Pod "pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa": Phase="Pending", Reason="", readiness=false. Elapsed: 11.841432ms +Jul 27 02:43:55.836: INFO: Pod "pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022281476s +Jul 27 02:43:57.841: INFO: Pod "pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026640554s +STEP: Saw pod success 07/27/23 02:43:57.841 +Jul 27 02:43:57.841: INFO: Pod "pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa" satisfied condition "Succeeded or Failed" +Jul 27 02:43:57.853: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa container projected-secret-volume-test: +STEP: delete the pod 07/27/23 02:43:57.903 +Jul 27 02:43:57.931: INFO: Waiting for pod pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa to disappear +Jul 27 02:43:57.942: INFO: Pod pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa no longer exists +[AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 -Jun 12 22:10:36.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Aggregator +Jul 27 02:43:57.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Aggregator +[DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Aggregator +[DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 -STEP: Destroying namespace "aggregator-5076" for this suite. 06/12/23 22:10:36.745 +STEP: Destroying namespace "projected-6053" for this suite. 07/27/23 02:43:57.97 ------------------------------ -• [SLOW TEST] [73.925 seconds] -[sig-api-machinery] Aggregator -test/e2e/apimachinery/framework.go:23 - Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] - test/e2e/apimachinery/aggregator.go:100 +• [4.294 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Aggregator + [BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:09:22.924 - Jun 12 22:09:22.924: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename aggregator 06/12/23 22:09:22.927 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:09:23.049 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:09:23.074 - [BeforeEach] [sig-api-machinery] Aggregator + STEP: Creating a kubernetes client 07/27/23 02:43:53.699 + Jul 27 02:43:53.699: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:43:53.7 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:53.746 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:53.756 + [BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] Aggregator - test/e2e/apimachinery/aggregator.go:78 - Jun 12 22:09:23.092: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] - test/e2e/apimachinery/aggregator.go:100 - STEP: Registering the sample API server. 
06/12/23 22:09:23.095 - Jun 12 22:09:25.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:27.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:29.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:31.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:33.921: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:35.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:37.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:39.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:41.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:43.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:45.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:47.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:49.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:51.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:54.392: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:56.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:58.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:09:59.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:01.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:04.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:05.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:07.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:09.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:11.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:13.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:15.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:17.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:19.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:21.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:23.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:25.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:27.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:29.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:31.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 9, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:10:34.335: INFO: Waited 239.26983ms for the sample-apiserver to be ready to handle requests. - I0612 22:10:35.730205 23 request.go:690] Waited for 1.013057155s due to client-side throttling, not priority and fairness, request: GET:https://172.21.0.1:443/apis/operators.coreos.com/v1 - STEP: Read Status for v1alpha1.wardle.example.com 06/12/23 22:10:35.861 - STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' 06/12/23 22:10:35.878 - STEP: List APIServices 06/12/23 22:10:35.915 - Jun 12 22:10:36.004: INFO: Found v1alpha1.wardle.example.com in APIServiceList - [AfterEach] [sig-api-machinery] Aggregator - test/e2e/apimachinery/aggregator.go:68 - [AfterEach] [sig-api-machinery] Aggregator + [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 + STEP: Creating projection with secret that has name projected-secret-test-a4a5b4ad-d9c8-4d2d-8a0e-d9e6edda4ede 07/27/23 02:43:53.769 + STEP: Creating a pod to test consume secrets 07/27/23 02:43:53.789 + Jul 27 02:43:53.814: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa" in namespace "projected-6053" to be "Succeeded or Failed" + Jul 27 02:43:53.826: INFO: Pod "pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa": Phase="Pending", Reason="", readiness=false. Elapsed: 11.841432ms + Jul 27 02:43:55.836: INFO: Pod "pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022281476s + Jul 27 02:43:57.841: INFO: Pod "pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026640554s + STEP: Saw pod success 07/27/23 02:43:57.841 + Jul 27 02:43:57.841: INFO: Pod "pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa" satisfied condition "Succeeded or Failed" + Jul 27 02:43:57.853: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa container projected-secret-volume-test: + STEP: delete the pod 07/27/23 02:43:57.903 + Jul 27 02:43:57.931: INFO: Waiting for pod pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa to disappear + Jul 27 02:43:57.942: INFO: Pod pod-projected-secrets-9aa3110c-1e6a-4fc0-8f1d-f321905e97aa no longer exists + [AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 - Jun 12 22:10:36.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Aggregator + Jul 27 02:43:57.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Aggregator + [DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Aggregator + [DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 - STEP: Destroying namespace "aggregator-5076" for this suite. 06/12/23 22:10:36.745 + STEP: Destroying namespace "projected-6053" for this suite. 07/27/23 02:43:57.97 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSSSSSSSSSSSSS ------------------------------ -[sig-architecture] Conformance Tests - should have at least two untainted nodes [Conformance] - test/e2e/architecture/conformance.go:38 -[BeforeEach] [sig-architecture] Conformance Tests +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 +[BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:10:36.851 -Jun 12 22:10:36.851: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename conformance-tests 06/12/23 22:10:36.853 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:10:36.964 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:10:36.979 -[BeforeEach] [sig-architecture] Conformance Tests +STEP: Creating a kubernetes client 07/27/23 02:43:57.994 +Jul 27 02:43:57.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:43:57.995 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:58.051 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:58.066 +[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 -[It] should have at least two untainted nodes [Conformance] - test/e2e/architecture/conformance.go:38 -STEP: Getting node addresses 06/12/23 22:10:37 -Jun 12 22:10:37.000: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable -[AfterEach] [sig-architecture] Conformance Tests +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide node allocatable (memory) as default memory limit if the limit is not 
set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 +STEP: Creating a pod to test downward API volume plugin 07/27/23 02:43:58.086 +Jul 27 02:43:58.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987" in namespace "projected-387" to be "Succeeded or Failed" +Jul 27 02:43:58.136: INFO: Pod "downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987": Phase="Pending", Reason="", readiness=false. Elapsed: 15.855962ms +Jul 27 02:44:00.148: INFO: Pod "downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028427874s +Jul 27 02:44:02.156: INFO: Pod "downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036242089s +STEP: Saw pod success 07/27/23 02:44:02.156 +Jul 27 02:44:02.156: INFO: Pod "downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987" satisfied condition "Succeeded or Failed" +Jul 27 02:44:02.169: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987 container client-container: +STEP: delete the pod 07/27/23 02:44:02.204 +Jul 27 02:44:02.254: INFO: Waiting for pod downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987 to disappear +Jul 27 02:44:02.297: INFO: Pod downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 22:10:37.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-architecture] Conformance Tests +Jul 27 02:44:02.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-architecture] Conformance Tests +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-architecture] Conformance Tests +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "conformance-tests-3687" for this suite. 06/12/23 22:10:37.048 +STEP: Destroying namespace "projected-387" for this suite. 
07/27/23 02:44:02.379 ------------------------------ -• [0.232 seconds] -[sig-architecture] Conformance Tests -test/e2e/architecture/framework.go:23 - should have at least two untainted nodes [Conformance] - test/e2e/architecture/conformance.go:38 +• [4.440 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-architecture] Conformance Tests + [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:10:36.851 - Jun 12 22:10:36.851: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename conformance-tests 06/12/23 22:10:36.853 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:10:36.964 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:10:36.979 - [BeforeEach] [sig-architecture] Conformance Tests + STEP: Creating a kubernetes client 07/27/23 02:43:57.994 + Jul 27 02:43:57.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:43:57.995 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:43:58.051 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:43:58.066 + [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 - [It] should have at least two untainted nodes [Conformance] - test/e2e/architecture/conformance.go:38 - STEP: Getting node addresses 06/12/23 22:10:37 - Jun 12 22:10:37.000: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable - [AfterEach] [sig-architecture] Conformance Tests + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 + STEP: Creating a pod to test downward API volume plugin 07/27/23 02:43:58.086 + Jul 27 02:43:58.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987" in namespace "projected-387" to be "Succeeded or Failed" + Jul 27 02:43:58.136: INFO: Pod "downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987": Phase="Pending", Reason="", readiness=false. Elapsed: 15.855962ms + Jul 27 02:44:00.148: INFO: Pod "downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028427874s + Jul 27 02:44:02.156: INFO: Pod "downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036242089s + STEP: Saw pod success 07/27/23 02:44:02.156 + Jul 27 02:44:02.156: INFO: Pod "downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987" satisfied condition "Succeeded or Failed" + Jul 27 02:44:02.169: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987 container client-container: + STEP: delete the pod 07/27/23 02:44:02.204 + Jul 27 02:44:02.254: INFO: Waiting for pod downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987 to disappear + Jul 27 02:44:02.297: INFO: Pod downwardapi-volume-8b247869-5870-4024-bc47-646647d4d987 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 22:10:37.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-architecture] Conformance Tests + Jul 27 02:44:02.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-architecture] Conformance Tests + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-architecture] Conformance Tests + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "conformance-tests-3687" for this suite. 06/12/23 22:10:37.048 + STEP: Destroying namespace "projected-387" for this suite. 07/27/23 02:44:02.379 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] EmptyDir volumes - should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:167 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 +[BeforeEach] [sig-network] Ingress API set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:10:37.084 -Jun 12 22:10:37.084: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 22:10:37.101 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:10:37.198 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:10:37.216 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 02:44:02.435 +Jul 27 02:44:02.435: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename ingress 07/27/23 02:44:02.436 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:02.543 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:02.615 +[BeforeEach] [sig-network] Ingress API test/e2e/framework/metrics/init/init.go:31 -[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:167 -STEP: Creating a pod to test emptydir 0644 on node default medium 06/12/23 22:10:37.231 -Jun 12 22:10:37.265: INFO: Waiting up to 5m0s for pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644" in namespace "emptydir-2374" to be "Succeeded or Failed" -Jun 12 22:10:37.285: INFO: Pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.238552ms -Jun 12 22:10:39.301: INFO: Pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036113871s -Jun 12 22:10:41.301: INFO: Pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035924836s -Jun 12 22:10:43.319: INFO: Pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054233223s -STEP: Saw pod success 06/12/23 22:10:43.319 -Jun 12 22:10:43.325: INFO: Pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644" satisfied condition "Succeeded or Failed" -Jun 12 22:10:43.357: INFO: Trying to get logs from node 10.138.75.70 pod pod-95b8ce7c-17eb-404e-870d-498dd65ad644 container test-container: -STEP: delete the pod 06/12/23 22:10:43.435 -Jun 12 22:10:43.470: INFO: Waiting for pod pod-95b8ce7c-17eb-404e-870d-498dd65ad644 to disappear -Jun 12 22:10:43.483: INFO: Pod pod-95b8ce7c-17eb-404e-870d-498dd65ad644 no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[It] should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 +STEP: getting /apis 07/27/23 02:44:02.651 +STEP: getting /apis/networking.k8s.io 07/27/23 02:44:02.69 +STEP: getting /apis/networking.k8s.iov1 07/27/23 02:44:02.698 +STEP: creating 07/27/23 02:44:02.719 +STEP: getting 07/27/23 02:44:02.863 +STEP: listing 07/27/23 02:44:02.915 +STEP: watching 07/27/23 02:44:02.935 +Jul 27 02:44:02.935: INFO: starting watch +STEP: cluster-wide listing 07/27/23 02:44:02.946 +STEP: cluster-wide watching 07/27/23 02:44:02.971 +Jul 27 02:44:02.971: INFO: starting watch +STEP: patching 07/27/23 02:44:02.997 +STEP: updating 07/27/23 02:44:03.033 +Jul 27 02:44:03.139: INFO: waiting for watch events with expected annotations +Jul 27 02:44:03.139: INFO: saw patched and updated annotations +STEP: patching /status 07/27/23 02:44:03.139 +STEP: updating /status 07/27/23 02:44:03.157 +STEP: get /status 07/27/23 02:44:03.249 +STEP: deleting 07/27/23 02:44:03.299 +STEP: deleting a collection 07/27/23 02:44:03.381 +[AfterEach] [sig-network] Ingress API test/e2e/framework/node/init/init.go:32 -Jun 12 22:10:43.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 02:44:03.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Ingress API test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-network] Ingress API dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-network] Ingress API tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-2374" for this suite. 06/12/23 22:10:43.524 +STEP: Destroying namespace "ingress-6577" for this suite. 
07/27/23 02:44:03.448 ------------------------------ -• [SLOW TEST] [6.467 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:167 +• [1.035 seconds] +[sig-network] Ingress API +test/e2e/network/common/framework.go:23 + should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-network] Ingress API set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:10:37.084 - Jun 12 22:10:37.084: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 22:10:37.101 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:10:37.198 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:10:37.216 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 02:44:02.435 + Jul 27 02:44:02.435: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename ingress 07/27/23 02:44:02.436 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:02.543 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:02.615 + [BeforeEach] [sig-network] Ingress API test/e2e/framework/metrics/init/init.go:31 - [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:167 - STEP: Creating a pod to test emptydir 0644 on node default medium 06/12/23 22:10:37.231 - Jun 12 22:10:37.265: INFO: Waiting up to 5m0s for pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644" in namespace "emptydir-2374" to be "Succeeded or Failed" - Jun 12 22:10:37.285: INFO: Pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644": Phase="Pending", Reason="", readiness=false. Elapsed: 20.238552ms - Jun 12 22:10:39.301: INFO: Pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036113871s - Jun 12 22:10:41.301: INFO: Pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035924836s - Jun 12 22:10:43.319: INFO: Pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.054233223s - STEP: Saw pod success 06/12/23 22:10:43.319 - Jun 12 22:10:43.325: INFO: Pod "pod-95b8ce7c-17eb-404e-870d-498dd65ad644" satisfied condition "Succeeded or Failed" - Jun 12 22:10:43.357: INFO: Trying to get logs from node 10.138.75.70 pod pod-95b8ce7c-17eb-404e-870d-498dd65ad644 container test-container: - STEP: delete the pod 06/12/23 22:10:43.435 - Jun 12 22:10:43.470: INFO: Waiting for pod pod-95b8ce7c-17eb-404e-870d-498dd65ad644 to disappear - Jun 12 22:10:43.483: INFO: Pod pod-95b8ce7c-17eb-404e-870d-498dd65ad644 no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [It] should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 + STEP: getting /apis 07/27/23 02:44:02.651 + STEP: getting /apis/networking.k8s.io 07/27/23 02:44:02.69 + STEP: getting /apis/networking.k8s.iov1 07/27/23 02:44:02.698 + STEP: creating 07/27/23 02:44:02.719 + STEP: getting 07/27/23 02:44:02.863 + STEP: listing 07/27/23 02:44:02.915 + STEP: watching 07/27/23 02:44:02.935 + Jul 27 02:44:02.935: INFO: starting watch + STEP: cluster-wide listing 07/27/23 02:44:02.946 + STEP: cluster-wide watching 07/27/23 02:44:02.971 + Jul 27 02:44:02.971: INFO: starting watch + STEP: patching 07/27/23 02:44:02.997 + STEP: updating 07/27/23 02:44:03.033 + Jul 27 02:44:03.139: INFO: waiting for watch events with expected annotations + Jul 27 02:44:03.139: INFO: saw patched and updated annotations + STEP: patching /status 07/27/23 02:44:03.139 + STEP: updating /status 07/27/23 02:44:03.157 + STEP: get /status 07/27/23 02:44:03.249 + STEP: deleting 07/27/23 02:44:03.299 + STEP: deleting a collection 07/27/23 02:44:03.381 + [AfterEach] [sig-network] Ingress API test/e2e/framework/node/init/init.go:32 - Jun 12 22:10:43.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 02:44:03.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Ingress API test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-network] Ingress API dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-network] Ingress API tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-2374" for this suite. 06/12/23 22:10:43.524 + STEP: Destroying namespace "ingress-6577" for this suite. 07/27/23 02:44:03.448 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] ResourceQuota - should create a ResourceQuota and capture the life of a service. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:100 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 +[BeforeEach] [sig-network] DNS set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:10:43.552 -Jun 12 22:10:43.553: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 22:10:43.555 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:10:43.611 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:10:43.623 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 02:44:03.471 +Jul 27 02:44:03.471: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename dns 07/27/23 02:44:03.472 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:03.534 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:03.548 +[BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 -[It] should create a ResourceQuota and capture the life of a service. [Conformance] - test/e2e/apimachinery/resource_quota.go:100 -STEP: Counting existing ResourceQuota 06/12/23 22:10:43.643 -STEP: Creating a ResourceQuota 06/12/23 22:10:48.656 -STEP: Ensuring resource quota status is calculated 06/12/23 22:10:48.671 -STEP: Creating a Service 06/12/23 22:10:50.685 -STEP: Creating a NodePort Service 06/12/23 22:10:50.743 -STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota 06/12/23 22:10:50.803 -STEP: Ensuring resource quota status captures service creation 06/12/23 22:10:50.867 -STEP: Deleting Services 06/12/23 22:10:52.881 -STEP: Ensuring resource quota status released usage 06/12/23 22:10:52.998 -[AfterEach] [sig-api-machinery] ResourceQuota +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 +STEP: Creating a test headless service 07/27/23 02:44:03.58 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4436 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4436;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4436 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4436;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4436.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4436.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4436.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4436.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-4436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4436.svc;check="$$(dig +notcp +noall +answer +search 40.216.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.216.40_udp@PTR;check="$$(dig +tcp +noall +answer +search 40.216.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.216.40_tcp@PTR;sleep 1; done + 07/27/23 02:44:03.648 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4436 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4436;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4436 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4436;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4436.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4436.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4436.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4436.svc;check="$$(dig +notcp +noall +answer +search 40.216.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.216.40_udp@PTR;check="$$(dig +tcp +noall +answer +search 40.216.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.216.40_tcp@PTR;sleep 1; done + 07/27/23 02:44:03.648 +STEP: creating a pod to probe DNS 07/27/23 02:44:03.648 +STEP: submitting the pod to kubernetes 07/27/23 02:44:03.648 +Jul 27 02:44:03.685: INFO: Waiting up to 15m0s for pod "dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf" in namespace "dns-4436" to be "running" +Jul 27 02:44:03.723: INFO: Pod "dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 37.85033ms +Jul 27 02:44:05.735: INFO: Pod "dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049396439s +Jul 27 02:44:07.734: INFO: Pod "dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.049002499s +Jul 27 02:44:07.734: INFO: Pod "dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf" satisfied condition "running" +STEP: retrieving the pod 07/27/23 02:44:07.734 +STEP: looking for the results for each expected name from probers 07/27/23 02:44:07.744 +Jul 27 02:44:07.774: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:07.792: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:07.814: INFO: Unable to read wheezy_udp@dns-test-service.dns-4436 from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:07.831: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4436 from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:07.848: INFO: Unable to read wheezy_udp@dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:07.866: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:07.882: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:07.897: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:07.986: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:08.002: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:08.019: INFO: Unable to read jessie_udp@dns-test-service.dns-4436 from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:08.037: INFO: Unable to read jessie_tcp@dns-test-service.dns-4436 from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:08.055: INFO: Unable to read jessie_udp@dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 
02:44:08.074: INFO: Unable to read jessie_tcp@dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:08.092: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:08.110: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) +Jul 27 02:44:08.188: INFO: Lookups using dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4436 wheezy_tcp@dns-test-service.dns-4436 wheezy_udp@dns-test-service.dns-4436.svc wheezy_tcp@dns-test-service.dns-4436.svc wheezy_udp@_http._tcp.dns-test-service.dns-4436.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4436 jessie_tcp@dns-test-service.dns-4436 jessie_udp@dns-test-service.dns-4436.svc jessie_tcp@dns-test-service.dns-4436.svc jessie_udp@_http._tcp.dns-test-service.dns-4436.svc jessie_tcp@_http._tcp.dns-test-service.dns-4436.svc] + +Jul 27 02:44:13.605: INFO: DNS probes using dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf succeeded + +STEP: deleting the pod 07/27/23 02:44:13.605 +STEP: deleting the test service 07/27/23 02:44:13.64 +STEP: deleting the test headless service 07/27/23 02:44:13.722 +[AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 -Jun 12 22:10:55.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 02:44:13.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-1865" for this suite. 06/12/23 22:10:55.034 +STEP: Destroying namespace "dns-4436" for this suite. 07/27/23 02:44:13.778 ------------------------------ -• [SLOW TEST] [11.505 seconds] -[sig-api-machinery] ResourceQuota -test/e2e/apimachinery/framework.go:23 - should create a ResourceQuota and capture the life of a service. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:100 +• [SLOW TEST] [10.336 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-network] DNS set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:10:43.552 - Jun 12 22:10:43.553: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 22:10:43.555 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:10:43.611 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:10:43.623 - [BeforeEach] [sig-api-machinery] ResourceQuota + STEP: Creating a kubernetes client 07/27/23 02:44:03.471 + Jul 27 02:44:03.471: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename dns 07/27/23 02:44:03.472 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:03.534 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:03.548 + [BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 - [It] should create a ResourceQuota and capture the life of a service. [Conformance] - test/e2e/apimachinery/resource_quota.go:100 - STEP: Counting existing ResourceQuota 06/12/23 22:10:43.643 - STEP: Creating a ResourceQuota 06/12/23 22:10:48.656 - STEP: Ensuring resource quota status is calculated 06/12/23 22:10:48.671 - STEP: Creating a Service 06/12/23 22:10:50.685 - STEP: Creating a NodePort Service 06/12/23 22:10:50.743 - STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota 06/12/23 22:10:50.803 - STEP: Ensuring resource quota status captures service creation 06/12/23 22:10:50.867 - STEP: Deleting Services 06/12/23 22:10:52.881 - STEP: Ensuring resource quota status released usage 06/12/23 22:10:52.998 - [AfterEach] [sig-api-machinery] ResourceQuota + [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 + STEP: Creating a test headless service 07/27/23 02:44:03.58 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4436 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4436;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4436 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4436;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4436.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4436.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4436.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4436.svc;check="$$(dig +tcp +noall +answer +search 
_http._tcp.dns-test-service.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4436.svc;check="$$(dig +notcp +noall +answer +search 40.216.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.216.40_udp@PTR;check="$$(dig +tcp +noall +answer +search 40.216.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.216.40_tcp@PTR;sleep 1; done + 07/27/23 02:44:03.648 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4436 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4436;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4436 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4436;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4436.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4436.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4436.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4436.svc;check="$$(dig +notcp +noall +answer +search 40.216.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.216.40_udp@PTR;check="$$(dig +tcp +noall +answer +search 40.216.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.216.40_tcp@PTR;sleep 1; done + 07/27/23 02:44:03.648 + STEP: creating a pod to probe DNS 07/27/23 02:44:03.648 + STEP: submitting the pod to kubernetes 07/27/23 02:44:03.648 + Jul 27 02:44:03.685: INFO: Waiting up to 15m0s for pod "dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf" in namespace "dns-4436" to be "running" + Jul 27 02:44:03.723: INFO: Pod "dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 37.85033ms + Jul 27 02:44:05.735: INFO: Pod "dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049396439s + Jul 27 02:44:07.734: INFO: Pod "dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.049002499s + Jul 27 02:44:07.734: INFO: Pod "dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf" satisfied condition "running" + STEP: retrieving the pod 07/27/23 02:44:07.734 + STEP: looking for the results for each expected name from probers 07/27/23 02:44:07.744 + Jul 27 02:44:07.774: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:07.792: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:07.814: INFO: Unable to read wheezy_udp@dns-test-service.dns-4436 from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:07.831: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4436 from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:07.848: INFO: Unable to read wheezy_udp@dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:07.866: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:07.882: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:07.897: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:07.986: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:08.002: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:08.019: INFO: Unable to read jessie_udp@dns-test-service.dns-4436 from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:08.037: INFO: Unable to read jessie_tcp@dns-test-service.dns-4436 from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:08.055: INFO: Unable to read jessie_udp@dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) 
+ Jul 27 02:44:08.074: INFO: Unable to read jessie_tcp@dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:08.092: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:08.110: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4436.svc from pod dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf: the server could not find the requested resource (get pods dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf) + Jul 27 02:44:08.188: INFO: Lookups using dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4436 wheezy_tcp@dns-test-service.dns-4436 wheezy_udp@dns-test-service.dns-4436.svc wheezy_tcp@dns-test-service.dns-4436.svc wheezy_udp@_http._tcp.dns-test-service.dns-4436.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4436 jessie_tcp@dns-test-service.dns-4436 jessie_udp@dns-test-service.dns-4436.svc jessie_tcp@dns-test-service.dns-4436.svc jessie_udp@_http._tcp.dns-test-service.dns-4436.svc jessie_tcp@_http._tcp.dns-test-service.dns-4436.svc] + + Jul 27 02:44:13.605: INFO: DNS probes using dns-4436/dns-test-78fd1be3-f2fb-4b96-9d46-f0d1ab10c1bf succeeded + + STEP: deleting the pod 07/27/23 02:44:13.605 + STEP: deleting the test service 07/27/23 02:44:13.64 + STEP: deleting the test headless service 07/27/23 02:44:13.722 + [AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 - Jun 12 22:10:55.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + Jul 27 02:44:13.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-1865" for this suite. 06/12/23 22:10:55.034 + STEP: Destroying namespace "dns-4436" for this suite. 
07/27/23 02:44:13.778 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSSSSSS ------------------------------ -[sig-node] InitContainer [NodeConformance] - should invoke init containers on a RestartNever pod [Conformance] - test/e2e/common/node/init_container.go:177 -[BeforeEach] [sig-node] InitContainer [NodeConformance] +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:10:55.07 -Jun 12 22:10:55.070: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename init-container 06/12/23 22:10:55.073 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:10:55.134 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:10:55.15 -[BeforeEach] [sig-node] InitContainer [NodeConformance] +STEP: Creating a kubernetes client 07/27/23 02:44:13.808 +Jul 27 02:44:13.808: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir-wrapper 07/27/23 02:44:13.809 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:13.852 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:13.863 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] InitContainer [NodeConformance] - test/e2e/common/node/init_container.go:165 -[It] should invoke init containers on a RestartNever pod [Conformance] - test/e2e/common/node/init_container.go:177 -STEP: creating the pod 06/12/23 22:10:55.173 -Jun 12 22:10:55.173: INFO: PodSpec: initContainers in spec.initContainers -[AfterEach] [sig-node] InitContainer [NodeConformance] +[It] should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 +Jul 27 02:44:13.954: INFO: Waiting up to 5m0s for pod "pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0" in namespace "emptydir-wrapper-2453" to be "running and ready" +Jul 27 02:44:13.969: INFO: Pod "pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.245919ms +Jul 27 02:44:13.969: INFO: The phase of Pod pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:44:15.981: INFO: Pod "pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.027027295s +Jul 27 02:44:15.981: INFO: The phase of Pod pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0 is Running (Ready = true) +Jul 27 02:44:15.981: INFO: Pod "pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0" satisfied condition "running and ready" +STEP: Cleaning up the secret 07/27/23 02:44:15.992 +STEP: Cleaning up the configmap 07/27/23 02:44:16.009 +STEP: Cleaning up the pod 07/27/23 02:44:16.028 +[AfterEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/node/init/init.go:32 -Jun 12 22:11:01.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] +Jul 27 02:44:16.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes tear down framework | framework.go:193 -STEP: Destroying namespace "init-container-3684" for this suite. 06/12/23 22:11:01.766 +STEP: Destroying namespace "emptydir-wrapper-2453" for this suite. 07/27/23 02:44:16.077 ------------------------------ -• [SLOW TEST] [6.721 seconds] -[sig-node] InitContainer [NodeConformance] -test/e2e/common/node/framework.go:23 - should invoke init containers on a RestartNever pod [Conformance] - test/e2e/common/node/init_container.go:177 +• [2.290 seconds] +[sig-storage] EmptyDir wrapper volumes +test/e2e/storage/utils/framework.go:23 + should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] InitContainer [NodeConformance] + [BeforeEach] [sig-storage] EmptyDir wrapper volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:10:55.07 - Jun 12 22:10:55.070: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename init-container 06/12/23 22:10:55.073 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:10:55.134 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:10:55.15 - [BeforeEach] [sig-node] InitContainer [NodeConformance] + STEP: Creating a kubernetes client 07/27/23 02:44:13.808 + Jul 27 02:44:13.808: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir-wrapper 07/27/23 02:44:13.809 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:13.852 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:13.863 + [BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] InitContainer [NodeConformance] - test/e2e/common/node/init_container.go:165 - [It] should invoke init containers on a RestartNever pod [Conformance] - test/e2e/common/node/init_container.go:177 - STEP: creating the pod 06/12/23 22:10:55.173 - Jun 12 22:10:55.173: INFO: PodSpec: initContainers in spec.initContainers - [AfterEach] [sig-node] InitContainer [NodeConformance] + [It] should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 + Jul 27 02:44:13.954: INFO: Waiting up to 5m0s for pod "pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0" in namespace 
"emptydir-wrapper-2453" to be "running and ready" + Jul 27 02:44:13.969: INFO: Pod "pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.245919ms + Jul 27 02:44:13.969: INFO: The phase of Pod pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:44:15.981: INFO: Pod "pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0": Phase="Running", Reason="", readiness=true. Elapsed: 2.027027295s + Jul 27 02:44:15.981: INFO: The phase of Pod pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0 is Running (Ready = true) + Jul 27 02:44:15.981: INFO: Pod "pod-secrets-7fcea25c-8c15-4cd7-9005-28c6c60fc5a0" satisfied condition "running and ready" + STEP: Cleaning up the secret 07/27/23 02:44:15.992 + STEP: Cleaning up the configmap 07/27/23 02:44:16.009 + STEP: Cleaning up the pod 07/27/23 02:44:16.028 + [AfterEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/node/init/init.go:32 - Jun 12 22:11:01.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + Jul 27 02:44:16.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes tear down framework | framework.go:193 - STEP: Destroying namespace "init-container-3684" for this suite. 06/12/23 22:11:01.766 + STEP: Destroying namespace "emptydir-wrapper-2453" for this suite. 
07/27/23 02:44:16.077 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] RuntimeClass - should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:156 -[BeforeEach] [sig-node] RuntimeClass +[sig-network] DNS + should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 +[BeforeEach] [sig-network] DNS set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:11:01.796 -Jun 12 22:11:01.797: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename runtimeclass 06/12/23 22:11:01.798 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:01.858 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:01.875 -[BeforeEach] [sig-node] RuntimeClass +STEP: Creating a kubernetes client 07/27/23 02:44:16.1 +Jul 27 02:44:16.100: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename dns 07/27/23 02:44:16.1 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:16.143 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:16.154 +[BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 -[It] should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:156 -STEP: Deleting RuntimeClass runtimeclass-404-delete-me 06/12/23 22:11:01.929 -STEP: Waiting for the RuntimeClass to disappear 06/12/23 22:11:01.954 -[AfterEach] [sig-node] RuntimeClass +[It] should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 07/27/23 02:44:16.166 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 07/27/23 02:44:16.166 +STEP: creating a pod to probe DNS 07/27/23 02:44:16.166 +STEP: submitting the pod to kubernetes 07/27/23 02:44:16.166 +Jul 27 02:44:16.199: INFO: Waiting up to 15m0s for pod "dns-test-5d238bed-214a-45b5-9206-74179f8d611f" in namespace "dns-9794" to be "running" +Jul 27 02:44:16.215: INFO: Pod "dns-test-5d238bed-214a-45b5-9206-74179f8d611f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.102181ms +Jul 27 02:44:18.231: INFO: Pod "dns-test-5d238bed-214a-45b5-9206-74179f8d611f": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.031726362s +Jul 27 02:44:18.231: INFO: Pod "dns-test-5d238bed-214a-45b5-9206-74179f8d611f" satisfied condition "running" +STEP: retrieving the pod 07/27/23 02:44:18.231 +STEP: looking for the results for each expected name from probers 07/27/23 02:44:18.26 +Jul 27 02:44:18.337: INFO: DNS probes using dns-9794/dns-test-5d238bed-214a-45b5-9206-74179f8d611f succeeded + +STEP: deleting the pod 07/27/23 02:44:18.337 +[AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 -Jun 12 22:11:02.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] RuntimeClass +Jul 27 02:44:18.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] RuntimeClass +[DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] RuntimeClass +[DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 -STEP: Destroying namespace "runtimeclass-404" for this suite. 06/12/23 22:11:02.053 +STEP: Destroying namespace "dns-9794" for this suite. 07/27/23 02:44:18.379 ------------------------------ -• [0.297 seconds] -[sig-node] RuntimeClass -test/e2e/common/node/framework.go:23 - should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:156 +• [2.299 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] RuntimeClass + [BeforeEach] [sig-network] DNS set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:11:01.796 - Jun 12 22:11:01.797: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename runtimeclass 06/12/23 22:11:01.798 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:01.858 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:01.875 - [BeforeEach] [sig-node] RuntimeClass + STEP: Creating a kubernetes client 07/27/23 02:44:16.1 + Jul 27 02:44:16.100: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename dns 07/27/23 02:44:16.1 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:16.143 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:16.154 + [BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 - [It] should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:156 - STEP: Deleting RuntimeClass runtimeclass-404-delete-me 06/12/23 22:11:01.929 - STEP: Waiting for the RuntimeClass to disappear 06/12/23 22:11:01.954 - [AfterEach] [sig-node] RuntimeClass + [It] should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 07/27/23 02:44:16.166 + STEP: Running these 
commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 07/27/23 02:44:16.166 + STEP: creating a pod to probe DNS 07/27/23 02:44:16.166 + STEP: submitting the pod to kubernetes 07/27/23 02:44:16.166 + Jul 27 02:44:16.199: INFO: Waiting up to 15m0s for pod "dns-test-5d238bed-214a-45b5-9206-74179f8d611f" in namespace "dns-9794" to be "running" + Jul 27 02:44:16.215: INFO: Pod "dns-test-5d238bed-214a-45b5-9206-74179f8d611f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.102181ms + Jul 27 02:44:18.231: INFO: Pod "dns-test-5d238bed-214a-45b5-9206-74179f8d611f": Phase="Running", Reason="", readiness=true. Elapsed: 2.031726362s + Jul 27 02:44:18.231: INFO: Pod "dns-test-5d238bed-214a-45b5-9206-74179f8d611f" satisfied condition "running" + STEP: retrieving the pod 07/27/23 02:44:18.231 + STEP: looking for the results for each expected name from probers 07/27/23 02:44:18.26 + Jul 27 02:44:18.337: INFO: DNS probes using dns-9794/dns-test-5d238bed-214a-45b5-9206-74179f8d611f succeeded + + STEP: deleting the pod 07/27/23 02:44:18.337 + [AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 - Jun 12 22:11:02.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] RuntimeClass + Jul 27 02:44:18.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] RuntimeClass + [DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] RuntimeClass + [DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 - STEP: Destroying namespace "runtimeclass-404" for this suite. 06/12/23 22:11:02.053 + STEP: Destroying namespace "dns-9794" for this suite. 
07/27/23 02:44:18.379 << End Captured GinkgoWriter Output ------------------------------ -[sig-storage] Projected secret - should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:56 -[BeforeEach] [sig-storage] Projected secret +SSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 +[BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:11:02.095 -Jun 12 22:11:02.096: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:11:02.099 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:02.15 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:02.189 -[BeforeEach] [sig-storage] Projected secret +STEP: Creating a kubernetes client 07/27/23 02:44:18.399 +Jul 27 02:44:18.400: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:44:18.4 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:18.442 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:18.456 +[BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:56 -STEP: Creating projection with secret that has name projected-secret-test-96bb89c0-ff0c-437a-a196-35f23e9df3fb 06/12/23 22:11:02.203 -STEP: Creating a pod to test consume secrets 06/12/23 22:11:02.226 -Jun 12 22:11:02.270: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293" in namespace "projected-5522" to be "Succeeded or Failed" -Jun 12 22:11:02.290: INFO: Pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293": Phase="Pending", Reason="", readiness=false. Elapsed: 19.401885ms -Jun 12 22:11:04.303: INFO: Pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032162707s -Jun 12 22:11:06.325: INFO: Pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054903599s -Jun 12 22:11:08.312: INFO: Pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.042066862s -STEP: Saw pod success 06/12/23 22:11:08.313 -Jun 12 22:11:08.313: INFO: Pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293" satisfied condition "Succeeded or Failed" -Jun 12 22:11:08.335: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293 container projected-secret-volume-test: -STEP: delete the pod 06/12/23 22:11:08.42 -Jun 12 22:11:08.470: INFO: Waiting for pod pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293 to disappear -Jun 12 22:11:08.516: INFO: Pod pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293 no longer exists -[AfterEach] [sig-storage] Projected secret +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 +STEP: Creating configMap with name projected-configmap-test-volume-map-98a9778f-a5d3-472a-a33b-0cb141146e14 07/27/23 02:44:18.469 +STEP: Creating a pod to test consume configMaps 07/27/23 02:44:18.488 +Jul 27 02:44:18.520: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1" in namespace "projected-4470" to be "Succeeded or Failed" +Jul 27 02:44:18.534: INFO: Pod "pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.907913ms +Jul 27 02:44:20.547: INFO: Pod "pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026453822s +Jul 27 02:44:22.548: INFO: Pod "pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027690574s +STEP: Saw pod success 07/27/23 02:44:22.548 +Jul 27 02:44:22.548: INFO: Pod "pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1" satisfied condition "Succeeded or Failed" +Jul 27 02:44:22.560: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1 container agnhost-container: +STEP: delete the pod 07/27/23 02:44:22.591 +Jul 27 02:44:22.619: INFO: Waiting for pod pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1 to disappear +Jul 27 02:44:22.630: INFO: Pod pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1 no longer exists +[AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 -Jun 12 22:11:08.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected secret +Jul 27 02:44:22.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected secret +[DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 -STEP: Destroying namespace "projected-5522" for this suite. 06/12/23 22:11:08.583 +STEP: Destroying namespace "projected-4470" for this suite. 
07/27/23 02:44:22.646 ------------------------------ -• [SLOW TEST] [6.517 seconds] -[sig-storage] Projected secret +• [4.268 seconds] +[sig-storage] Projected configMap test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:56 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected secret + [BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:11:02.095 - Jun 12 22:11:02.096: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:11:02.099 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:02.15 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:02.189 - [BeforeEach] [sig-storage] Projected secret + STEP: Creating a kubernetes client 07/27/23 02:44:18.399 + Jul 27 02:44:18.400: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:44:18.4 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:18.442 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:18.456 + [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_secret.go:56 - STEP: Creating projection with secret that has name projected-secret-test-96bb89c0-ff0c-437a-a196-35f23e9df3fb 06/12/23 22:11:02.203 - STEP: Creating a pod to test consume secrets 06/12/23 22:11:02.226 - Jun 12 22:11:02.270: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293" in namespace "projected-5522" to be "Succeeded or Failed" - Jun 12 22:11:02.290: INFO: Pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293": Phase="Pending", Reason="", readiness=false. Elapsed: 19.401885ms - Jun 12 22:11:04.303: INFO: Pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032162707s - Jun 12 22:11:06.325: INFO: Pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054903599s - Jun 12 22:11:08.312: INFO: Pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.042066862s - STEP: Saw pod success 06/12/23 22:11:08.313 - Jun 12 22:11:08.313: INFO: Pod "pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293" satisfied condition "Succeeded or Failed" - Jun 12 22:11:08.335: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293 container projected-secret-volume-test: - STEP: delete the pod 06/12/23 22:11:08.42 - Jun 12 22:11:08.470: INFO: Waiting for pod pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293 to disappear - Jun 12 22:11:08.516: INFO: Pod pod-projected-secrets-a99961c4-40c4-42a9-9d46-8847dcdcc293 no longer exists - [AfterEach] [sig-storage] Projected secret + [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 + STEP: Creating configMap with name projected-configmap-test-volume-map-98a9778f-a5d3-472a-a33b-0cb141146e14 07/27/23 02:44:18.469 + STEP: Creating a pod to test consume configMaps 07/27/23 02:44:18.488 + Jul 27 02:44:18.520: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1" in namespace "projected-4470" to be "Succeeded or Failed" + Jul 27 02:44:18.534: INFO: Pod "pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.907913ms + Jul 27 02:44:20.547: INFO: Pod "pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026453822s + Jul 27 02:44:22.548: INFO: Pod "pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027690574s + STEP: Saw pod success 07/27/23 02:44:22.548 + Jul 27 02:44:22.548: INFO: Pod "pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1" satisfied condition "Succeeded or Failed" + Jul 27 02:44:22.560: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1 container agnhost-container: + STEP: delete the pod 07/27/23 02:44:22.591 + Jul 27 02:44:22.619: INFO: Waiting for pod pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1 to disappear + Jul 27 02:44:22.630: INFO: Pod pod-projected-configmaps-622ef689-b0ee-47ab-a4a9-e870802be3c1 no longer exists + [AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 - Jun 12 22:11:08.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected secret + Jul 27 02:44:22.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected secret + [DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 - STEP: Destroying namespace "projected-5522" for this suite. 06/12/23 22:11:08.583 + STEP: Destroying namespace "projected-4470" for this suite. 
07/27/23 02:44:22.646 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSS ------------------------------ -[sig-api-machinery] Garbage collector - should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] - test/e2e/apimachinery/garbage_collector.go:735 -[BeforeEach] [sig-api-machinery] Garbage collector +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 +[BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:11:08.752 -Jun 12 22:11:08.752: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename gc 06/12/23 22:11:08.77 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:08.866 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:08.919 -[BeforeEach] [sig-api-machinery] Garbage collector +STEP: Creating a kubernetes client 07/27/23 02:44:22.667 +Jul 27 02:44:22.668: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:44:22.668 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:22.713 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:22.725 +[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 -[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] - test/e2e/apimachinery/garbage_collector.go:735 -STEP: create the rc1 06/12/23 22:11:09.272 -STEP: create the rc2 06/12/23 22:11:09.294 -STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 06/12/23 22:11:14.489 -STEP: delete the rc simpletest-rc-to-be-deleted 06/12/23 22:11:16.686 -STEP: wait for the rc to be deleted 06/12/23 22:11:16.709 -STEP: Gathering metrics 06/12/23 22:11:21.785 -W0612 22:11:21.817507 23 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
-Jun 12 22:11:21.817: INFO: For apiserver_request_total: -For apiserver_request_latency_seconds: -For apiserver_init_events_total: -For garbage_collector_attempt_to_delete_queue_latency: -For garbage_collector_attempt_to_delete_work_duration: -For garbage_collector_attempt_to_orphan_queue_latency: -For garbage_collector_attempt_to_orphan_work_duration: -For garbage_collector_dirty_processing_latency_microseconds: -For garbage_collector_event_processing_latency_microseconds: -For garbage_collector_graph_changes_queue_latency: -For garbage_collector_graph_changes_work_duration: -For garbage_collector_orphan_processing_latency_microseconds: -For namespace_queue_latency: -For namespace_queue_latency_sum: -For namespace_queue_latency_count: -For namespace_retries: -For namespace_work_duration: -For namespace_work_duration_sum: -For namespace_work_duration_count: -For function_duration_seconds: -For errors_total: -For evicted_pods_total: - -Jun 12 22:11:21.818: INFO: Deleting pod "simpletest-rc-to-be-deleted-22nr4" in namespace "gc-6875" -Jun 12 22:11:21.866: INFO: Deleting pod "simpletest-rc-to-be-deleted-246cc" in namespace "gc-6875" -Jun 12 22:11:21.931: INFO: Deleting pod "simpletest-rc-to-be-deleted-2q4s8" in namespace "gc-6875" -Jun 12 22:11:22.088: INFO: Deleting pod "simpletest-rc-to-be-deleted-2qs9d" in namespace "gc-6875" -Jun 12 22:11:22.170: INFO: Deleting pod "simpletest-rc-to-be-deleted-2v7zf" in namespace "gc-6875" -Jun 12 22:11:22.254: INFO: Deleting pod "simpletest-rc-to-be-deleted-2w86k" in namespace "gc-6875" -Jun 12 22:11:22.344: INFO: Deleting pod "simpletest-rc-to-be-deleted-2x9dj" in namespace "gc-6875" -Jun 12 22:11:22.441: INFO: Deleting pod "simpletest-rc-to-be-deleted-426mr" in namespace "gc-6875" -Jun 12 22:11:22.512: INFO: Deleting pod "simpletest-rc-to-be-deleted-42vl2" in namespace "gc-6875" -Jun 12 22:11:22.564: INFO: Deleting pod "simpletest-rc-to-be-deleted-4d5n9" in namespace "gc-6875" -Jun 12 22:11:22.613: INFO: Deleting pod "simpletest-rc-to-be-deleted-4fjp2" in namespace "gc-6875" -Jun 12 22:11:22.678: INFO: Deleting pod "simpletest-rc-to-be-deleted-4jm58" in namespace "gc-6875" -Jun 12 22:11:22.722: INFO: Deleting pod "simpletest-rc-to-be-deleted-4nkfx" in namespace "gc-6875" -Jun 12 22:11:22.771: INFO: Deleting pod "simpletest-rc-to-be-deleted-5k26x" in namespace "gc-6875" -Jun 12 22:11:22.813: INFO: Deleting pod "simpletest-rc-to-be-deleted-5khn4" in namespace "gc-6875" -Jun 12 22:11:22.853: INFO: Deleting pod "simpletest-rc-to-be-deleted-5m2hb" in namespace "gc-6875" -Jun 12 22:11:22.907: INFO: Deleting pod "simpletest-rc-to-be-deleted-5n95m" in namespace "gc-6875" -Jun 12 22:11:22.952: INFO: Deleting pod "simpletest-rc-to-be-deleted-5x6xh" in namespace "gc-6875" -Jun 12 22:11:23.017: INFO: Deleting pod "simpletest-rc-to-be-deleted-6djtk" in namespace "gc-6875" -Jun 12 22:11:23.078: INFO: Deleting pod "simpletest-rc-to-be-deleted-6mdcv" in namespace "gc-6875" -Jun 12 22:11:23.113: INFO: Deleting pod "simpletest-rc-to-be-deleted-6q96n" in namespace "gc-6875" -Jun 12 22:11:23.176: INFO: Deleting pod "simpletest-rc-to-be-deleted-6w2pm" in namespace "gc-6875" -Jun 12 22:11:23.208: INFO: Deleting pod "simpletest-rc-to-be-deleted-728z5" in namespace "gc-6875" -Jun 12 22:11:23.278: INFO: Deleting pod "simpletest-rc-to-be-deleted-7wn6j" in namespace "gc-6875" -Jun 12 22:11:23.361: INFO: Deleting pod "simpletest-rc-to-be-deleted-7x877" in namespace "gc-6875" -Jun 12 22:11:23.409: INFO: Deleting pod "simpletest-rc-to-be-deleted-8nb2f" in namespace "gc-6875" -Jun 
12 22:11:23.457: INFO: Deleting pod "simpletest-rc-to-be-deleted-92cs8" in namespace "gc-6875" -Jun 12 22:11:23.673: INFO: Deleting pod "simpletest-rc-to-be-deleted-94s8t" in namespace "gc-6875" -Jun 12 22:11:23.718: INFO: Deleting pod "simpletest-rc-to-be-deleted-95ckd" in namespace "gc-6875" -Jun 12 22:11:23.757: INFO: Deleting pod "simpletest-rc-to-be-deleted-99vff" in namespace "gc-6875" -Jun 12 22:11:23.809: INFO: Deleting pod "simpletest-rc-to-be-deleted-9b8xx" in namespace "gc-6875" -Jun 12 22:11:23.848: INFO: Deleting pod "simpletest-rc-to-be-deleted-9cj9f" in namespace "gc-6875" -Jun 12 22:11:23.882: INFO: Deleting pod "simpletest-rc-to-be-deleted-9ns5h" in namespace "gc-6875" -Jun 12 22:11:23.916: INFO: Deleting pod "simpletest-rc-to-be-deleted-bdhmc" in namespace "gc-6875" -Jun 12 22:11:23.965: INFO: Deleting pod "simpletest-rc-to-be-deleted-bgxnr" in namespace "gc-6875" -Jun 12 22:11:24.032: INFO: Deleting pod "simpletest-rc-to-be-deleted-c7fln" in namespace "gc-6875" -Jun 12 22:11:24.165: INFO: Deleting pod "simpletest-rc-to-be-deleted-ckdwm" in namespace "gc-6875" -Jun 12 22:11:24.271: INFO: Deleting pod "simpletest-rc-to-be-deleted-cztr5" in namespace "gc-6875" -Jun 12 22:11:24.326: INFO: Deleting pod "simpletest-rc-to-be-deleted-czw6t" in namespace "gc-6875" -Jun 12 22:11:24.366: INFO: Deleting pod "simpletest-rc-to-be-deleted-ddmtb" in namespace "gc-6875" -Jun 12 22:11:24.457: INFO: Deleting pod "simpletest-rc-to-be-deleted-dhrm5" in namespace "gc-6875" -Jun 12 22:11:24.489: INFO: Deleting pod "simpletest-rc-to-be-deleted-dkv22" in namespace "gc-6875" -Jun 12 22:11:24.540: INFO: Deleting pod "simpletest-rc-to-be-deleted-g2hmw" in namespace "gc-6875" -Jun 12 22:11:24.582: INFO: Deleting pod "simpletest-rc-to-be-deleted-gb8p5" in namespace "gc-6875" -Jun 12 22:11:24.624: INFO: Deleting pod "simpletest-rc-to-be-deleted-gbjck" in namespace "gc-6875" -Jun 12 22:11:24.659: INFO: Deleting pod "simpletest-rc-to-be-deleted-gvk4n" in namespace "gc-6875" -Jun 12 22:11:24.697: INFO: Deleting pod "simpletest-rc-to-be-deleted-hwtqd" in namespace "gc-6875" -Jun 12 22:11:24.738: INFO: Deleting pod "simpletest-rc-to-be-deleted-j5ckx" in namespace "gc-6875" -Jun 12 22:11:24.817: INFO: Deleting pod "simpletest-rc-to-be-deleted-jgxfd" in namespace "gc-6875" -Jun 12 22:11:24.888: INFO: Deleting pod "simpletest-rc-to-be-deleted-jvpqx" in namespace "gc-6875" -[AfterEach] [sig-api-machinery] Garbage collector +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 +STEP: Creating a pod to test downward API volume plugin 07/27/23 02:44:22.74 +Jul 27 02:44:23.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e" in namespace "downward-api-7690" to be "Succeeded or Failed" +Jul 27 02:44:23.794: INFO: Pod "downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.178442ms +Jul 27 02:44:25.805: INFO: Pod "downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027648138s +Jul 27 02:44:27.807: INFO: Pod "downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028958444s +STEP: Saw pod success 07/27/23 02:44:27.807 +Jul 27 02:44:27.807: INFO: Pod "downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e" satisfied condition "Succeeded or Failed" +Jul 27 02:44:27.819: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e container client-container: +STEP: delete the pod 07/27/23 02:44:27.858 +Jul 27 02:44:27.898: INFO: Waiting for pod downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e to disappear +Jul 27 02:44:27.913: INFO: Pod downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e no longer exists +[AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 22:11:24.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +Jul 27 02:44:27.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "gc-6875" for this suite. 06/12/23 22:11:24.967 +STEP: Destroying namespace "downward-api-7690" for this suite. 07/27/23 02:44:27.932 ------------------------------ -• [SLOW TEST] [16.238 seconds] -[sig-api-machinery] Garbage collector -test/e2e/apimachinery/framework.go:23 - should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] - test/e2e/apimachinery/garbage_collector.go:735 +• [SLOW TEST] [5.290 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Garbage collector + [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:11:08.752 - Jun 12 22:11:08.752: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename gc 06/12/23 22:11:08.77 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:08.866 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:08.919 - [BeforeEach] [sig-api-machinery] Garbage collector + STEP: Creating a kubernetes client 07/27/23 02:44:22.667 + Jul 27 02:44:22.668: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:44:22.668 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:22.713 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:22.725 + [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 - [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] - test/e2e/apimachinery/garbage_collector.go:735 - STEP: create the rc1 06/12/23 22:11:09.272 - STEP: create the rc2 06/12/23 22:11:09.294 - STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 
06/12/23 22:11:14.489 - STEP: delete the rc simpletest-rc-to-be-deleted 06/12/23 22:11:16.686 - STEP: wait for the rc to be deleted 06/12/23 22:11:16.709 - STEP: Gathering metrics 06/12/23 22:11:21.785 - W0612 22:11:21.817507 23 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. - Jun 12 22:11:21.817: INFO: For apiserver_request_total: - For apiserver_request_latency_seconds: - For apiserver_init_events_total: - For garbage_collector_attempt_to_delete_queue_latency: - For garbage_collector_attempt_to_delete_work_duration: - For garbage_collector_attempt_to_orphan_queue_latency: - For garbage_collector_attempt_to_orphan_work_duration: - For garbage_collector_dirty_processing_latency_microseconds: - For garbage_collector_event_processing_latency_microseconds: - For garbage_collector_graph_changes_queue_latency: - For garbage_collector_graph_changes_work_duration: - For garbage_collector_orphan_processing_latency_microseconds: - For namespace_queue_latency: - For namespace_queue_latency_sum: - For namespace_queue_latency_count: - For namespace_retries: - For namespace_work_duration: - For namespace_work_duration_sum: - For namespace_work_duration_count: - For function_duration_seconds: - For errors_total: - For evicted_pods_total: - - Jun 12 22:11:21.818: INFO: Deleting pod "simpletest-rc-to-be-deleted-22nr4" in namespace "gc-6875" - Jun 12 22:11:21.866: INFO: Deleting pod "simpletest-rc-to-be-deleted-246cc" in namespace "gc-6875" - Jun 12 22:11:21.931: INFO: Deleting pod "simpletest-rc-to-be-deleted-2q4s8" in namespace "gc-6875" - Jun 12 22:11:22.088: INFO: Deleting pod "simpletest-rc-to-be-deleted-2qs9d" in namespace "gc-6875" - Jun 12 22:11:22.170: INFO: Deleting pod "simpletest-rc-to-be-deleted-2v7zf" in namespace "gc-6875" - Jun 12 22:11:22.254: INFO: Deleting pod "simpletest-rc-to-be-deleted-2w86k" in namespace "gc-6875" - Jun 12 22:11:22.344: INFO: Deleting pod "simpletest-rc-to-be-deleted-2x9dj" in namespace "gc-6875" - Jun 12 22:11:22.441: INFO: Deleting pod "simpletest-rc-to-be-deleted-426mr" in namespace "gc-6875" - Jun 12 22:11:22.512: INFO: Deleting pod "simpletest-rc-to-be-deleted-42vl2" in namespace "gc-6875" - Jun 12 22:11:22.564: INFO: Deleting pod "simpletest-rc-to-be-deleted-4d5n9" in namespace "gc-6875" - Jun 12 22:11:22.613: INFO: Deleting pod "simpletest-rc-to-be-deleted-4fjp2" in namespace "gc-6875" - Jun 12 22:11:22.678: INFO: Deleting pod "simpletest-rc-to-be-deleted-4jm58" in namespace "gc-6875" - Jun 12 22:11:22.722: INFO: Deleting pod "simpletest-rc-to-be-deleted-4nkfx" in namespace "gc-6875" - Jun 12 22:11:22.771: INFO: Deleting pod "simpletest-rc-to-be-deleted-5k26x" in namespace "gc-6875" - Jun 12 22:11:22.813: INFO: Deleting pod "simpletest-rc-to-be-deleted-5khn4" in namespace "gc-6875" - Jun 12 22:11:22.853: INFO: Deleting pod "simpletest-rc-to-be-deleted-5m2hb" in namespace "gc-6875" - Jun 12 22:11:22.907: INFO: Deleting pod "simpletest-rc-to-be-deleted-5n95m" in namespace "gc-6875" - Jun 12 22:11:22.952: INFO: Deleting pod "simpletest-rc-to-be-deleted-5x6xh" in namespace "gc-6875" - Jun 12 22:11:23.017: INFO: Deleting pod "simpletest-rc-to-be-deleted-6djtk" in namespace "gc-6875" - Jun 12 22:11:23.078: INFO: Deleting pod "simpletest-rc-to-be-deleted-6mdcv" in namespace "gc-6875" - Jun 12 22:11:23.113: INFO: Deleting pod "simpletest-rc-to-be-deleted-6q96n" in namespace "gc-6875" - Jun 12 22:11:23.176: INFO: Deleting pod "simpletest-rc-to-be-deleted-6w2pm" in namespace "gc-6875" - Jun 12 
22:11:23.208: INFO: Deleting pod "simpletest-rc-to-be-deleted-728z5" in namespace "gc-6875" - Jun 12 22:11:23.278: INFO: Deleting pod "simpletest-rc-to-be-deleted-7wn6j" in namespace "gc-6875" - Jun 12 22:11:23.361: INFO: Deleting pod "simpletest-rc-to-be-deleted-7x877" in namespace "gc-6875" - Jun 12 22:11:23.409: INFO: Deleting pod "simpletest-rc-to-be-deleted-8nb2f" in namespace "gc-6875" - Jun 12 22:11:23.457: INFO: Deleting pod "simpletest-rc-to-be-deleted-92cs8" in namespace "gc-6875" - Jun 12 22:11:23.673: INFO: Deleting pod "simpletest-rc-to-be-deleted-94s8t" in namespace "gc-6875" - Jun 12 22:11:23.718: INFO: Deleting pod "simpletest-rc-to-be-deleted-95ckd" in namespace "gc-6875" - Jun 12 22:11:23.757: INFO: Deleting pod "simpletest-rc-to-be-deleted-99vff" in namespace "gc-6875" - Jun 12 22:11:23.809: INFO: Deleting pod "simpletest-rc-to-be-deleted-9b8xx" in namespace "gc-6875" - Jun 12 22:11:23.848: INFO: Deleting pod "simpletest-rc-to-be-deleted-9cj9f" in namespace "gc-6875" - Jun 12 22:11:23.882: INFO: Deleting pod "simpletest-rc-to-be-deleted-9ns5h" in namespace "gc-6875" - Jun 12 22:11:23.916: INFO: Deleting pod "simpletest-rc-to-be-deleted-bdhmc" in namespace "gc-6875" - Jun 12 22:11:23.965: INFO: Deleting pod "simpletest-rc-to-be-deleted-bgxnr" in namespace "gc-6875" - Jun 12 22:11:24.032: INFO: Deleting pod "simpletest-rc-to-be-deleted-c7fln" in namespace "gc-6875" - Jun 12 22:11:24.165: INFO: Deleting pod "simpletest-rc-to-be-deleted-ckdwm" in namespace "gc-6875" - Jun 12 22:11:24.271: INFO: Deleting pod "simpletest-rc-to-be-deleted-cztr5" in namespace "gc-6875" - Jun 12 22:11:24.326: INFO: Deleting pod "simpletest-rc-to-be-deleted-czw6t" in namespace "gc-6875" - Jun 12 22:11:24.366: INFO: Deleting pod "simpletest-rc-to-be-deleted-ddmtb" in namespace "gc-6875" - Jun 12 22:11:24.457: INFO: Deleting pod "simpletest-rc-to-be-deleted-dhrm5" in namespace "gc-6875" - Jun 12 22:11:24.489: INFO: Deleting pod "simpletest-rc-to-be-deleted-dkv22" in namespace "gc-6875" - Jun 12 22:11:24.540: INFO: Deleting pod "simpletest-rc-to-be-deleted-g2hmw" in namespace "gc-6875" - Jun 12 22:11:24.582: INFO: Deleting pod "simpletest-rc-to-be-deleted-gb8p5" in namespace "gc-6875" - Jun 12 22:11:24.624: INFO: Deleting pod "simpletest-rc-to-be-deleted-gbjck" in namespace "gc-6875" - Jun 12 22:11:24.659: INFO: Deleting pod "simpletest-rc-to-be-deleted-gvk4n" in namespace "gc-6875" - Jun 12 22:11:24.697: INFO: Deleting pod "simpletest-rc-to-be-deleted-hwtqd" in namespace "gc-6875" - Jun 12 22:11:24.738: INFO: Deleting pod "simpletest-rc-to-be-deleted-j5ckx" in namespace "gc-6875" - Jun 12 22:11:24.817: INFO: Deleting pod "simpletest-rc-to-be-deleted-jgxfd" in namespace "gc-6875" - Jun 12 22:11:24.888: INFO: Deleting pod "simpletest-rc-to-be-deleted-jvpqx" in namespace "gc-6875" - [AfterEach] [sig-api-machinery] Garbage collector + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 + STEP: Creating a pod to test downward API volume plugin 07/27/23 02:44:22.74 + Jul 27 02:44:23.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e" in namespace "downward-api-7690" to be "Succeeded or Failed" + Jul 27 02:44:23.794: INFO: Pod "downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.178442ms + Jul 27 02:44:25.805: INFO: Pod "downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027648138s + Jul 27 02:44:27.807: INFO: Pod "downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028958444s + STEP: Saw pod success 07/27/23 02:44:27.807 + Jul 27 02:44:27.807: INFO: Pod "downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e" satisfied condition "Succeeded or Failed" + Jul 27 02:44:27.819: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e container client-container: + STEP: delete the pod 07/27/23 02:44:27.858 + Jul 27 02:44:27.898: INFO: Waiting for pod downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e to disappear + Jul 27 02:44:27.913: INFO: Pod downwardapi-volume-090dbb34-9c1d-45b8-9506-06797ca84e7e no longer exists + [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 - Jun 12 22:11:24.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + Jul 27 02:44:27.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 - STEP: Destroying namespace "gc-6875" for this suite. 06/12/23 22:11:24.967 + STEP: Destroying namespace "downward-api-7690" for this suite. 
07/27/23 02:44:27.932 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - should be able to convert from CR v1 to CR v2 [Conformance] - test/e2e/apimachinery/crd_conversion_webhook.go:149 -[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 +[BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:11:25.008 -Jun 12 22:11:25.009: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-webhook 06/12/23 22:11:25.014 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:25.097 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:25.115 -[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:44:27.959 +Jul 27 02:44:27.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:44:27.96 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:28.011 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:28.022 +[BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/crd_conversion_webhook.go:128 -STEP: Setting up server cert 06/12/23 22:11:25.138 -STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 06/12/23 22:11:27.313 -STEP: Deploying the custom resource conversion webhook pod 06/12/23 22:11:27.346 -STEP: Wait for the deployment to be ready 06/12/23 22:11:27.386 -Jun 12 22:11:27.441: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set -Jun 12 22:11:29.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:11:31.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:11:33.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:11:35.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:11:37.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 22:11:39.593 -STEP: Verifying the service has paired with the endpoint 06/12/23 22:11:39.642 -Jun 12 22:11:40.642: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 -[It] should be able to convert from CR v1 to CR v2 [Conformance] - test/e2e/apimachinery/crd_conversion_webhook.go:149 -Jun 12 22:11:40.694: INFO: >>> kubeConfig: 
/tmp/kubeconfig-1249129573 -STEP: Creating a v1 custom resource 06/12/23 22:11:43.629 -STEP: v2 custom resource should be converted 06/12/23 22:11:43.65 -[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 +STEP: Creating projection with secret that has name projected-secret-test-2d75a43f-4edd-4049-9ba2-a8a969e98b8d 07/27/23 02:44:28.033 +STEP: Creating a pod to test consume secrets 07/27/23 02:44:28.054 +Jul 27 02:44:28.110: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e" in namespace "projected-9871" to be "Succeeded or Failed" +Jul 27 02:44:28.126: INFO: Pod "pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.733711ms +Jul 27 02:44:30.145: INFO: Pod "pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034513526s +Jul 27 02:44:32.138: INFO: Pod "pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027573485s +STEP: Saw pod success 07/27/23 02:44:32.138 +Jul 27 02:44:32.138: INFO: Pod "pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e" satisfied condition "Succeeded or Failed" +Jul 27 02:44:32.156: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e container projected-secret-volume-test: +STEP: delete the pod 07/27/23 02:44:32.189 +Jul 27 02:44:32.224: INFO: Waiting for pod pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e to disappear +Jul 27 02:44:32.234: INFO: Pod pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e no longer exists +[AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 -Jun 12 22:11:44.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/crd_conversion_webhook.go:139 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +Jul 27 02:44:32.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 -STEP: Destroying namespace "crd-webhook-4594" for this suite. 06/12/23 22:11:44.637 +STEP: Destroying namespace "projected-9871" for this suite. 
07/27/23 02:44:32.259 ------------------------------ -• [SLOW TEST] [19.702 seconds] -[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - should be able to convert from CR v1 to CR v2 [Conformance] - test/e2e/apimachinery/crd_conversion_webhook.go:149 +• [4.321 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:11:25.008 - Jun 12 22:11:25.009: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-webhook 06/12/23 22:11:25.014 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:25.097 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:25.115 - [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:44:27.959 + Jul 27 02:44:27.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:44:27.96 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:28.011 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:28.022 + [BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/crd_conversion_webhook.go:128 - STEP: Setting up server cert 06/12/23 22:11:25.138 - STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 06/12/23 22:11:27.313 - STEP: Deploying the custom resource conversion webhook pod 06/12/23 22:11:27.346 - STEP: Wait for the deployment to be ready 06/12/23 22:11:27.386 - Jun 12 22:11:27.441: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set - Jun 12 22:11:29.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:11:31.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, 
time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:11:33.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:11:35.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:11:37.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 22:11:39.593 - STEP: Verifying the service has paired with the endpoint 06/12/23 22:11:39.642 - Jun 12 22:11:40.642: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 - [It] should be able to convert from CR v1 to CR v2 [Conformance] - 
test/e2e/apimachinery/crd_conversion_webhook.go:149 - Jun 12 22:11:40.694: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Creating a v1 custom resource 06/12/23 22:11:43.629 - STEP: v2 custom resource should be converted 06/12/23 22:11:43.65 - [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 + STEP: Creating projection with secret that has name projected-secret-test-2d75a43f-4edd-4049-9ba2-a8a969e98b8d 07/27/23 02:44:28.033 + STEP: Creating a pod to test consume secrets 07/27/23 02:44:28.054 + Jul 27 02:44:28.110: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e" in namespace "projected-9871" to be "Succeeded or Failed" + Jul 27 02:44:28.126: INFO: Pod "pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.733711ms + Jul 27 02:44:30.145: INFO: Pod "pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034513526s + Jul 27 02:44:32.138: INFO: Pod "pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027573485s + STEP: Saw pod success 07/27/23 02:44:32.138 + Jul 27 02:44:32.138: INFO: Pod "pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e" satisfied condition "Succeeded or Failed" + Jul 27 02:44:32.156: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e container projected-secret-volume-test: + STEP: delete the pod 07/27/23 02:44:32.189 + Jul 27 02:44:32.224: INFO: Waiting for pod pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e to disappear + Jul 27 02:44:32.234: INFO: Pod pod-projected-secrets-c079da60-5331-40f0-a620-8ca03842b67e no longer exists + [AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 - Jun 12 22:11:44.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/crd_conversion_webhook.go:139 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + Jul 27 02:44:32.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 - STEP: Destroying namespace "crd-webhook-4594" for this suite. 06/12/23 22:11:44.637 + STEP: Destroying namespace "projected-9871" for this suite. 
07/27/23 02:44:32.259 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +SSSS ------------------------------ -[sig-storage] Projected downwardAPI - should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:84 -[BeforeEach] [sig-storage] Projected downwardAPI +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 +[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:11:44.712 -Jun 12 22:11:44.712: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:11:44.736 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:44.829 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:44.863 -[BeforeEach] [sig-storage] Projected downwardAPI +STEP: Creating a kubernetes client 07/27/23 02:44:32.281 +Jul 27 02:44:32.281: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename daemonsets 07/27/23 02:44:32.282 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:32.324 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:32.347 +[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 -[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:84 -STEP: Creating a pod to test downward API volume plugin 06/12/23 22:11:44.893 -Jun 12 22:11:44.934: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6" in namespace "projected-9490" to be "Succeeded or Failed" -Jun 12 22:11:44.955: INFO: Pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.069218ms -Jun 12 22:11:46.970: INFO: Pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036664271s -Jun 12 22:11:48.967: INFO: Pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033846752s -Jun 12 22:11:50.969: INFO: Pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.035677445s -STEP: Saw pod success 06/12/23 22:11:50.97 -Jun 12 22:11:50.970: INFO: Pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6" satisfied condition "Succeeded or Failed" -Jun 12 22:11:51.006: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6 container client-container: -STEP: delete the pod 06/12/23 22:11:51.035 -Jun 12 22:11:51.076: INFO: Waiting for pod downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6 to disappear -Jun 12 22:11:51.088: INFO: Pod downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6 no longer exists -[AfterEach] [sig-storage] Projected downwardAPI +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 +Jul 27 02:44:32.493: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. 07/27/23 02:44:32.51 +Jul 27 02:44:32.521: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 02:44:32.521: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Change node label to blue, check that daemon pod is launched. 07/27/23 02:44:32.521 +Jul 27 02:44:32.620: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 02:44:32.620: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 02:44:33.777: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 02:44:33.777: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 02:44:34.797: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Jul 27 02:44:34.797: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +STEP: Update the node label to green, and wait for daemons to be unscheduled 07/27/23 02:44:34.826 +Jul 27 02:44:34.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 02:44:34.918: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 07/27/23 02:44:34.918 +Jul 27 02:44:34.993: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 02:44:34.993: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 02:44:36.006: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 02:44:36.006: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 02:44:37.005: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 02:44:37.005: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 02:44:38.004: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 02:44:38.004: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 02:44:39.005: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 02:44:39.005: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 02:44:40.005: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Jul 27 02:44:40.005: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet 
"daemon-set" 07/27/23 02:44:40.029 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5847, will wait for the garbage collector to delete the pods 07/27/23 02:44:40.029 +Jul 27 02:44:40.113: INFO: Deleting DaemonSet.extensions daemon-set took: 21.353314ms +Jul 27 02:44:40.213: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.206691ms +Jul 27 02:44:42.525: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 02:44:42.525: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jul 27 02:44:42.539: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"117092"},"items":null} + +Jul 27 02:44:42.554: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"117092"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 22:11:51.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +Jul 27 02:44:42.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "projected-9490" for this suite. 06/12/23 22:11:51.136 +STEP: Destroying namespace "daemonsets-5847" for this suite. 07/27/23 02:44:42.695 ------------------------------ -• [SLOW TEST] [6.467 seconds] -[sig-storage] Projected downwardAPI -test/e2e/common/storage/framework.go:23 - should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:84 +• [SLOW TEST] [10.443 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:11:44.712 - Jun 12 22:11:44.712: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:11:44.736 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:44.829 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:44.863 - [BeforeEach] [sig-storage] Projected downwardAPI + STEP: Creating a kubernetes client 07/27/23 02:44:32.281 + Jul 27 02:44:32.281: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename daemonsets 07/27/23 02:44:32.282 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:32.324 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:32.347 + [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 - [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:84 - STEP: Creating a pod to test 
downward API volume plugin 06/12/23 22:11:44.893 - Jun 12 22:11:44.934: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6" in namespace "projected-9490" to be "Succeeded or Failed" - Jun 12 22:11:44.955: INFO: Pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.069218ms - Jun 12 22:11:46.970: INFO: Pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036664271s - Jun 12 22:11:48.967: INFO: Pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033846752s - Jun 12 22:11:50.969: INFO: Pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035677445s - STEP: Saw pod success 06/12/23 22:11:50.97 - Jun 12 22:11:50.970: INFO: Pod "downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6" satisfied condition "Succeeded or Failed" - Jun 12 22:11:51.006: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6 container client-container: - STEP: delete the pod 06/12/23 22:11:51.035 - Jun 12 22:11:51.076: INFO: Waiting for pod downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6 to disappear - Jun 12 22:11:51.088: INFO: Pod downwardapi-volume-68218788-38d0-4df2-a4ce-150fe5bea6a6 no longer exists - [AfterEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 + Jul 27 02:44:32.493: INFO: Creating daemon "daemon-set" with a node selector + STEP: Initially, daemon pods should not be running on any nodes. 07/27/23 02:44:32.51 + Jul 27 02:44:32.521: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 02:44:32.521: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + STEP: Change node label to blue, check that daemon pod is launched. 
07/27/23 02:44:32.521 + Jul 27 02:44:32.620: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 02:44:32.620: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 02:44:33.777: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 02:44:33.777: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 02:44:34.797: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Jul 27 02:44:34.797: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set + STEP: Update the node label to green, and wait for daemons to be unscheduled 07/27/23 02:44:34.826 + Jul 27 02:44:34.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 02:44:34.918: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 07/27/23 02:44:34.918 + Jul 27 02:44:34.993: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 02:44:34.993: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 02:44:36.006: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 02:44:36.006: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 02:44:37.005: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 02:44:37.005: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 02:44:38.004: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 02:44:38.004: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 02:44:39.005: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 02:44:39.005: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 02:44:40.005: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Jul 27 02:44:40.005: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 07/27/23 02:44:40.029 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5847, will wait for the garbage collector to delete the pods 07/27/23 02:44:40.029 + Jul 27 02:44:40.113: INFO: Deleting DaemonSet.extensions daemon-set took: 21.353314ms + Jul 27 02:44:40.213: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.206691ms + Jul 27 02:44:42.525: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 02:44:42.525: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jul 27 02:44:42.539: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"117092"},"items":null} + + Jul 27 02:44:42.554: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"117092"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 22:11:51.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + Jul 27 02:44:42.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] 
test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "projected-9490" for this suite. 06/12/23 22:11:51.136 + STEP: Destroying namespace "daemonsets-5847" for this suite. 07/27/23 02:44:42.695 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] DisruptionController - should update/patch PodDisruptionBudget status [Conformance] - test/e2e/apps/disruption.go:164 -[BeforeEach] [sig-apps] DisruptionController +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 +[BeforeEach] [sig-node] Container Lifecycle Hook set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:11:51.18 -Jun 12 22:11:51.180: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename disruption 06/12/23 22:11:51.182 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:51.26 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:51.276 -[BeforeEach] [sig-apps] DisruptionController +STEP: Creating a kubernetes client 07/27/23 02:44:42.725 +Jul 27 02:44:42.725: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-lifecycle-hook 07/27/23 02:44:42.726 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:42.791 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:42.803 +[BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] DisruptionController - test/e2e/apps/disruption.go:72 -[It] should update/patch PodDisruptionBudget status [Conformance] - test/e2e/apps/disruption.go:164 -STEP: Waiting for the pdb to be processed 06/12/23 22:11:51.303 -STEP: Updating PodDisruptionBudget status 06/12/23 22:11:53.327 -STEP: Waiting for all pods to be running 06/12/23 22:11:53.354 -Jun 12 22:11:53.369: INFO: running pods: 0 < 1 -Jun 12 22:11:55.382: INFO: running pods: 0 < 1 -STEP: locating a running pod 06/12/23 22:11:57.382 -STEP: Waiting for the pdb to be processed 06/12/23 22:11:57.421 -STEP: Patching PodDisruptionBudget status 06/12/23 22:11:57.441 -STEP: Waiting for the pdb to be processed 06/12/23 22:11:57.466 -[AfterEach] [sig-apps] DisruptionController +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 07/27/23 02:44:42.834 +Jul 27 02:44:42.870: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-240" to be "running and ready" +Jul 27 02:44:42.903: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 32.376087ms +Jul 27 02:44:42.903: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:44:44.917: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047087326s +Jul 27 02:44:44.917: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:44:46.919: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 4.048244765s +Jul 27 02:44:46.919: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Jul 27 02:44:46.919: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 +STEP: create the pod with lifecycle hook 07/27/23 02:44:46.944 +Jul 27 02:44:46.965: INFO: Waiting up to 5m0s for pod "pod-with-prestop-http-hook" in namespace "container-lifecycle-hook-240" to be "running and ready" +Jul 27 02:44:46.980: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 14.827741ms +Jul 27 02:44:46.980: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:44:49.019: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053958051s +Jul 27 02:44:49.019: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:44:50.991: INFO: Pod "pod-with-prestop-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 4.025848705s +Jul 27 02:44:50.991: INFO: The phase of Pod pod-with-prestop-http-hook is Running (Ready = true) +Jul 27 02:44:50.991: INFO: Pod "pod-with-prestop-http-hook" satisfied condition "running and ready" +STEP: delete the pod with lifecycle hook 07/27/23 02:44:51.002 +Jul 27 02:44:51.022: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jul 27 02:44:51.033: INFO: Pod pod-with-prestop-http-hook still exists +Jul 27 02:44:53.034: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jul 27 02:44:53.045: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook 07/27/23 02:44:53.045 +[AfterEach] [sig-node] Container Lifecycle Hook test/e2e/framework/node/init/init.go:32 -Jun 12 22:11:57.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] DisruptionController +Jul 27 02:44:53.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] DisruptionController +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] DisruptionController +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook tear down framework | framework.go:193 -STEP: Destroying namespace "disruption-3145" for this suite. 06/12/23 22:11:57.506 +STEP: Destroying namespace "container-lifecycle-hook-240" for this suite. 
07/27/23 02:44:53.121 ------------------------------ -• [SLOW TEST] [6.350 seconds] -[sig-apps] DisruptionController -test/e2e/apps/framework.go:23 - should update/patch PodDisruptionBudget status [Conformance] - test/e2e/apps/disruption.go:164 +• [SLOW TEST] [10.418 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] DisruptionController + [BeforeEach] [sig-node] Container Lifecycle Hook set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:11:51.18 - Jun 12 22:11:51.180: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename disruption 06/12/23 22:11:51.182 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:51.26 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:51.276 - [BeforeEach] [sig-apps] DisruptionController + STEP: Creating a kubernetes client 07/27/23 02:44:42.725 + Jul 27 02:44:42.725: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-lifecycle-hook 07/27/23 02:44:42.726 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:42.791 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:42.803 + [BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] DisruptionController - test/e2e/apps/disruption.go:72 - [It] should update/patch PodDisruptionBudget status [Conformance] - test/e2e/apps/disruption.go:164 - STEP: Waiting for the pdb to be processed 06/12/23 22:11:51.303 - STEP: Updating PodDisruptionBudget status 06/12/23 22:11:53.327 - STEP: Waiting for all pods to be running 06/12/23 22:11:53.354 - Jun 12 22:11:53.369: INFO: running pods: 0 < 1 - Jun 12 22:11:55.382: INFO: running pods: 0 < 1 - STEP: locating a running pod 06/12/23 22:11:57.382 - STEP: Waiting for the pdb to be processed 06/12/23 22:11:57.421 - STEP: Patching PodDisruptionBudget status 06/12/23 22:11:57.441 - STEP: Waiting for the pdb to be processed 06/12/23 22:11:57.466 - [AfterEach] [sig-apps] DisruptionController + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 07/27/23 02:44:42.834 + Jul 27 02:44:42.870: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-240" to be "running and ready" + Jul 27 02:44:42.903: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 32.376087ms + Jul 27 02:44:42.903: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:44:44.917: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047087326s + Jul 27 02:44:44.917: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:44:46.919: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.048244765s + Jul 27 02:44:46.919: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Jul 27 02:44:46.919: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 + STEP: create the pod with lifecycle hook 07/27/23 02:44:46.944 + Jul 27 02:44:46.965: INFO: Waiting up to 5m0s for pod "pod-with-prestop-http-hook" in namespace "container-lifecycle-hook-240" to be "running and ready" + Jul 27 02:44:46.980: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 14.827741ms + Jul 27 02:44:46.980: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:44:49.019: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053958051s + Jul 27 02:44:49.019: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:44:50.991: INFO: Pod "pod-with-prestop-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 4.025848705s + Jul 27 02:44:50.991: INFO: The phase of Pod pod-with-prestop-http-hook is Running (Ready = true) + Jul 27 02:44:50.991: INFO: Pod "pod-with-prestop-http-hook" satisfied condition "running and ready" + STEP: delete the pod with lifecycle hook 07/27/23 02:44:51.002 + Jul 27 02:44:51.022: INFO: Waiting for pod pod-with-prestop-http-hook to disappear + Jul 27 02:44:51.033: INFO: Pod pod-with-prestop-http-hook still exists + Jul 27 02:44:53.034: INFO: Waiting for pod pod-with-prestop-http-hook to disappear + Jul 27 02:44:53.045: INFO: Pod pod-with-prestop-http-hook no longer exists + STEP: check prestop hook 07/27/23 02:44:53.045 + [AfterEach] [sig-node] Container Lifecycle Hook test/e2e/framework/node/init/init.go:32 - Jun 12 22:11:57.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] DisruptionController + Jul 27 02:44:53.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] DisruptionController + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] DisruptionController + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook tear down framework | framework.go:193 - STEP: Destroying namespace "disruption-3145" for this suite. 06/12/23 22:11:57.506 + STEP: Destroying namespace "container-lifecycle-hook-240" for this suite. 
07/27/23 02:44:53.121 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +SSS ------------------------------ -[sig-apps] Deployment - Deployment should have a working scale subresource [Conformance] - test/e2e/apps/deployment.go:150 -[BeforeEach] [sig-apps] Deployment - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:11:57.536 -Jun 12 22:11:57.536: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename deployment 06/12/23 22:11:57.541 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:57.599 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:57.615 -[BeforeEach] [sig-apps] Deployment - test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 -[It] Deployment should have a working scale subresource [Conformance] - test/e2e/apps/deployment.go:150 -Jun 12 22:11:57.628: INFO: Creating simple deployment test-new-deployment -Jun 12 22:11:57.682: INFO: deployment "test-new-deployment" doesn't have the required revision set -Jun 12 22:11:59.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-7f5969cbc7\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: getting scale subresource 06/12/23 22:12:01.745 -STEP: updating a scale subresource 06/12/23 22:12:01.758 -STEP: verifying the deployment Spec.Replicas was modified 06/12/23 22:12:01.776 -STEP: Patch a scale subresource 06/12/23 22:12:01.789 -[AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 -Jun 12 22:12:01.877: INFO: Deployment "test-new-deployment": -&Deployment{ObjectMeta:{test-new-deployment deployment-4912 e7e36304-68dd-4c68-9fdd-9c8ec7d24db1 136070 3 2023-06-12 22:11:57 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-06-12 22:11:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00672ac98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-7f5969cbc7" has successfully progressed.,LastUpdateTime:2023-06-12 22:12:00 +0000 UTC,LastTransitionTime:2023-06-12 22:11:57 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-06-12 22:12:01 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} - -Jun 12 22:12:01.888: INFO: New ReplicaSet "test-new-deployment-7f5969cbc7" of Deployment "test-new-deployment": -&ReplicaSet{ObjectMeta:{test-new-deployment-7f5969cbc7 deployment-4912 ec94ab64-e7c9-4603-98ba-bf46bad626ef 136073 3 2023-06-12 22:11:57 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment e7e36304-68dd-4c68-9fdd-9c8ec7d24db1 0xc00672b107 0xc00672b108}] [] [{kube-controller-manager Update apps/v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e7e36304-68dd-4c68-9fdd-9c8ec7d24db1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00672b198 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} -Jun 12 22:12:01.913: INFO: Pod "test-new-deployment-7f5969cbc7-54nkk" is not available: -&Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-54nkk test-new-deployment-7f5969cbc7- deployment-4912 8a9bebd7-a074-48af-9366-b3c787fe52f6 136072 0 2023-06-12 22:12:01 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ec94ab64-e7c9-4603-98ba-bf46bad626ef 0xc00672b5d7 0xc00672b5d8}] [] [{kube-controller-manager Update v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec94ab64-e7c9-4603-98ba-bf46bad626ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p8cm4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p8cm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c63,c17,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},Imag
ePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-26fh4,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:,StartTime:2023-06-12 22:12:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 22:12:01.914: INFO: Pod "test-new-deployment-7f5969cbc7-7gdtq" is not available: -&Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-7gdtq test-new-deployment-7f5969cbc7- deployment-4912 57017741-8520-4cbc-b8e2-e33ae2031f5e 136079 0 2023-06-12 22:12:01 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ec94ab64-e7c9-4603-98ba-bf46bad626ef 0xc00672b7f7 0xc00672b7f8}] [] [{kube-controller-manager Update v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec94ab64-e7c9-4603-98ba-bf46bad626ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-77rfb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-77rfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c63,c17,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-26fh4,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityCla
ssName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 22:12:01.916: INFO: Pod "test-new-deployment-7f5969cbc7-c6ttl" is available: -&Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-c6ttl test-new-deployment-7f5969cbc7- deployment-4912 c674e022-b2f4-4a69-a64c-4a34c1712166 136050 0 2023-06-12 22:11:57 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:eaf3ad464102cec5ff72267a6822d3c31a00dbe10b456c5df9dfd82424c9b4a4 cni.projectcalico.org/podIP:172.30.224.34/32 cni.projectcalico.org/podIPs:172.30.224.34/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.34" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ec94ab64-e7c9-4603-98ba-bf46bad626ef 0xc00672b9a7 0xc00672b9a8}] [] [{kube-controller-manager Update v1 2023-06-12 22:11:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec94ab64-e7c9-4603-98ba-bf46bad626ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 22:11:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 22:11:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 22:12:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p78dk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p78dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c63,c17,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,
ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:11:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:11:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.34,StartTime:2023-06-12 22:11:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 22:11:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://c76e9f69e068d70711ee369b368b0d283f7bd592576dc9a781ee1a0b4bbf88a0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 22:12:01.916: INFO: Pod "test-new-deployment-7f5969cbc7-wrw8v" is not available: -&Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-wrw8v test-new-deployment-7f5969cbc7- deployment-4912 1c036f18-4470-44ff-b222-44f560f086cc 136077 0 2023-06-12 22:12:01 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ec94ab64-e7c9-4603-98ba-bf46bad626ef 0xc00672bc17 0xc00672bc18}] [] [{kube-controller-manager Update v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec94ab64-e7c9-4603-98ba-bf46bad626ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7989d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7989d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.116,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c63,c17,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-26fh4,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -[AfterEach] [sig-apps] Deployment +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:323 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 02:44:53.144 +Jul 27 02:44:53.144: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 02:44:53.145 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:53.192 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:53.204 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 02:44:53.263 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:44:53.598 +STEP: Deploying the webhook pod 07/27/23 02:44:53.635 +STEP: Wait for the deployment to be ready 07/27/23 02:44:53.672 +Jul 27 02:44:53.698: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 02:44:55.732 +STEP: Verifying the service has paired with the endpoint 07/27/23 02:44:55.791 +Jul 27 02:44:56.791: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:323 +Jul 27 02:44:56.804: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3337-crds.webhook.example.com via the AdmissionRegistration API 07/27/23 02:44:57.367 +STEP: Creating a custom resource while v1 is storage version 07/27/23 02:44:57.414 +STEP: Patching Custom Resource Definition to set v2 as storage 07/27/23 02:44:59.637 +STEP: Patching the custom resource while v2 is storage version 07/27/23 02:44:59.656 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:12:01.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Deployment +Jul 27 02:45:00.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "deployment-4912" for this suite. 06/12/23 22:12:01.939 +STEP: Destroying namespace "webhook-1002" for this suite. 07/27/23 02:45:00.562 +STEP: Destroying namespace "webhook-1002-markers" for this suite. 07/27/23 02:45:00.589 ------------------------------ -• [4.429 seconds] -[sig-apps] Deployment -test/e2e/apps/framework.go:23 - Deployment should have a working scale subresource [Conformance] - test/e2e/apps/deployment.go:150 +• [SLOW TEST] [7.486 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:323 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Deployment + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:11:57.536 - Jun 12 22:11:57.536: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename deployment 06/12/23 22:11:57.541 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:11:57.599 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:11:57.615 - [BeforeEach] [sig-apps] Deployment + STEP: Creating a kubernetes client 07/27/23 02:44:53.144 + Jul 27 02:44:53.144: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 02:44:53.145 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:44:53.192 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:44:53.204 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 - [It] Deployment should have a working scale subresource [Conformance] - test/e2e/apps/deployment.go:150 - Jun 12 22:11:57.628: INFO: Creating simple deployment test-new-deployment - Jun 12 22:11:57.682: INFO: deployment "test-new-deployment" doesn't have the required revision set - Jun 12 22:11:59.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 11, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 11, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-7f5969cbc7\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: getting scale subresource 06/12/23 22:12:01.745 
- STEP: updating a scale subresource 06/12/23 22:12:01.758 - STEP: verifying the deployment Spec.Replicas was modified 06/12/23 22:12:01.776 - STEP: Patch a scale subresource 06/12/23 22:12:01.789 - [AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 - Jun 12 22:12:01.877: INFO: Deployment "test-new-deployment": - &Deployment{ObjectMeta:{test-new-deployment deployment-4912 e7e36304-68dd-4c68-9fdd-9c8ec7d24db1 136070 3 2023-06-12 22:11:57 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-06-12 22:11:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00672ac98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-7f5969cbc7" has successfully progressed.,LastUpdateTime:2023-06-12 22:12:00 +0000 UTC,LastTransitionTime:2023-06-12 22:11:57 +0000 
UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-06-12 22:12:01 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} - - Jun 12 22:12:01.888: INFO: New ReplicaSet "test-new-deployment-7f5969cbc7" of Deployment "test-new-deployment": - &ReplicaSet{ObjectMeta:{test-new-deployment-7f5969cbc7 deployment-4912 ec94ab64-e7c9-4603-98ba-bf46bad626ef 136073 3 2023-06-12 22:11:57 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment e7e36304-68dd-4c68-9fdd-9c8ec7d24db1 0xc00672b107 0xc00672b108}] [] [{kube-controller-manager Update apps/v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e7e36304-68dd-4c68-9fdd-9c8ec7d24db1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00672b198 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} - Jun 12 22:12:01.913: INFO: Pod "test-new-deployment-7f5969cbc7-54nkk" is not available: - &Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-54nkk test-new-deployment-7f5969cbc7- deployment-4912 8a9bebd7-a074-48af-9366-b3c787fe52f6 136072 0 2023-06-12 22:12:01 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ec94ab64-e7c9-4603-98ba-bf46bad626ef 0xc00672b5d7 0xc00672b5d8}] [] 
[{kube-controller-manager Update v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec94ab64-e7c9-4603-98ba-bf46bad626ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p8cm4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p8cm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:fal
se,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.112,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c63,c17,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-26fh4,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.112,PodIP:,StartTime:2023-06-12 22:12:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 22:12:01.914: INFO: Pod "test-new-deployment-7f5969cbc7-7gdtq" is not available: - &Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-7gdtq test-new-deployment-7f5969cbc7- deployment-4912 57017741-8520-4cbc-b8e2-e33ae2031f5e 136079 0 2023-06-12 22:12:01 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ec94ab64-e7c9-4603-98ba-bf46bad626ef 0xc00672b7f7 0xc00672b7f8}] [] [{kube-controller-manager Update v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec94ab64-e7c9-4603-98ba-bf46bad626ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-77rfb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-77rfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c63,c17,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChange
Policy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-26fh4,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 22:12:01.916: INFO: Pod "test-new-deployment-7f5969cbc7-c6ttl" is available: - &Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-c6ttl test-new-deployment-7f5969cbc7- deployment-4912 c674e022-b2f4-4a69-a64c-4a34c1712166 136050 0 2023-06-12 22:11:57 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[cni.projectcalico.org/containerID:eaf3ad464102cec5ff72267a6822d3c31a00dbe10b456c5df9dfd82424c9b4a4 cni.projectcalico.org/podIP:172.30.224.34/32 cni.projectcalico.org/podIPs:172.30.224.34/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.34" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ec94ab64-e7c9-4603-98ba-bf46bad626ef 0xc00672b9a7 0xc00672b9a8}] [] [{kube-controller-manager Update v1 2023-06-12 22:11:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec94ab64-e7c9-4603-98ba-bf46bad626ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 22:11:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 22:11:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 22:12:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p78dk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p78dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c63,c17,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChang
ePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:11:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:11:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.34,StartTime:2023-06-12 22:11:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 22:11:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://c76e9f69e068d70711ee369b368b0d283f7bd592576dc9a781ee1a0b4bbf88a0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 22:12:01.916: INFO: Pod "test-new-deployment-7f5969cbc7-wrw8v" is not available: - &Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-wrw8v test-new-deployment-7f5969cbc7- deployment-4912 1c036f18-4470-44ff-b222-44f560f086cc 136077 0 2023-06-12 22:12:01 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ec94ab64-e7c9-4603-98ba-bf46bad626ef 0xc00672bc17 0xc00672bc18}] [] [{kube-controller-manager Update v1 2023-06-12 22:12:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec94ab64-e7c9-4603-98ba-bf46bad626ef\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7989d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7989d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.116,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c63,c17,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-26fh4,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{
},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - [AfterEach] [sig-apps] Deployment + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 02:44:53.263 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:44:53.598 + STEP: Deploying the webhook pod 07/27/23 02:44:53.635 + STEP: Wait for the deployment to be ready 07/27/23 02:44:53.672 + Jul 27 02:44:53.698: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 02:44:55.732 + STEP: Verifying the service has paired with the endpoint 07/27/23 02:44:55.791 + Jul 27 02:44:56.791: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:323 + Jul 27 02:44:56.804: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3337-crds.webhook.example.com via the AdmissionRegistration API 07/27/23 02:44:57.367 + STEP: Creating a custom resource while v1 is storage version 07/27/23 02:44:57.414 + STEP: Patching Custom Resource Definition to set v2 as storage 07/27/23 02:44:59.637 + STEP: Patching the custom resource while v2 is storage version 07/27/23 02:44:59.656 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:12:01.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Deployment + Jul 27 02:45:00.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "deployment-4912" for this suite. 06/12/23 22:12:01.939 + STEP: Destroying namespace "webhook-1002" for this suite. 07/27/23 02:45:00.562 + STEP: Destroying namespace "webhook-1002-markers" for this suite. 
07/27/23 02:45:00.589 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Projected configMap - should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:89 -[BeforeEach] [sig-storage] Projected configMap +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 +[BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:12:01.97 -Jun 12 22:12:01.971: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:12:01.974 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:02.035 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:02.057 -[BeforeEach] [sig-storage] Projected configMap +STEP: Creating a kubernetes client 07/27/23 02:45:00.63 +Jul 27 02:45:00.630: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:45:00.631 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:00.698 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:00.713 +[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:89 -STEP: Creating configMap with name projected-configmap-test-volume-map-8a88640f-c94e-4c89-94a0-0cd5f07bb28e 06/12/23 22:12:02.078 -STEP: Creating a pod to test consume configMaps 06/12/23 22:12:02.104 -Jun 12 22:12:02.134: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368" in namespace "projected-5259" to be "Succeeded or Failed" -Jun 12 22:12:02.150: INFO: Pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368": Phase="Pending", Reason="", readiness=false. Elapsed: 16.32727ms -Jun 12 22:12:04.167: INFO: Pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032719624s -Jun 12 22:12:06.165: INFO: Pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030911519s -Jun 12 22:12:08.215: INFO: Pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.081463589s -STEP: Saw pod success 06/12/23 22:12:08.215 -Jun 12 22:12:08.216: INFO: Pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368" satisfied condition "Succeeded or Failed" -Jun 12 22:12:08.245: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368 container agnhost-container: -STEP: delete the pod 06/12/23 22:12:08.316 -Jun 12 22:12:08.405: INFO: Waiting for pod pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368 to disappear -Jun 12 22:12:08.420: INFO: Pod pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368 no longer exists -[AfterEach] [sig-storage] Projected configMap +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 +STEP: Creating the pod 07/27/23 02:45:00.734 +W0727 02:45:00.793483 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:45:00.793: INFO: Waiting up to 5m0s for pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c" in namespace "downward-api-5717" to be "running and ready" +Jul 27 02:45:00.806: INFO: Pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.075157ms +Jul 27 02:45:00.806: INFO: The phase of Pod annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:45:02.828: INFO: Pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034678148s +Jul 27 02:45:02.828: INFO: The phase of Pod annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:45:04.820: INFO: Pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.027012585s +Jul 27 02:45:04.820: INFO: The phase of Pod annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c is Running (Ready = true) +Jul 27 02:45:04.820: INFO: Pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c" satisfied condition "running and ready" +Jul 27 02:45:05.408: INFO: Successfully updated pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c" +[AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 22:12:08.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected configMap +Jul 27 02:45:07.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "projected-5259" for this suite. 06/12/23 22:12:08.451 +STEP: Destroying namespace "downward-api-5717" for this suite. 07/27/23 02:45:07.512 ------------------------------ -• [SLOW TEST] [6.505 seconds] -[sig-storage] Projected configMap +• [SLOW TEST] [6.902 seconds] +[sig-storage] Downward API volume test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:89 + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected configMap + [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:12:01.97 - Jun 12 22:12:01.971: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:12:01.974 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:02.035 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:02.057 - [BeforeEach] [sig-storage] Projected configMap + STEP: Creating a kubernetes client 07/27/23 02:45:00.63 + Jul 27 02:45:00.630: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:45:00.631 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:00.698 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:00.713 + [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:89 - STEP: Creating configMap with name projected-configmap-test-volume-map-8a88640f-c94e-4c89-94a0-0cd5f07bb28e 06/12/23 22:12:02.078 - STEP: Creating a pod to test consume configMaps 06/12/23 22:12:02.104 - Jun 12 22:12:02.134: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368" in namespace "projected-5259" to be "Succeeded or Failed" - Jun 12 22:12:02.150: INFO: Pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.32727ms - Jun 12 22:12:04.167: INFO: Pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032719624s - Jun 12 22:12:06.165: INFO: Pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030911519s - Jun 12 22:12:08.215: INFO: Pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081463589s - STEP: Saw pod success 06/12/23 22:12:08.215 - Jun 12 22:12:08.216: INFO: Pod "pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368" satisfied condition "Succeeded or Failed" - Jun 12 22:12:08.245: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368 container agnhost-container: - STEP: delete the pod 06/12/23 22:12:08.316 - Jun 12 22:12:08.405: INFO: Waiting for pod pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368 to disappear - Jun 12 22:12:08.420: INFO: Pod pod-projected-configmaps-0f36ecbf-66b5-4cb3-b6e8-280a4a49b368 no longer exists - [AfterEach] [sig-storage] Projected configMap + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 + STEP: Creating the pod 07/27/23 02:45:00.734 + W0727 02:45:00.793483 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:45:00.793: INFO: Waiting up to 5m0s for pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c" in namespace "downward-api-5717" to be "running and ready" + Jul 27 02:45:00.806: INFO: Pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.075157ms + Jul 27 02:45:00.806: INFO: The phase of Pod annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:45:02.828: INFO: Pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034678148s + Jul 27 02:45:02.828: INFO: The phase of Pod annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:45:04.820: INFO: Pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.027012585s + Jul 27 02:45:04.820: INFO: The phase of Pod annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c is Running (Ready = true) + Jul 27 02:45:04.820: INFO: Pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c" satisfied condition "running and ready" + Jul 27 02:45:05.408: INFO: Successfully updated pod "annotationupdatef3597332-8012-41f4-965a-25c2aa74d59c" + [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 - Jun 12 22:12:08.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected configMap + Jul 27 02:45:07.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 - STEP: Destroying namespace "projected-5259" for this suite. 06/12/23 22:12:08.451 + STEP: Destroying namespace "downward-api-5717" for this suite. 07/27/23 02:45:07.512 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSSSSSSSSSSSS ------------------------------ -[sig-network] DNS - should provide DNS for the cluster [Conformance] - test/e2e/network/dns.go:50 -[BeforeEach] [sig-network] DNS +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:12:08.479 -Jun 12 22:12:08.479: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename dns 06/12/23 22:12:08.482 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:08.541 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:08.558 -[BeforeEach] [sig-network] DNS +STEP: Creating a kubernetes client 07/27/23 02:45:07.536 +Jul 27 02:45:07.536: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:45:07.537 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:07.591 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:07.603 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[It] should provide DNS for the cluster [Conformance] - test/e2e/network/dns.go:50 -STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done - 06/12/23 22:12:08.604 -STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done - 06/12/23 22:12:08.605 -STEP: creating a pod to probe DNS 06/12/23 22:12:08.605 -STEP: submitting the pod to kubernetes 06/12/23 22:12:08.605 -Jun 12 22:12:08.642: INFO: Waiting up to 15m0s for pod "dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9" in namespace "dns-4830" to be "running" -Jun 12 22:12:08.681: INFO: Pod "dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9": Phase="Pending", Reason="", readiness=false. Elapsed: 39.11015ms -Jun 12 22:12:10.695: INFO: Pod "dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053135723s -Jun 12 22:12:12.701: INFO: Pod "dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9": Phase="Running", Reason="", readiness=true. Elapsed: 4.058570668s -Jun 12 22:12:12.701: INFO: Pod "dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9" satisfied condition "running" -STEP: retrieving the pod 06/12/23 22:12:12.701 -STEP: looking for the results for each expected name from probers 06/12/23 22:12:12.716 -Jun 12 22:12:12.888: INFO: DNS probes using dns-4830/dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9 succeeded - -STEP: deleting the pod 06/12/23 22:12:12.888 -[AfterEach] [sig-network] DNS +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1572 +STEP: creating an pod 07/27/23 02:45:07.652 +Jul 27 02:45:07.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 run logs-generator --image=registry.k8s.io/e2e-test-images/agnhost:2.43 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Jul 27 02:45:07.739: INFO: stderr: "" +Jul 27 02:45:07.739: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 +STEP: Waiting for log generator to start. 07/27/23 02:45:07.739 +Jul 27 02:45:07.739: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Jul 27 02:45:07.739: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4927" to be "running and ready, or succeeded" +Jul 27 02:45:07.764: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 24.59398ms +Jul 27 02:45:07.764: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on '' to be 'Running' but was 'Pending' +Jul 27 02:45:09.777: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.037538365s +Jul 27 02:45:09.777: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Jul 27 02:45:09.777: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] +STEP: checking for a matching strings 07/27/23 02:45:09.777 +Jul 27 02:45:09.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator' +Jul 27 02:45:09.916: INFO: stderr: "" +Jul 27 02:45:09.916: INFO: stdout: "I0727 02:45:08.954201 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/2ll 531\nI0727 02:45:09.154244 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/n79 266\nI0727 02:45:09.354852 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/fkqb 298\nI0727 02:45:09.554122 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/2dl 208\nI0727 02:45:09.754453 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/x6rw 402\n" +STEP: limiting log lines 07/27/23 02:45:09.916 +Jul 27 02:45:09.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator --tail=1' +Jul 27 02:45:10.045: INFO: stderr: "" +Jul 27 02:45:10.045: INFO: stdout: "I0727 02:45:09.954876 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/8b8 543\n" +Jul 27 02:45:10.045: INFO: got output "I0727 02:45:09.954876 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/8b8 543\n" +STEP: limiting log bytes 07/27/23 02:45:10.045 +Jul 27 02:45:10.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator --limit-bytes=1' +Jul 27 02:45:10.213: INFO: stderr: "" +Jul 27 02:45:10.213: INFO: stdout: "I" +Jul 27 02:45:10.213: INFO: got output "I" +STEP: exposing timestamps 07/27/23 02:45:10.213 +Jul 27 02:45:10.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator --tail=1 --timestamps' +Jul 27 02:45:10.373: INFO: stderr: "" +Jul 27 02:45:10.373: INFO: stdout: "2023-07-26T21:45:10.154343476-05:00 I0727 02:45:10.154260 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/9dqw 552\n" +Jul 27 02:45:10.373: INFO: got output "2023-07-26T21:45:10.154343476-05:00 I0727 02:45:10.154260 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/9dqw 552\n" +STEP: restricting to a time range 07/27/23 02:45:10.373 +Jul 27 02:45:12.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator --since=1s' +Jul 27 02:45:12.990: INFO: stderr: "" +Jul 27 02:45:12.990: INFO: stdout: "I0727 02:45:12.154277 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/kdb4 432\nI0727 02:45:12.354598 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/4dm4 217\nI0727 02:45:12.555102 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/cqw 221\nI0727 02:45:12.754417 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/g87 420\nI0727 02:45:12.954730 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/c7cm 584\n" +Jul 27 02:45:12.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator --since=24h' +Jul 27 02:45:13.092: INFO: stderr: "" +Jul 27 02:45:13.092: INFO: stdout: "I0727 02:45:08.954201 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/2ll 531\nI0727 02:45:09.154244 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/n79 266\nI0727 02:45:09.354852 1 logs_generator.go:76] 2 POST 
/api/v1/namespaces/ns/pods/fkqb 298\nI0727 02:45:09.554122 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/2dl 208\nI0727 02:45:09.754453 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/x6rw 402\nI0727 02:45:09.954876 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/8b8 543\nI0727 02:45:10.154260 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/9dqw 552\nI0727 02:45:10.354574 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/dkx 354\nI0727 02:45:10.555160 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/ss9h 416\nI0727 02:45:10.754583 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/xkr7 546\nI0727 02:45:10.955030 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/f4z 270\nI0727 02:45:11.154324 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/ljv 420\nI0727 02:45:11.354729 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/wcp 244\nI0727 02:45:11.554142 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/l5j 542\nI0727 02:45:11.754601 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/7n5d 380\nI0727 02:45:11.954960 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/np68 590\nI0727 02:45:12.154277 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/kdb4 432\nI0727 02:45:12.354598 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/4dm4 217\nI0727 02:45:12.555102 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/cqw 221\nI0727 02:45:12.754417 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/g87 420\nI0727 02:45:12.954730 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/c7cm 584\n" +[AfterEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1577 +Jul 27 02:45:13.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 delete pod logs-generator' +Jul 27 02:45:14.417: INFO: stderr: "" +Jul 27 02:45:14.417: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 22:12:12.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] DNS +Jul 27 02:45:14.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "dns-4830" for this suite. 06/12/23 22:12:13 +STEP: Destroying namespace "kubectl-4927" for this suite. 
07/27/23 02:45:14.435 ------------------------------ -• [4.577 seconds] -[sig-network] DNS -test/e2e/network/common/framework.go:23 - should provide DNS for the cluster [Conformance] - test/e2e/network/dns.go:50 +• [SLOW TEST] [6.921 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl logs + test/e2e/kubectl/kubectl.go:1569 + should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] DNS + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:12:08.479 - Jun 12 22:12:08.479: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename dns 06/12/23 22:12:08.482 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:08.541 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:08.558 - [BeforeEach] [sig-network] DNS + STEP: Creating a kubernetes client 07/27/23 02:45:07.536 + Jul 27 02:45:07.536: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:45:07.537 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:07.591 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:07.603 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [It] should provide DNS for the cluster [Conformance] - test/e2e/network/dns.go:50 - STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done - 06/12/23 22:12:08.604 - STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done - 06/12/23 22:12:08.605 - STEP: creating a pod to probe DNS 06/12/23 22:12:08.605 - STEP: submitting the pod to kubernetes 06/12/23 22:12:08.605 - Jun 12 22:12:08.642: INFO: Waiting up to 15m0s for pod "dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9" in namespace "dns-4830" to be "running" - Jun 12 22:12:08.681: INFO: Pod "dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9": Phase="Pending", Reason="", readiness=false. Elapsed: 39.11015ms - Jun 12 22:12:10.695: INFO: Pod "dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053135723s - Jun 12 22:12:12.701: INFO: Pod "dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.058570668s - Jun 12 22:12:12.701: INFO: Pod "dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9" satisfied condition "running" - STEP: retrieving the pod 06/12/23 22:12:12.701 - STEP: looking for the results for each expected name from probers 06/12/23 22:12:12.716 - Jun 12 22:12:12.888: INFO: DNS probes using dns-4830/dns-test-0f49daf4-8d44-4b5e-8d52-6c56b96586a9 succeeded - - STEP: deleting the pod 06/12/23 22:12:12.888 - [AfterEach] [sig-network] DNS + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1572 + STEP: creating an pod 07/27/23 02:45:07.652 + Jul 27 02:45:07.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 run logs-generator --image=registry.k8s.io/e2e-test-images/agnhost:2.43 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' + Jul 27 02:45:07.739: INFO: stderr: "" + Jul 27 02:45:07.739: INFO: stdout: "pod/logs-generator created\n" + [It] should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 + STEP: Waiting for log generator to start. 07/27/23 02:45:07.739 + Jul 27 02:45:07.739: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] + Jul 27 02:45:07.739: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4927" to be "running and ready, or succeeded" + Jul 27 02:45:07.764: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 24.59398ms + Jul 27 02:45:07.764: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on '' to be 'Running' but was 'Pending' + Jul 27 02:45:09.777: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.037538365s + Jul 27 02:45:09.777: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" + Jul 27 02:45:09.777: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] + STEP: checking for a matching strings 07/27/23 02:45:09.777 + Jul 27 02:45:09.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator' + Jul 27 02:45:09.916: INFO: stderr: "" + Jul 27 02:45:09.916: INFO: stdout: "I0727 02:45:08.954201 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/2ll 531\nI0727 02:45:09.154244 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/n79 266\nI0727 02:45:09.354852 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/fkqb 298\nI0727 02:45:09.554122 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/2dl 208\nI0727 02:45:09.754453 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/x6rw 402\n" + STEP: limiting log lines 07/27/23 02:45:09.916 + Jul 27 02:45:09.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator --tail=1' + Jul 27 02:45:10.045: INFO: stderr: "" + Jul 27 02:45:10.045: INFO: stdout: "I0727 02:45:09.954876 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/8b8 543\n" + Jul 27 02:45:10.045: INFO: got output "I0727 02:45:09.954876 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/8b8 543\n" + STEP: limiting log bytes 07/27/23 02:45:10.045 + Jul 27 02:45:10.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator --limit-bytes=1' + Jul 27 02:45:10.213: INFO: stderr: "" + Jul 27 02:45:10.213: INFO: stdout: "I" + Jul 27 02:45:10.213: INFO: got output "I" + STEP: exposing timestamps 07/27/23 02:45:10.213 + Jul 27 02:45:10.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator --tail=1 --timestamps' + Jul 27 02:45:10.373: INFO: stderr: "" + Jul 27 02:45:10.373: INFO: stdout: "2023-07-26T21:45:10.154343476-05:00 I0727 02:45:10.154260 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/9dqw 552\n" + Jul 27 02:45:10.373: INFO: got output "2023-07-26T21:45:10.154343476-05:00 I0727 02:45:10.154260 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/9dqw 552\n" + STEP: restricting to a time range 07/27/23 02:45:10.373 + Jul 27 02:45:12.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator --since=1s' + Jul 27 02:45:12.990: INFO: stderr: "" + Jul 27 02:45:12.990: INFO: stdout: "I0727 02:45:12.154277 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/kdb4 432\nI0727 02:45:12.354598 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/4dm4 217\nI0727 02:45:12.555102 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/cqw 221\nI0727 02:45:12.754417 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/g87 420\nI0727 02:45:12.954730 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/c7cm 584\n" + Jul 27 02:45:12.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 logs logs-generator logs-generator --since=24h' + Jul 27 02:45:13.092: INFO: stderr: "" + Jul 27 02:45:13.092: INFO: stdout: "I0727 02:45:08.954201 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/2ll 531\nI0727 02:45:09.154244 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/n79 266\nI0727 02:45:09.354852 1 
logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/fkqb 298\nI0727 02:45:09.554122 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/2dl 208\nI0727 02:45:09.754453 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/x6rw 402\nI0727 02:45:09.954876 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/8b8 543\nI0727 02:45:10.154260 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/9dqw 552\nI0727 02:45:10.354574 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/dkx 354\nI0727 02:45:10.555160 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/ss9h 416\nI0727 02:45:10.754583 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/xkr7 546\nI0727 02:45:10.955030 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/f4z 270\nI0727 02:45:11.154324 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/ljv 420\nI0727 02:45:11.354729 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/wcp 244\nI0727 02:45:11.554142 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/l5j 542\nI0727 02:45:11.754601 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/7n5d 380\nI0727 02:45:11.954960 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/np68 590\nI0727 02:45:12.154277 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/kdb4 432\nI0727 02:45:12.354598 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/4dm4 217\nI0727 02:45:12.555102 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/cqw 221\nI0727 02:45:12.754417 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/g87 420\nI0727 02:45:12.954730 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/c7cm 584\n" + [AfterEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1577 + Jul 27 02:45:13.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-4927 delete pod logs-generator' + Jul 27 02:45:14.417: INFO: stderr: "" + Jul 27 02:45:14.417: INFO: stdout: "pod \"logs-generator\" deleted\n" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 22:12:12.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] DNS + Jul 27 02:45:14.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "dns-4830" for this suite. 06/12/23 22:12:13 + STEP: Destroying namespace "kubectl-4927" for this suite. 
07/27/23 02:45:14.435 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] ConfigMap - should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:109 -[BeforeEach] [sig-storage] ConfigMap +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:12:13.058 -Jun 12 22:12:13.059: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 22:12:13.063 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:13.133 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:13.174 -[BeforeEach] [sig-storage] ConfigMap +STEP: Creating a kubernetes client 07/27/23 02:45:14.458 +Jul 27 02:45:14.458: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:45:14.459 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:14.53 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:14.543 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:109 -STEP: Creating configMap with name configmap-test-volume-map-52ccaf4f-044d-4d8b-8bf8-99e10fac6820 06/12/23 22:12:13.195 -STEP: Creating a pod to test consume configMaps 06/12/23 22:12:13.213 -Jun 12 22:12:13.251: INFO: Waiting up to 5m0s for pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5" in namespace "configmap-4675" to be "Succeeded or Failed" -Jun 12 22:12:13.273: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.673415ms -Jun 12 22:12:15.287: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03572855s -Jun 12 22:12:17.307: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056084891s -Jun 12 22:12:19.348: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096547811s -Jun 12 22:12:21.288: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.036894354s -STEP: Saw pod success 06/12/23 22:12:21.288 -Jun 12 22:12:21.288: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5" satisfied condition "Succeeded or Failed" -Jun 12 22:12:21.303: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5 container agnhost-container: -STEP: delete the pod 06/12/23 22:12:21.329 -Jun 12 22:12:21.361: INFO: Waiting for pod pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5 to disappear -Jun 12 22:12:21.373: INFO: Pod pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5 no longer exists -[AfterEach] [sig-storage] ConfigMap +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 +STEP: create deployment with httpd image 07/27/23 02:45:14.557 +Jul 27 02:45:14.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9238 create -f -' +Jul 27 02:45:16.933: INFO: stderr: "" +Jul 27 02:45:16.933: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image 07/27/23 02:45:16.933 +Jul 27 02:45:16.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9238 diff -f -' +Jul 27 02:45:17.305: INFO: rc: 1 +Jul 27 02:45:17.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9238 delete -f -' +Jul 27 02:45:17.392: INFO: stderr: "" +Jul 27 02:45:17.392: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 22:12:21.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] ConfigMap +Jul 27 02:45:17.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-4675" for this suite. 06/12/23 22:12:21.387 +STEP: Destroying namespace "kubectl-9238" for this suite. 
07/27/23 02:45:17.41 ------------------------------ -• [SLOW TEST] [8.354 seconds] -[sig-storage] ConfigMap -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:109 +• [2.972 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl diff + test/e2e/kubectl/kubectl.go:925 + should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] ConfigMap + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:12:13.058 - Jun 12 22:12:13.059: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 22:12:13.063 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:13.133 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:13.174 - [BeforeEach] [sig-storage] ConfigMap + STEP: Creating a kubernetes client 07/27/23 02:45:14.458 + Jul 27 02:45:14.458: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:45:14.459 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:14.53 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:14.543 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:109 - STEP: Creating configMap with name configmap-test-volume-map-52ccaf4f-044d-4d8b-8bf8-99e10fac6820 06/12/23 22:12:13.195 - STEP: Creating a pod to test consume configMaps 06/12/23 22:12:13.213 - Jun 12 22:12:13.251: INFO: Waiting up to 5m0s for pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5" in namespace "configmap-4675" to be "Succeeded or Failed" - Jun 12 22:12:13.273: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.673415ms - Jun 12 22:12:15.287: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03572855s - Jun 12 22:12:17.307: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056084891s - Jun 12 22:12:19.348: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096547811s - Jun 12 22:12:21.288: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.036894354s - STEP: Saw pod success 06/12/23 22:12:21.288 - Jun 12 22:12:21.288: INFO: Pod "pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5" satisfied condition "Succeeded or Failed" - Jun 12 22:12:21.303: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5 container agnhost-container: - STEP: delete the pod 06/12/23 22:12:21.329 - Jun 12 22:12:21.361: INFO: Waiting for pod pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5 to disappear - Jun 12 22:12:21.373: INFO: Pod pod-configmaps-f14c48fc-5610-47b4-8160-095343023ff5 no longer exists - [AfterEach] [sig-storage] ConfigMap + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 + STEP: create deployment with httpd image 07/27/23 02:45:14.557 + Jul 27 02:45:14.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9238 create -f -' + Jul 27 02:45:16.933: INFO: stderr: "" + Jul 27 02:45:16.933: INFO: stdout: "deployment.apps/httpd-deployment created\n" + STEP: verify diff finds difference between live and declared image 07/27/23 02:45:16.933 + Jul 27 02:45:16.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9238 diff -f -' + Jul 27 02:45:17.305: INFO: rc: 1 + Jul 27 02:45:17.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9238 delete -f -' + Jul 27 02:45:17.392: INFO: stderr: "" + Jul 27 02:45:17.392: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 22:12:21.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] ConfigMap + Jul 27 02:45:17.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-4675" for this suite. 06/12/23 22:12:21.387 + STEP: Destroying namespace "kubectl-9238" for this suite. 
07/27/23 02:45:17.41 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSS ------------------------------ -[sig-network] DNS - should provide DNS for pods for Subdomain [Conformance] - test/e2e/network/dns.go:290 -[BeforeEach] [sig-network] DNS +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 +[BeforeEach] [sig-network] Networking set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:12:21.429 -Jun 12 22:12:21.429: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename dns 06/12/23 22:12:21.432 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:21.483 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:21.543 -[BeforeEach] [sig-network] DNS +STEP: Creating a kubernetes client 07/27/23 02:45:17.431 +Jul 27 02:45:17.431: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pod-network-test 07/27/23 02:45:17.432 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:17.497 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:17.51 +[BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 -[It] should provide DNS for pods for Subdomain [Conformance] - test/e2e/network/dns.go:290 -STEP: Creating a test headless service 06/12/23 22:12:21.573 -STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9255.svc.cluster.local;sleep 1; done - 06/12/23 22:12:21.592 -STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9255.svc.cluster.local;sleep 1; done - 06/12/23 22:12:21.592 -STEP: creating a pod to probe DNS 06/12/23 22:12:21.593 -STEP: 
submitting the pod to kubernetes 06/12/23 22:12:21.593 -Jun 12 22:12:21.630: INFO: Waiting up to 15m0s for pod "dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f" in namespace "dns-9255" to be "running" -Jun 12 22:12:21.644: INFO: Pod "dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.233725ms -Jun 12 22:12:23.657: INFO: Pod "dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026646265s -Jun 12 22:12:25.659: INFO: Pod "dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f": Phase="Running", Reason="", readiness=true. Elapsed: 4.028083116s -Jun 12 22:12:25.659: INFO: Pod "dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f" satisfied condition "running" -STEP: retrieving the pod 06/12/23 22:12:25.659 -STEP: looking for the results for each expected name from probers 06/12/23 22:12:25.671 -Jun 12 22:12:25.767: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) -Jun 12 22:12:25.797: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) -Jun 12 22:12:25.880: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) -Jun 12 22:12:25.923: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) -Jun 12 22:12:25.942: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) -Jun 12 22:12:25.961: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) -Jun 12 22:12:25.979: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) -Jun 12 22:12:26.000: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) -Jun 12 22:12:26.000: INFO: Lookups using dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9255.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9255.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local jessie_udp@dns-test-service-2.dns-9255.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9255.svc.cluster.local] - -Jun 12 22:12:31.270: INFO: DNS probes using dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f succeeded - -STEP: deleting the pod 06/12/23 22:12:31.271 -STEP: deleting the test headless service 06/12/23 22:12:31.335 -[AfterEach] [sig-network] DNS +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 +STEP: Performing setup for networking test in namespace pod-network-test-1986 07/27/23 02:45:17.52 +STEP: creating a selector 07/27/23 02:45:17.52 +STEP: Creating the service pods in kubernetes 07/27/23 02:45:17.52 +Jul 27 02:45:17.520: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Jul 27 02:45:17.631: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-1986" to be "running and ready" +Jul 27 02:45:17.643: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.603996ms +Jul 27 02:45:17.643: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:45:19.658: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.026756252s +Jul 27 02:45:19.658: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 02:45:21.655: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.023647326s +Jul 27 02:45:21.655: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 02:45:23.655: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.023411903s +Jul 27 02:45:23.655: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 02:45:25.686: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.054856134s +Jul 27 02:45:25.686: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 02:45:27.656: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.024078295s +Jul 27 02:45:27.656: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 02:45:29.656: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.024316949s +Jul 27 02:45:29.656: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 02:45:31.669: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.037197671s +Jul 27 02:45:31.669: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 02:45:33.655: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.023628203s +Jul 27 02:45:33.655: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 02:45:35.656: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.024652263s +Jul 27 02:45:35.656: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 02:45:37.661: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.029313286s +Jul 27 02:45:37.661: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jul 27 02:45:39.655: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.023770658s +Jul 27 02:45:39.655: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Jul 27 02:45:39.655: INFO: Pod "netserver-0" satisfied condition "running and ready" +Jul 27 02:45:39.674: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-1986" to be "running and ready" +Jul 27 02:45:39.685: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 11.708476ms +Jul 27 02:45:39.686: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Jul 27 02:45:39.686: INFO: Pod "netserver-1" satisfied condition "running and ready" +Jul 27 02:45:39.696: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-1986" to be "running and ready" +Jul 27 02:45:39.707: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 10.700223ms +Jul 27 02:45:39.707: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Jul 27 02:45:39.707: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 07/27/23 02:45:39.718 +Jul 27 02:45:39.742: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-1986" to be "running" +Jul 27 02:45:39.759: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 17.393967ms +Jul 27 02:45:41.773: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.031212625s +Jul 27 02:45:41.773: INFO: Pod "test-container-pod" satisfied condition "running" +Jul 27 02:45:41.785: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Jul 27 02:45:41.785: INFO: Breadth first check of 172.17.218.63 on host 10.245.128.17... +Jul 27 02:45:41.795: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.2:9080/dial?request=hostname&protocol=udp&host=172.17.218.63&port=8081&tries=1'] Namespace:pod-network-test-1986 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:45:41.795: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:45:41.795: INFO: ExecWithOptions: Clientset creation +Jul 27 02:45:41.795: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1986/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.2%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.17.218.63%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jul 27 02:45:41.942: INFO: Waiting for responses: map[] +Jul 27 02:45:41.942: INFO: reached 172.17.218.63 after 0/1 tries +Jul 27 02:45:41.942: INFO: Breadth first check of 172.17.230.181 on host 10.245.128.18... 
+Jul 27 02:45:41.959: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.2:9080/dial?request=hostname&protocol=udp&host=172.17.230.181&port=8081&tries=1'] Namespace:pod-network-test-1986 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:45:41.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:45:41.959: INFO: ExecWithOptions: Clientset creation +Jul 27 02:45:41.959: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1986/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.2%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.17.230.181%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jul 27 02:45:42.078: INFO: Waiting for responses: map[] +Jul 27 02:45:42.078: INFO: reached 172.17.230.181 after 0/1 tries +Jul 27 02:45:42.078: INFO: Breadth first check of 172.17.225.62 on host 10.245.128.19... +Jul 27 02:45:42.090: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.2:9080/dial?request=hostname&protocol=udp&host=172.17.225.62&port=8081&tries=1'] Namespace:pod-network-test-1986 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:45:42.090: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:45:42.090: INFO: ExecWithOptions: Clientset creation +Jul 27 02:45:42.090: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1986/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.2%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.17.225.62%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jul 27 02:45:42.238: INFO: Waiting for responses: map[] +Jul 27 02:45:42.238: INFO: reached 172.17.225.62 after 0/1 tries +Jul 27 02:45:42.238: INFO: Going to retry 0 out of 3 pods.... +[AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 -Jun 12 22:12:31.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] DNS +Jul 27 02:45:42.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 -STEP: Destroying namespace "dns-9255" for this suite. 06/12/23 22:12:31.422 +STEP: Destroying namespace "pod-network-test-1986" for this suite. 
07/27/23 02:45:42.256 ------------------------------ -• [SLOW TEST] [10.057 seconds] -[sig-network] DNS -test/e2e/network/common/framework.go:23 - should provide DNS for pods for Subdomain [Conformance] - test/e2e/network/dns.go:290 +• [SLOW TEST] [24.846 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] DNS + [BeforeEach] [sig-network] Networking set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:12:21.429 - Jun 12 22:12:21.429: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename dns 06/12/23 22:12:21.432 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:21.483 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:21.543 - [BeforeEach] [sig-network] DNS + STEP: Creating a kubernetes client 07/27/23 02:45:17.431 + Jul 27 02:45:17.431: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pod-network-test 07/27/23 02:45:17.432 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:17.497 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:17.51 + [BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 - [It] should provide DNS for pods for Subdomain [Conformance] - test/e2e/network/dns.go:290 - STEP: Creating a test headless service 06/12/23 22:12:21.573 - STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9255.svc.cluster.local;sleep 1; done - 06/12/23 22:12:21.592 - STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9255.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9255.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/jessie_tcp@dns-test-service-2.dns-9255.svc.cluster.local;sleep 1; done - 06/12/23 22:12:21.592 - STEP: creating a pod to probe DNS 06/12/23 22:12:21.593 - STEP: submitting the pod to kubernetes 06/12/23 22:12:21.593 - Jun 12 22:12:21.630: INFO: Waiting up to 15m0s for pod "dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f" in namespace "dns-9255" to be "running" - Jun 12 22:12:21.644: INFO: Pod "dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.233725ms - Jun 12 22:12:23.657: INFO: Pod "dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026646265s - Jun 12 22:12:25.659: INFO: Pod "dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f": Phase="Running", Reason="", readiness=true. Elapsed: 4.028083116s - Jun 12 22:12:25.659: INFO: Pod "dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f" satisfied condition "running" - STEP: retrieving the pod 06/12/23 22:12:25.659 - STEP: looking for the results for each expected name from probers 06/12/23 22:12:25.671 - Jun 12 22:12:25.767: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) - Jun 12 22:12:25.797: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) - Jun 12 22:12:25.880: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) - Jun 12 22:12:25.923: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) - Jun 12 22:12:25.942: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) - Jun 12 22:12:25.961: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) - Jun 12 22:12:25.979: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) - Jun 12 22:12:26.000: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9255.svc.cluster.local from pod dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f: the server could not find the requested resource (get pods dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f) - Jun 12 22:12:26.000: INFO: Lookups using dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-9255.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9255.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9255.svc.cluster.local jessie_udp@dns-test-service-2.dns-9255.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9255.svc.cluster.local] - - Jun 12 22:12:31.270: INFO: DNS probes using dns-9255/dns-test-d116f5e1-4168-4312-af49-5a9b2d747d8f succeeded - - STEP: deleting the pod 06/12/23 22:12:31.271 - STEP: deleting the test headless service 06/12/23 22:12:31.335 - [AfterEach] [sig-network] DNS + [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 + STEP: Performing setup for networking test in namespace pod-network-test-1986 07/27/23 02:45:17.52 + STEP: creating a selector 07/27/23 02:45:17.52 + STEP: Creating the service pods in kubernetes 07/27/23 02:45:17.52 + Jul 27 02:45:17.520: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Jul 27 02:45:17.631: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-1986" to be "running and ready" + Jul 27 02:45:17.643: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.603996ms + Jul 27 02:45:17.643: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:45:19.658: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.026756252s + Jul 27 02:45:19.658: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 02:45:21.655: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.023647326s + Jul 27 02:45:21.655: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 02:45:23.655: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.023411903s + Jul 27 02:45:23.655: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 02:45:25.686: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.054856134s + Jul 27 02:45:25.686: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 02:45:27.656: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.024078295s + Jul 27 02:45:27.656: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 02:45:29.656: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.024316949s + Jul 27 02:45:29.656: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 02:45:31.669: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.037197671s + Jul 27 02:45:31.669: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 02:45:33.655: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.023628203s + Jul 27 02:45:33.655: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 02:45:35.656: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.024652263s + Jul 27 02:45:35.656: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 02:45:37.661: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.029313286s + Jul 27 02:45:37.661: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jul 27 02:45:39.655: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.023770658s + Jul 27 02:45:39.655: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Jul 27 02:45:39.655: INFO: Pod "netserver-0" satisfied condition "running and ready" + Jul 27 02:45:39.674: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-1986" to be "running and ready" + Jul 27 02:45:39.685: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 11.708476ms + Jul 27 02:45:39.686: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Jul 27 02:45:39.686: INFO: Pod "netserver-1" satisfied condition "running and ready" + Jul 27 02:45:39.696: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-1986" to be "running and ready" + Jul 27 02:45:39.707: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 10.700223ms + Jul 27 02:45:39.707: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Jul 27 02:45:39.707: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 07/27/23 02:45:39.718 + Jul 27 02:45:39.742: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-1986" to be "running" + Jul 27 02:45:39.759: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 17.393967ms + Jul 27 02:45:41.773: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.031212625s + Jul 27 02:45:41.773: INFO: Pod "test-container-pod" satisfied condition "running" + Jul 27 02:45:41.785: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Jul 27 02:45:41.785: INFO: Breadth first check of 172.17.218.63 on host 10.245.128.17... + Jul 27 02:45:41.795: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.2:9080/dial?request=hostname&protocol=udp&host=172.17.218.63&port=8081&tries=1'] Namespace:pod-network-test-1986 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:45:41.795: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:45:41.795: INFO: ExecWithOptions: Clientset creation + Jul 27 02:45:41.795: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1986/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.2%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.17.218.63%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jul 27 02:45:41.942: INFO: Waiting for responses: map[] + Jul 27 02:45:41.942: INFO: reached 172.17.218.63 after 0/1 tries + Jul 27 02:45:41.942: INFO: Breadth first check of 172.17.230.181 on host 10.245.128.18... 
+ Jul 27 02:45:41.959: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.2:9080/dial?request=hostname&protocol=udp&host=172.17.230.181&port=8081&tries=1'] Namespace:pod-network-test-1986 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:45:41.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:45:41.959: INFO: ExecWithOptions: Clientset creation + Jul 27 02:45:41.959: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1986/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.2%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.17.230.181%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jul 27 02:45:42.078: INFO: Waiting for responses: map[] + Jul 27 02:45:42.078: INFO: reached 172.17.230.181 after 0/1 tries + Jul 27 02:45:42.078: INFO: Breadth first check of 172.17.225.62 on host 10.245.128.19... + Jul 27 02:45:42.090: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.17.225.2:9080/dial?request=hostname&protocol=udp&host=172.17.225.62&port=8081&tries=1'] Namespace:pod-network-test-1986 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:45:42.090: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:45:42.090: INFO: ExecWithOptions: Clientset creation + Jul 27 02:45:42.090: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/pod-network-test-1986/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.17.225.2%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.17.225.62%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jul 27 02:45:42.238: INFO: Waiting for responses: map[] + Jul 27 02:45:42.238: INFO: reached 172.17.225.62 after 0/1 tries + Jul 27 02:45:42.238: INFO: Going to retry 0 out of 3 pods.... + [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 - Jun 12 22:12:31.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] DNS + Jul 27 02:45:42.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 - STEP: Destroying namespace "dns-9255" for this suite. 06/12/23 22:12:31.422 + STEP: Destroying namespace "pod-network-test-1986" for this suite. 
07/27/23 02:45:42.256 << End Captured GinkgoWriter Output ------------------------------ -SS +SSS ------------------------------ -[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] - should validate Statefulset Status endpoints [Conformance] - test/e2e/apps/statefulset.go:977 -[BeforeEach] [sig-apps] StatefulSet +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 +[BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:12:31.515 -Jun 12 22:12:31.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename statefulset 06/12/23 22:12:31.52 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:31.588 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:31.598 -[BeforeEach] [sig-apps] StatefulSet +STEP: Creating a kubernetes client 07/27/23 02:45:42.278 +Jul 27 02:45:42.278: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:45:42.279 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:42.33 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:42.344 +[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 -[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 -STEP: Creating service test in namespace statefulset-5423 06/12/23 22:12:31.609 -[It] should validate Statefulset Status endpoints [Conformance] - test/e2e/apps/statefulset.go:977 -STEP: Creating statefulset ss in namespace statefulset-5423 06/12/23 22:12:31.65 -Jun 12 22:12:31.682: INFO: Found 0 stateful pods, waiting for 1 -Jun 12 22:12:41.696: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true -STEP: Patch Statefulset to include a label 06/12/23 22:12:41.717 -STEP: Getting /status 06/12/23 22:12:41.736 -Jun 12 22:12:41.749: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) -STEP: updating the StatefulSet Status 06/12/23 22:12:41.749 -Jun 12 22:12:41.779: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} -STEP: watching for the statefulset status to be updated 06/12/23 22:12:41.779 -Jun 12 22:12:41.785: INFO: Observed &StatefulSet event: ADDED -Jun 12 22:12:41.785: INFO: Found Statefulset ss in namespace statefulset-5423 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} -Jun 12 22:12:41.785: INFO: Statefulset ss has an updated status -STEP: patching the Statefulset Status 06/12/23 22:12:41.785 -Jun 12 22:12:41.786: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} -Jun 12 22:12:41.807: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} -STEP: watching for the Statefulset status to be patched 06/12/23 22:12:41.807 -Jun 
12 22:12:41.813: INFO: Observed &StatefulSet event: ADDED -[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 -Jun 12 22:12:41.813: INFO: Deleting all statefulset in ns statefulset-5423 -Jun 12 22:12:41.825: INFO: Scaling statefulset ss to 0 -Jun 12 22:12:51.890: INFO: Waiting for statefulset status.replicas updated to 0 -Jun 12 22:12:51.902: INFO: Deleting statefulset ss -[AfterEach] [sig-apps] StatefulSet +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 +STEP: Creating a pod to test downward API volume plugin 07/27/23 02:45:42.388 +Jul 27 02:45:42.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97" in namespace "projected-3756" to be "Succeeded or Failed" +Jul 27 02:45:42.440: INFO: Pod "downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97": Phase="Pending", Reason="", readiness=false. Elapsed: 16.334831ms +Jul 27 02:45:44.452: INFO: Pod "downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028880851s +Jul 27 02:45:46.456: INFO: Pod "downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032538404s +STEP: Saw pod success 07/27/23 02:45:46.456 +Jul 27 02:45:46.456: INFO: Pod "downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97" satisfied condition "Succeeded or Failed" +Jul 27 02:45:46.467: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97 container client-container: +STEP: delete the pod 07/27/23 02:45:46.591 +Jul 27 02:45:46.615: INFO: Waiting for pod downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97 to disappear +Jul 27 02:45:46.626: INFO: Pod downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 22:12:51.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] StatefulSet +Jul 27 02:45:46.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] StatefulSet +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "statefulset-5423" for this suite. 06/12/23 22:12:51.97 +STEP: Destroying namespace "projected-3756" for this suite. 
07/27/23 02:45:46.642 ------------------------------ -• [SLOW TEST] [20.496 seconds] -[sig-apps] StatefulSet -test/e2e/apps/framework.go:23 - Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:103 - should validate Statefulset Status endpoints [Conformance] - test/e2e/apps/statefulset.go:977 +• [4.385 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] StatefulSet + [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:12:31.515 - Jun 12 22:12:31.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename statefulset 06/12/23 22:12:31.52 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:31.588 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:31.598 - [BeforeEach] [sig-apps] StatefulSet + STEP: Creating a kubernetes client 07/27/23 02:45:42.278 + Jul 27 02:45:42.278: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:45:42.279 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:42.33 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:42.344 + [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] StatefulSet - test/e2e/apps/statefulset.go:98 - [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:113 - STEP: Creating service test in namespace statefulset-5423 06/12/23 22:12:31.609 - [It] should validate Statefulset Status endpoints [Conformance] - test/e2e/apps/statefulset.go:977 - STEP: Creating statefulset ss in namespace statefulset-5423 06/12/23 22:12:31.65 - Jun 12 22:12:31.682: INFO: Found 0 stateful pods, waiting for 1 - Jun 12 22:12:41.696: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true - STEP: Patch Statefulset to include a label 06/12/23 22:12:41.717 - STEP: Getting /status 06/12/23 22:12:41.736 - Jun 12 22:12:41.749: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) - STEP: updating the StatefulSet Status 06/12/23 22:12:41.749 - Jun 12 22:12:41.779: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} - STEP: watching for the statefulset status to be updated 06/12/23 22:12:41.779 - Jun 12 22:12:41.785: INFO: Observed &StatefulSet event: ADDED - Jun 12 22:12:41.785: INFO: Found Statefulset ss in namespace statefulset-5423 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} - Jun 12 22:12:41.785: INFO: Statefulset ss has an updated status - STEP: patching the Statefulset Status 06/12/23 22:12:41.785 - Jun 12 22:12:41.786: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} - Jun 12 22:12:41.807: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", 
LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} - STEP: watching for the Statefulset status to be patched 06/12/23 22:12:41.807 - Jun 12 22:12:41.813: INFO: Observed &StatefulSet event: ADDED - [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] - test/e2e/apps/statefulset.go:124 - Jun 12 22:12:41.813: INFO: Deleting all statefulset in ns statefulset-5423 - Jun 12 22:12:41.825: INFO: Scaling statefulset ss to 0 - Jun 12 22:12:51.890: INFO: Waiting for statefulset status.replicas updated to 0 - Jun 12 22:12:51.902: INFO: Deleting statefulset ss - [AfterEach] [sig-apps] StatefulSet + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 + STEP: Creating a pod to test downward API volume plugin 07/27/23 02:45:42.388 + Jul 27 02:45:42.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97" in namespace "projected-3756" to be "Succeeded or Failed" + Jul 27 02:45:42.440: INFO: Pod "downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97": Phase="Pending", Reason="", readiness=false. Elapsed: 16.334831ms + Jul 27 02:45:44.452: INFO: Pod "downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028880851s + Jul 27 02:45:46.456: INFO: Pod "downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032538404s + STEP: Saw pod success 07/27/23 02:45:46.456 + Jul 27 02:45:46.456: INFO: Pod "downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97" satisfied condition "Succeeded or Failed" + Jul 27 02:45:46.467: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97 container client-container: + STEP: delete the pod 07/27/23 02:45:46.591 + Jul 27 02:45:46.615: INFO: Waiting for pod downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97 to disappear + Jul 27 02:45:46.626: INFO: Pod downwardapi-volume-775774c3-c9ed-4b89-ba31-55a6877e5d97 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 22:12:51.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] StatefulSet + Jul 27 02:45:46.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] StatefulSet + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "statefulset-5423" for this suite. 06/12/23 22:12:51.97 + STEP: Destroying namespace "projected-3756" for this suite. 
07/27/23 02:45:46.642 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] Deployment - RollingUpdateDeployment should delete old pods and create new ones [Conformance] - test/e2e/apps/deployment.go:105 -[BeforeEach] [sig-apps] Deployment +[sig-apps] Job + should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 +[BeforeEach] [sig-apps] Job set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:12:52.01 -Jun 12 22:12:52.011: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename deployment 06/12/23 22:12:52.014 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:52.151 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:52.174 -[BeforeEach] [sig-apps] Deployment +STEP: Creating a kubernetes client 07/27/23 02:45:46.664 +Jul 27 02:45:46.664: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename job 07/27/23 02:45:46.665 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:46.705 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:46.717 +[BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 -[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] - test/e2e/apps/deployment.go:105 -Jun 12 22:12:52.216: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) -Jun 12 22:12:52.307: INFO: Pod name sample-pod: Found 1 pods out of 1 -STEP: ensuring each pod is running 06/12/23 22:12:52.308 -Jun 12 22:12:52.308: INFO: Waiting up to 5m0s for pod "test-rolling-update-controller-7fwwq" in namespace "deployment-1906" to be "running" -Jun 12 22:12:52.333: INFO: Pod "test-rolling-update-controller-7fwwq": Phase="Pending", Reason="", readiness=false. Elapsed: 25.055115ms -Jun 12 22:12:54.353: INFO: Pod "test-rolling-update-controller-7fwwq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045044758s -Jun 12 22:12:56.348: INFO: Pod "test-rolling-update-controller-7fwwq": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.040428625s -Jun 12 22:12:56.348: INFO: Pod "test-rolling-update-controller-7fwwq" satisfied condition "running" -Jun 12 22:12:56.348: INFO: Creating deployment "test-rolling-update-deployment" -Jun 12 22:12:56.367: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has -Jun 12 22:12:56.390: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created -Jun 12 22:12:58.447: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected -Jun 12 22:12:58.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 12, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 12, 56, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 12, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 12, 56, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-7549d9f46d\" is progressing."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:13:00.553: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) -[AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 -Jun 12 22:13:00.623: INFO: Deployment "test-rolling-update-deployment": -&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1906 61b2c99e-1999-4fa3-91f9-8726bb0666c9 136986 1 2023-06-12 22:12:56 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-06-12 22:12:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost 
registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003518dd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-06-12 22:12:56 +0000 UTC,LastTransitionTime:2023-06-12 22:12:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-7549d9f46d" has successfully progressed.,LastUpdateTime:2023-06-12 22:12:58 +0000 UTC,LastTransitionTime:2023-06-12 22:12:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} - -Jun 12 22:13:00.635: INFO: New ReplicaSet "test-rolling-update-deployment-7549d9f46d" of Deployment "test-rolling-update-deployment": -&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-7549d9f46d deployment-1906 207cfa30-9380-45da-a783-2439ab582a88 136975 1 2023-06-12 22:12:56 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 61b2c99e-1999-4fa3-91f9-8726bb0666c9 0xc003519a77 0xc003519a78}] [] [{kube-controller-manager Update apps/v1 2023-06-12 22:12:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61b2c99e-1999-4fa3-91f9-8726bb0666c9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:58 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 7549d9f46d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod 
pod-template-hash:7549d9f46d] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003519db8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} -Jun 12 22:13:00.635: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": -Jun 12 22:13:00.636: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1906 b2782278-0526-4d14-8a8f-2fe756d46caa 136984 2 2023-06-12 22:12:52 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 61b2c99e-1999-4fa3-91f9-8726bb0666c9 0xc0035194d7 0xc0035194d8}] [] [{e2e.test Update apps/v1 2023-06-12 22:12:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61b2c99e-1999-4fa3-91f9-8726bb0666c9\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:58 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0035198f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} -Jun 12 22:13:00.648: INFO: Pod 
"test-rolling-update-deployment-7549d9f46d-5kscg" is available: -&Pod{ObjectMeta:{test-rolling-update-deployment-7549d9f46d-5kscg test-rolling-update-deployment-7549d9f46d- deployment-1906 2280744e-2b14-4ac4-ad99-c47c4c60a199 136974 0 2023-06-12 22:12:56 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[cni.projectcalico.org/containerID:a2a53f99f53377cfb546241c31c52db0b8eb0f3838c2e7c36d7843d908b26a8d cni.projectcalico.org/podIP:172.30.224.13/32 cni.projectcalico.org/podIPs:172.30.224.13/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.13" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-rolling-update-deployment-7549d9f46d 207cfa30-9380-45da-a783-2439ab582a88 0xc004b808c7 0xc004b808c8}] [] [{kube-controller-manager Update v1 2023-06-12 22:12:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"207cfa30-9380-45da-a783-2439ab582a88\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 22:12:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 22:12:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 22:12:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.13\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qnvv6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnvv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c63,c47,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-mjgzp,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostA
lias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.13,StartTime:2023-06-12 22:12:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 22:12:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://596f445e688ed2efb0d461a99f4d5325bccf10404da693eaf127bd1b009b8068,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -[AfterEach] [sig-apps] Deployment +[It] should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 +STEP: Creating a job 07/27/23 02:45:46.73 +W0727 02:45:46.754892 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Ensure pods equal to parallelism count is attached to the job 07/27/23 02:45:46.754 +STEP: patching /status 07/27/23 02:45:48.79 +STEP: updating /status 07/27/23 02:45:48.832 +STEP: get /status 07/27/23 02:45:48.867 +[AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 -Jun 12 22:13:00.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Deployment +Jul 27 02:45:48.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 -STEP: Destroying namespace "deployment-1906" for this suite. 
06/12/23 22:13:00.672 +STEP: Destroying namespace "job-8282" for this suite. 07/27/23 02:45:48.944 ------------------------------ -• [SLOW TEST] [8.716 seconds] -[sig-apps] Deployment +• [2.303 seconds] +[sig-apps] Job test/e2e/apps/framework.go:23 - RollingUpdateDeployment should delete old pods and create new ones [Conformance] - test/e2e/apps/deployment.go:105 + should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Deployment + [BeforeEach] [sig-apps] Job set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:12:52.01 - Jun 12 22:12:52.011: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename deployment 06/12/23 22:12:52.014 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:12:52.151 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:12:52.174 - [BeforeEach] [sig-apps] Deployment + STEP: Creating a kubernetes client 07/27/23 02:45:46.664 + Jul 27 02:45:46.664: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename job 07/27/23 02:45:46.665 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:46.705 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:46.717 + [BeforeEach] [sig-apps] Job test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 - [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] - test/e2e/apps/deployment.go:105 - Jun 12 22:12:52.216: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) - Jun 12 22:12:52.307: INFO: Pod name sample-pod: Found 1 pods out of 1 - STEP: ensuring each pod is running 06/12/23 22:12:52.308 - Jun 12 22:12:52.308: INFO: Waiting up to 5m0s for pod "test-rolling-update-controller-7fwwq" in namespace "deployment-1906" to be "running" - Jun 12 22:12:52.333: INFO: Pod "test-rolling-update-controller-7fwwq": Phase="Pending", Reason="", readiness=false. Elapsed: 25.055115ms - Jun 12 22:12:54.353: INFO: Pod "test-rolling-update-controller-7fwwq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045044758s - Jun 12 22:12:56.348: INFO: Pod "test-rolling-update-controller-7fwwq": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.040428625s - Jun 12 22:12:56.348: INFO: Pod "test-rolling-update-controller-7fwwq" satisfied condition "running" - Jun 12 22:12:56.348: INFO: Creating deployment "test-rolling-update-deployment" - Jun 12 22:12:56.367: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has - Jun 12 22:12:56.390: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created - Jun 12 22:12:58.447: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected - Jun 12 22:12:58.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 12, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 12, 56, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 12, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 12, 56, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-7549d9f46d\" is progressing."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:13:00.553: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) - [AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 - Jun 12 22:13:00.623: INFO: Deployment "test-rolling-update-deployment": - &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1906 61b2c99e-1999-4fa3-91f9-8726bb0666c9 136986 1 2023-06-12 22:12:56 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-06-12 22:12:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost 
registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003518dd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-06-12 22:12:56 +0000 UTC,LastTransitionTime:2023-06-12 22:12:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-7549d9f46d" has successfully progressed.,LastUpdateTime:2023-06-12 22:12:58 +0000 UTC,LastTransitionTime:2023-06-12 22:12:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} - - Jun 12 22:13:00.635: INFO: New ReplicaSet "test-rolling-update-deployment-7549d9f46d" of Deployment "test-rolling-update-deployment": - &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-7549d9f46d deployment-1906 207cfa30-9380-45da-a783-2439ab582a88 136975 1 2023-06-12 22:12:56 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 61b2c99e-1999-4fa3-91f9-8726bb0666c9 0xc003519a77 0xc003519a78}] [] [{kube-controller-manager Update apps/v1 2023-06-12 22:12:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61b2c99e-1999-4fa3-91f9-8726bb0666c9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:58 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 7549d9f46d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod 
pod-template-hash:7549d9f46d] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003519db8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} - Jun 12 22:13:00.635: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": - Jun 12 22:13:00.636: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1906 b2782278-0526-4d14-8a8f-2fe756d46caa 136984 2 2023-06-12 22:12:52 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 61b2c99e-1999-4fa3-91f9-8726bb0666c9 0xc0035194d7 0xc0035194d8}] [] [{e2e.test Update apps/v1 2023-06-12 22:12:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61b2c99e-1999-4fa3-91f9-8726bb0666c9\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:12:58 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0035198f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} - Jun 12 22:13:00.648: INFO: Pod 
"test-rolling-update-deployment-7549d9f46d-5kscg" is available: - &Pod{ObjectMeta:{test-rolling-update-deployment-7549d9f46d-5kscg test-rolling-update-deployment-7549d9f46d- deployment-1906 2280744e-2b14-4ac4-ad99-c47c4c60a199 136974 0 2023-06-12 22:12:56 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[cni.projectcalico.org/containerID:a2a53f99f53377cfb546241c31c52db0b8eb0f3838c2e7c36d7843d908b26a8d cni.projectcalico.org/podIP:172.30.224.13/32 cni.projectcalico.org/podIPs:172.30.224.13/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.13" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-rolling-update-deployment-7549d9f46d 207cfa30-9380-45da-a783-2439ab582a88 0xc004b808c7 0xc004b808c8}] [] [{kube-controller-manager Update v1 2023-06-12 22:12:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"207cfa30-9380-45da-a783-2439ab582a88\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 22:12:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 22:12:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 22:12:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.13\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qnvv6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnvv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c63,c47,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-mjgzp,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostA
lias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:12:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.13,StartTime:2023-06-12 22:12:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 22:12:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://596f445e688ed2efb0d461a99f4d5325bccf10404da693eaf127bd1b009b8068,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - [AfterEach] [sig-apps] Deployment + [It] should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 + STEP: Creating a job 07/27/23 02:45:46.73 + W0727 02:45:46.754892 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "c" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "c" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "c" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "c" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Ensure pods equal to parallelism count is attached to the job 07/27/23 02:45:46.754 + STEP: patching /status 07/27/23 02:45:48.79 + STEP: updating /status 07/27/23 02:45:48.832 + STEP: get /status 07/27/23 02:45:48.867 + [AfterEach] [sig-apps] Job test/e2e/framework/node/init/init.go:32 - Jun 12 22:13:00.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Deployment + Jul 27 02:45:48.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-apps] Job dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-apps] Job tear down framework | framework.go:193 - STEP: Destroying namespace "deployment-1906" for this suite. 
06/12/23 22:13:00.672 + STEP: Destroying namespace "job-8282" for this suite. 07/27/23 02:45:48.944 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] CSIInlineVolumes - should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] - test/e2e/storage/csi_inline.go:46 -[BeforeEach] [sig-storage] CSIInlineVolumes +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 +[BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:13:00.742 -Jun 12 22:13:00.743: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename csiinlinevolumes 06/12/23 22:13:00.746 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:00.842 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:00.856 -[BeforeEach] [sig-storage] CSIInlineVolumes +STEP: Creating a kubernetes client 07/27/23 02:45:48.987 +Jul 27 02:45:48.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:45:48.988 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:49.04 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:49.055 +[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 -[It] should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] - test/e2e/storage/csi_inline.go:46 -STEP: creating 06/12/23 22:13:00.868 -STEP: getting 06/12/23 22:13:00.929 -STEP: listing 06/12/23 22:13:00.958 -STEP: deleting 06/12/23 22:13:00.97 -[AfterEach] [sig-storage] CSIInlineVolumes +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 +STEP: Creating a pod to test downward API volume plugin 07/27/23 02:45:49.075 +Jul 27 02:45:49.108: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc" in namespace "downward-api-7716" to be "Succeeded or Failed" +Jul 27 02:45:49.127: INFO: Pod "downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.867254ms +Jul 27 02:45:51.152: INFO: Pod "downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04370321s +Jul 27 02:45:53.139: INFO: Pod "downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03106571s +STEP: Saw pod success 07/27/23 02:45:53.139 +Jul 27 02:45:53.140: INFO: Pod "downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc" satisfied condition "Succeeded or Failed" +Jul 27 02:45:53.151: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc container client-container: +STEP: delete the pod 07/27/23 02:45:53.175 +Jul 27 02:45:53.207: INFO: Waiting for pod downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc to disappear +Jul 27 02:45:53.217: INFO: Pod downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc no longer exists +[AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 22:13:01.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes +Jul 27 02:45:53.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes +[DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes +[DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "csiinlinevolumes-4584" for this suite. 06/12/23 22:13:01.061 +STEP: Destroying namespace "downward-api-7716" for this suite. 07/27/23 02:45:53.237 ------------------------------ -• [0.351 seconds] -[sig-storage] CSIInlineVolumes -test/e2e/storage/utils/framework.go:23 - should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] - test/e2e/storage/csi_inline.go:46 +• [4.270 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] CSIInlineVolumes + [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:13:00.742 - Jun 12 22:13:00.743: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename csiinlinevolumes 06/12/23 22:13:00.746 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:00.842 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:00.856 - [BeforeEach] [sig-storage] CSIInlineVolumes + STEP: Creating a kubernetes client 07/27/23 02:45:48.987 + Jul 27 02:45:48.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:45:48.988 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:49.04 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:49.055 + [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 - [It] should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] - test/e2e/storage/csi_inline.go:46 - STEP: creating 06/12/23 22:13:00.868 - STEP: getting 06/12/23 22:13:00.929 - STEP: listing 06/12/23 22:13:00.958 - STEP: deleting 06/12/23 22:13:00.97 - [AfterEach] [sig-storage] CSIInlineVolumes + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should set DefaultMode on files [LinuxOnly] 
[NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 + STEP: Creating a pod to test downward API volume plugin 07/27/23 02:45:49.075 + Jul 27 02:45:49.108: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc" in namespace "downward-api-7716" to be "Succeeded or Failed" + Jul 27 02:45:49.127: INFO: Pod "downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.867254ms + Jul 27 02:45:51.152: INFO: Pod "downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04370321s + Jul 27 02:45:53.139: INFO: Pod "downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03106571s + STEP: Saw pod success 07/27/23 02:45:53.139 + Jul 27 02:45:53.140: INFO: Pod "downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc" satisfied condition "Succeeded or Failed" + Jul 27 02:45:53.151: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc container client-container: + STEP: delete the pod 07/27/23 02:45:53.175 + Jul 27 02:45:53.207: INFO: Waiting for pod downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc to disappear + Jul 27 02:45:53.217: INFO: Pod downwardapi-volume-ff15359c-a724-4f5d-bd05-6872d70dffcc no longer exists + [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 - Jun 12 22:13:01.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + Jul 27 02:45:53.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 - STEP: Destroying namespace "csiinlinevolumes-4584" for this suite. 06/12/23 22:13:01.061 + STEP: Destroying namespace "downward-api-7716" for this suite. 
07/27/23 02:45:53.237 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +SSS ------------------------------ -[sig-cli] Kubectl client Kubectl label - should update the label on a resource [Conformance] - test/e2e/kubectl/kubectl.go:1509 -[BeforeEach] [sig-cli] Kubectl client +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:13:01.099 -Jun 12 22:13:01.099: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 22:13:01.101 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:01.164 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:01.177 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:45:53.257 +Jul 27 02:45:53.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:45:53.257 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:53.298 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:53.313 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[BeforeEach] Kubectl label - test/e2e/kubectl/kubectl.go:1494 -STEP: creating the pod 06/12/23 22:13:01.189 -Jun 12 22:13:01.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 create -f -' -Jun 12 22:13:07.712: INFO: stderr: "" -Jun 12 22:13:07.712: INFO: stdout: "pod/pause created\n" -Jun 12 22:13:07.712: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] -Jun 12 22:13:07.712: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4921" to be "running and ready" -Jun 12 22:13:07.755: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 42.487824ms -Jun 12 22:13:07.755: INFO: Error evaluating pod condition running and ready: want pod 'pause' on '10.138.75.70' to be 'Running' but was 'Pending' -Jun 12 22:13:09.767: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054838382s -Jun 12 22:13:09.767: INFO: Error evaluating pod condition running and ready: want pod 'pause' on '10.138.75.70' to be 'Running' but was 'Pending' -Jun 12 22:13:11.768: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.056305373s -Jun 12 22:13:11.768: INFO: Pod "pause" satisfied condition "running and ready" -Jun 12 22:13:11.768: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] -[It] should update the label on a resource [Conformance] - test/e2e/kubectl/kubectl.go:1509 -STEP: adding the label testing-label with value testing-label-value to a pod 06/12/23 22:13:11.769 -Jun 12 22:13:11.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 label pods pause testing-label=testing-label-value' -Jun 12 22:13:11.971: INFO: stderr: "" -Jun 12 22:13:11.971: INFO: stdout: "pod/pause labeled\n" -STEP: verifying the pod has the label testing-label with the value testing-label-value 06/12/23 22:13:11.971 -Jun 12 22:13:11.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 get pod pause -L testing-label' -Jun 12 22:13:12.178: INFO: stderr: "" -Jun 12 22:13:12.178: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" -STEP: removing the label testing-label of a pod 06/12/23 22:13:12.178 -Jun 12 22:13:12.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 label pods pause testing-label-' -Jun 12 22:13:12.569: INFO: stderr: "" -Jun 12 22:13:12.569: INFO: stdout: "pod/pause unlabeled\n" -STEP: verifying the pod doesn't have the label testing-label 06/12/23 22:13:12.569 -Jun 12 22:13:12.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 get pod pause -L testing-label' -Jun 12 22:13:12.815: INFO: stderr: "" -Jun 12 22:13:12.815: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" -[AfterEach] Kubectl label - test/e2e/kubectl/kubectl.go:1500 -STEP: using delete to clean up resources 06/12/23 22:13:12.816 -Jun 12 22:13:12.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 delete --grace-period=0 --force -f -' -Jun 12 22:13:13.125: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" -Jun 12 22:13:13.125: INFO: stdout: "pod \"pause\" force deleted\n" -Jun 12 22:13:13.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 get rc,svc -l name=pause --no-headers' -Jun 12 22:13:13.776: INFO: stderr: "No resources found in kubectl-4921 namespace.\n" -Jun 12 22:13:13.776: INFO: stdout: "" -Jun 12 22:13:13.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' -Jun 12 22:13:14.108: INFO: stderr: "" -Jun 12 22:13:14.108: INFO: stdout: "" -[AfterEach] [sig-cli] Kubectl client +[It] pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 +STEP: Creating Pod 07/27/23 02:45:53.326 +Jul 27 02:45:54.359: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b" in namespace "emptydir-4572" to be "running" +Jul 27 02:45:54.373: INFO: Pod "pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.203365ms +Jul 27 02:45:56.394: INFO: Pod "pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.034184692s +Jul 27 02:45:56.394: INFO: Pod "pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b" satisfied condition "running" +STEP: Reading file content from the nginx-container 07/27/23 02:45:56.394 +Jul 27 02:45:56.394: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4572 PodName:pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 02:45:56.394: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 02:45:56.394: INFO: ExecWithOptions: Clientset creation +Jul 27 02:45:56.394: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/emptydir-4572/pods/pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) +Jul 27 02:45:56.508: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 22:13:14.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:45:56.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-4921" for this suite. 06/12/23 22:13:14.131 +STEP: Destroying namespace "emptydir-4572" for this suite. 
07/27/23 02:45:56.527 ------------------------------ -• [SLOW TEST] [13.069 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl label - test/e2e/kubectl/kubectl.go:1492 - should update the label on a resource [Conformance] - test/e2e/kubectl/kubectl.go:1509 +• [3.329 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:13:01.099 - Jun 12 22:13:01.099: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 22:13:01.101 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:01.164 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:01.177 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:45:53.257 + Jul 27 02:45:53.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:45:53.257 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:53.298 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:53.313 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [BeforeEach] Kubectl label - test/e2e/kubectl/kubectl.go:1494 - STEP: creating the pod 06/12/23 22:13:01.189 - Jun 12 22:13:01.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 create -f -' - Jun 12 22:13:07.712: INFO: stderr: "" - Jun 12 22:13:07.712: INFO: stdout: "pod/pause created\n" - Jun 12 22:13:07.712: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] - Jun 12 22:13:07.712: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4921" to be "running and ready" - Jun 12 22:13:07.755: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 42.487824ms - Jun 12 22:13:07.755: INFO: Error evaluating pod condition running and ready: want pod 'pause' on '10.138.75.70' to be 'Running' but was 'Pending' - Jun 12 22:13:09.767: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054838382s - Jun 12 22:13:09.767: INFO: Error evaluating pod condition running and ready: want pod 'pause' on '10.138.75.70' to be 'Running' but was 'Pending' - Jun 12 22:13:11.768: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.056305373s - Jun 12 22:13:11.768: INFO: Pod "pause" satisfied condition "running and ready" - Jun 12 22:13:11.768: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] - [It] should update the label on a resource [Conformance] - test/e2e/kubectl/kubectl.go:1509 - STEP: adding the label testing-label with value testing-label-value to a pod 06/12/23 22:13:11.769 - Jun 12 22:13:11.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 label pods pause testing-label=testing-label-value' - Jun 12 22:13:11.971: INFO: stderr: "" - Jun 12 22:13:11.971: INFO: stdout: "pod/pause labeled\n" - STEP: verifying the pod has the label testing-label with the value testing-label-value 06/12/23 22:13:11.971 - Jun 12 22:13:11.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 get pod pause -L testing-label' - Jun 12 22:13:12.178: INFO: stderr: "" - Jun 12 22:13:12.178: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" - STEP: removing the label testing-label of a pod 06/12/23 22:13:12.178 - Jun 12 22:13:12.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 label pods pause testing-label-' - Jun 12 22:13:12.569: INFO: stderr: "" - Jun 12 22:13:12.569: INFO: stdout: "pod/pause unlabeled\n" - STEP: verifying the pod doesn't have the label testing-label 06/12/23 22:13:12.569 - Jun 12 22:13:12.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 get pod pause -L testing-label' - Jun 12 22:13:12.815: INFO: stderr: "" - Jun 12 22:13:12.815: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" - [AfterEach] Kubectl label - test/e2e/kubectl/kubectl.go:1500 - STEP: using delete to clean up resources 06/12/23 22:13:12.816 - Jun 12 22:13:12.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 delete --grace-period=0 --force -f -' - Jun 12 22:13:13.125: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" - Jun 12 22:13:13.125: INFO: stdout: "pod \"pause\" force deleted\n" - Jun 12 22:13:13.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 get rc,svc -l name=pause --no-headers' - Jun 12 22:13:13.776: INFO: stderr: "No resources found in kubectl-4921 namespace.\n" - Jun 12 22:13:13.776: INFO: stdout: "" - Jun 12 22:13:13.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-4921 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' - Jun 12 22:13:14.108: INFO: stderr: "" - Jun 12 22:13:14.108: INFO: stdout: "" - [AfterEach] [sig-cli] Kubectl client + [It] pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 + STEP: Creating Pod 07/27/23 02:45:53.326 + Jul 27 02:45:54.359: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b" in namespace "emptydir-4572" to be "running" + Jul 27 02:45:54.373: INFO: Pod "pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.203365ms + Jul 27 02:45:56.394: INFO: Pod "pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.034184692s + Jul 27 02:45:56.394: INFO: Pod "pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b" satisfied condition "running" + STEP: Reading file content from the nginx-container 07/27/23 02:45:56.394 + Jul 27 02:45:56.394: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4572 PodName:pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 02:45:56.394: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 02:45:56.394: INFO: ExecWithOptions: Clientset creation + Jul 27 02:45:56.394: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/emptydir-4572/pods/pod-sharedvolume-0fbc05ac-3877-4594-a337-daab0007613b/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) + Jul 27 02:45:56.508: INFO: Exec stderr: "" + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 22:13:14.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:45:56.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-4921" for this suite. 06/12/23 22:13:14.131 + STEP: Destroying namespace "emptydir-4572" for this suite. 
07/27/23 02:45:56.527 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSS +SSSSSSSSSSSS ------------------------------ -[sig-apps] Daemon set [Serial] - should update pod when spec was updated and update strategy is RollingUpdate [Conformance] - test/e2e/apps/daemon_set.go:374 -[BeforeEach] [sig-apps] Daemon set [Serial] +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 +[BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:13:14.172 -Jun 12 22:13:14.172: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename daemonsets 06/12/23 22:13:14.177 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:14.252 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:14.276 -[BeforeEach] [sig-apps] Daemon set [Serial] +STEP: Creating a kubernetes client 07/27/23 02:45:56.586 +Jul 27 02:45:56.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 02:45:56.587 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:56.625 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:56.636 +[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 -[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] - test/e2e/apps/daemon_set.go:374 -Jun 12 22:13:14.412: INFO: Creating simple daemon set daemon-set -STEP: Check that daemon pods launch on every node of the cluster. 06/12/23 22:13:14.46 -Jun 12 22:13:14.492: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 22:13:14.492: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 22:13:15.531: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 22:13:15.531: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 22:13:16.525: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 22:13:16.525: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 -Jun 12 22:13:17.525: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 22:13:17.525: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 22:13:18.603: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 -Jun 12 22:13:18.604: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set -STEP: Update daemon pods image. 06/12/23 22:13:18.735 -STEP: Check that daemon pods images are updated. 06/12/23 22:13:18.837 -Jun 12 22:13:18.863: INFO: Wrong image for pod: daemon-set-8m68x. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:18.863: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:19.956: INFO: Wrong image for pod: daemon-set-8m68x. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. 
-Jun 12 22:13:19.956: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:20.908: INFO: Wrong image for pod: daemon-set-8m68x. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:20.908: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:21.944: INFO: Wrong image for pod: daemon-set-8m68x. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:21.944: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:21.944: INFO: Pod daemon-set-vtrgb is not available -Jun 12 22:13:22.913: INFO: Wrong image for pod: daemon-set-8m68x. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:22.913: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:22.913: INFO: Pod daemon-set-vtrgb is not available -Jun 12 22:13:23.913: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:24.910: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:25.917: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:25.917: INFO: Pod daemon-set-tphf6 is not available -Jun 12 22:13:26.928: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. -Jun 12 22:13:26.928: INFO: Pod daemon-set-tphf6 is not available -Jun 12 22:13:30.908: INFO: Pod daemon-set-rzmjd is not available -STEP: Check that daemon pods are still running on every node of the cluster. 
06/12/23 22:13:30.96 -Jun 12 22:13:31.027: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 22:13:31.027: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 22:13:32.069: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 -Jun 12 22:13:32.069: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 -Jun 12 22:13:33.063: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 -Jun 12 22:13:33.063: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set -[AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 -STEP: Deleting DaemonSet "daemon-set" 06/12/23 22:13:33.136 -STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5024, will wait for the garbage collector to delete the pods 06/12/23 22:13:33.137 -Jun 12 22:13:33.222: INFO: Deleting DaemonSet.extensions daemon-set took: 22.253639ms -Jun 12 22:13:33.323: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.07955ms -Jun 12 22:13:37.039: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 -Jun 12 22:13:37.039: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set -Jun 12 22:13:37.052: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"137549"},"items":null} - -Jun 12 22:13:37.064: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"137549"},"items":null} - -[AfterEach] [sig-apps] Daemon set [Serial] +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 +STEP: Creating a pod to test downward API volume plugin 07/27/23 02:45:56.648 +Jul 27 02:45:56.690: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75" in namespace "projected-5374" to be "Succeeded or Failed" +Jul 27 02:45:56.706: INFO: Pod "downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75": Phase="Pending", Reason="", readiness=false. Elapsed: 15.351833ms +Jul 27 02:45:58.717: INFO: Pod "downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026396074s +Jul 27 02:46:00.718: INFO: Pod "downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028237606s +STEP: Saw pod success 07/27/23 02:46:00.718 +Jul 27 02:46:00.719: INFO: Pod "downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75" satisfied condition "Succeeded or Failed" +Jul 27 02:46:00.730: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75 container client-container: +STEP: delete the pod 07/27/23 02:46:00.759 +Jul 27 02:46:00.794: INFO: Waiting for pod downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75 to disappear +Jul 27 02:46:00.807: INFO: Pod downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 22:13:37.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +Jul 27 02:46:00.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "daemonsets-5024" for this suite. 06/12/23 22:13:37.138 +STEP: Destroying namespace "projected-5374" for this suite. 07/27/23 02:46:00.826 ------------------------------ -• [SLOW TEST] [22.992 seconds] -[sig-apps] Daemon set [Serial] -test/e2e/apps/framework.go:23 - should update pod when spec was updated and update strategy is RollingUpdate [Conformance] - test/e2e/apps/daemon_set.go:374 +• [4.263 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Daemon set [Serial] + [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:13:14.172 - Jun 12 22:13:14.172: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename daemonsets 06/12/23 22:13:14.177 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:14.252 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:14.276 - [BeforeEach] [sig-apps] Daemon set [Serial] + STEP: Creating a kubernetes client 07/27/23 02:45:56.586 + Jul 27 02:45:56.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 02:45:56.587 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:45:56.625 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:45:56.636 + [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:146 - [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] - test/e2e/apps/daemon_set.go:374 - Jun 12 22:13:14.412: INFO: Creating simple daemon set daemon-set - STEP: Check that daemon pods launch on every node of the cluster. 
06/12/23 22:13:14.46 - Jun 12 22:13:14.492: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 22:13:14.492: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 22:13:15.531: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 22:13:15.531: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 22:13:16.525: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 22:13:16.525: INFO: Node 10.138.75.112 is running 0 daemon pod, expected 1 - Jun 12 22:13:17.525: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 22:13:17.525: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 22:13:18.603: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 - Jun 12 22:13:18.604: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set - STEP: Update daemon pods image. 06/12/23 22:13:18.735 - STEP: Check that daemon pods images are updated. 06/12/23 22:13:18.837 - Jun 12 22:13:18.863: INFO: Wrong image for pod: daemon-set-8m68x. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:18.863: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:19.956: INFO: Wrong image for pod: daemon-set-8m68x. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:19.956: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:20.908: INFO: Wrong image for pod: daemon-set-8m68x. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:20.908: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:21.944: INFO: Wrong image for pod: daemon-set-8m68x. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:21.944: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:21.944: INFO: Pod daemon-set-vtrgb is not available - Jun 12 22:13:22.913: INFO: Wrong image for pod: daemon-set-8m68x. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:22.913: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:22.913: INFO: Pod daemon-set-vtrgb is not available - Jun 12 22:13:23.913: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:24.910: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:25.917: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. 
- Jun 12 22:13:25.917: INFO: Pod daemon-set-tphf6 is not available - Jun 12 22:13:26.928: INFO: Wrong image for pod: daemon-set-qkzbl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. - Jun 12 22:13:26.928: INFO: Pod daemon-set-tphf6 is not available - Jun 12 22:13:30.908: INFO: Pod daemon-set-rzmjd is not available - STEP: Check that daemon pods are still running on every node of the cluster. 06/12/23 22:13:30.96 - Jun 12 22:13:31.027: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 22:13:31.027: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 22:13:32.069: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 - Jun 12 22:13:32.069: INFO: Node 10.138.75.116 is running 0 daemon pod, expected 1 - Jun 12 22:13:33.063: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 - Jun 12 22:13:33.063: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set - [AfterEach] [sig-apps] Daemon set [Serial] - test/e2e/apps/daemon_set.go:111 - STEP: Deleting DaemonSet "daemon-set" 06/12/23 22:13:33.136 - STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5024, will wait for the garbage collector to delete the pods 06/12/23 22:13:33.137 - Jun 12 22:13:33.222: INFO: Deleting DaemonSet.extensions daemon-set took: 22.253639ms - Jun 12 22:13:33.323: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.07955ms - Jun 12 22:13:37.039: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 - Jun 12 22:13:37.039: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set - Jun 12 22:13:37.052: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"137549"},"items":null} - - Jun 12 22:13:37.064: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"137549"},"items":null} - - [AfterEach] [sig-apps] Daemon set [Serial] + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 + STEP: Creating a pod to test downward API volume plugin 07/27/23 02:45:56.648 + Jul 27 02:45:56.690: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75" in namespace "projected-5374" to be "Succeeded or Failed" + Jul 27 02:45:56.706: INFO: Pod "downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75": Phase="Pending", Reason="", readiness=false. Elapsed: 15.351833ms + Jul 27 02:45:58.717: INFO: Pod "downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026396074s + Jul 27 02:46:00.718: INFO: Pod "downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028237606s + STEP: Saw pod success 07/27/23 02:46:00.718 + Jul 27 02:46:00.719: INFO: Pod "downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75" satisfied condition "Succeeded or Failed" + Jul 27 02:46:00.730: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75 container client-container: + STEP: delete the pod 07/27/23 02:46:00.759 + Jul 27 02:46:00.794: INFO: Waiting for pod downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75 to disappear + Jul 27 02:46:00.807: INFO: Pod downwardapi-volume-ca194682-fb18-4260-96fa-112cb94c3b75 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 22:13:37.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + Jul 27 02:46:00.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "daemonsets-5024" for this suite. 06/12/23 22:13:37.138 + STEP: Destroying namespace "projected-5374" for this suite. 07/27/23 02:46:00.826 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSS ------------------------------ -[sig-instrumentation] Events API - should ensure that an event can be fetched, patched, deleted, and listed [Conformance] - test/e2e/instrumentation/events.go:98 -[BeforeEach] [sig-instrumentation] Events API +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 +[BeforeEach] [sig-node] Downward API set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:13:37.171 -Jun 12 22:13:37.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename events 06/12/23 22:13:37.175 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:37.227 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:37.243 -[BeforeEach] [sig-instrumentation] Events API +STEP: Creating a kubernetes client 07/27/23 02:46:00.85 +Jul 27 02:46:00.850: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:46:00.851 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:00.896 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:00.908 +[BeforeEach] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-instrumentation] Events API - test/e2e/instrumentation/events.go:84 -[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] - test/e2e/instrumentation/events.go:98 -STEP: creating a test event 06/12/23 22:13:37.269 -STEP: listing events in all namespaces 06/12/23 22:13:37.293 -STEP: listing events in test namespace 06/12/23 22:13:37.371 -STEP: listing events with field selection filtering on source 06/12/23 22:13:37.382 -STEP: listing events with field selection 
filtering on reportingController 06/12/23 22:13:37.393 -STEP: getting the test event 06/12/23 22:13:37.407 -STEP: patching the test event 06/12/23 22:13:37.419 -STEP: getting the test event 06/12/23 22:13:37.449 -STEP: updating the test event 06/12/23 22:13:37.463 -STEP: getting the test event 06/12/23 22:13:37.485 -STEP: deleting the test event 06/12/23 22:13:37.502 -STEP: listing events in all namespaces 06/12/23 22:13:37.527 -STEP: listing events in test namespace 06/12/23 22:13:37.591 -[AfterEach] [sig-instrumentation] Events API +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 +STEP: Creating a pod to test downward api env vars 07/27/23 02:46:00.924 +Jul 27 02:46:00.988: INFO: Waiting up to 5m0s for pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92" in namespace "downward-api-6847" to be "Succeeded or Failed" +Jul 27 02:46:01.009: INFO: Pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92": Phase="Pending", Reason="", readiness=false. Elapsed: 20.40836ms +Jul 27 02:46:03.021: INFO: Pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032815413s +Jul 27 02:46:05.030: INFO: Pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041024266s +Jul 27 02:46:07.025: INFO: Pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036347952s +STEP: Saw pod success 07/27/23 02:46:07.025 +Jul 27 02:46:07.025: INFO: Pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92" satisfied condition "Succeeded or Failed" +Jul 27 02:46:07.065: INFO: Trying to get logs from node 10.245.128.19 pod downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92 container dapi-container: +STEP: delete the pod 07/27/23 02:46:07.114 +Jul 27 02:46:07.237: INFO: Waiting for pod downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92 to disappear +Jul 27 02:46:07.254: INFO: Pod downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92 no longer exists +[AfterEach] [sig-node] Downward API test/e2e/framework/node/init/init.go:32 -Jun 12 22:13:37.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-instrumentation] Events API +Jul 27 02:46:07.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-instrumentation] Events API +[DeferCleanup (Each)] [sig-node] Downward API dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-instrumentation] Events API +[DeferCleanup (Each)] [sig-node] Downward API tear down framework | framework.go:193 -STEP: Destroying namespace "events-9823" for this suite. 06/12/23 22:13:37.622 +STEP: Destroying namespace "downward-api-6847" for this suite. 
07/27/23 02:46:07.275 ------------------------------ -• [0.477 seconds] -[sig-instrumentation] Events API -test/e2e/instrumentation/common/framework.go:23 - should ensure that an event can be fetched, patched, deleted, and listed [Conformance] - test/e2e/instrumentation/events.go:98 +• [SLOW TEST] [6.447 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-instrumentation] Events API + [BeforeEach] [sig-node] Downward API set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:13:37.171 - Jun 12 22:13:37.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename events 06/12/23 22:13:37.175 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:37.227 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:37.243 - [BeforeEach] [sig-instrumentation] Events API + STEP: Creating a kubernetes client 07/27/23 02:46:00.85 + Jul 27 02:46:00.850: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:46:00.851 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:00.896 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:00.908 + [BeforeEach] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-instrumentation] Events API - test/e2e/instrumentation/events.go:84 - [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] - test/e2e/instrumentation/events.go:98 - STEP: creating a test event 06/12/23 22:13:37.269 - STEP: listing events in all namespaces 06/12/23 22:13:37.293 - STEP: listing events in test namespace 06/12/23 22:13:37.371 - STEP: listing events with field selection filtering on source 06/12/23 22:13:37.382 - STEP: listing events with field selection filtering on reportingController 06/12/23 22:13:37.393 - STEP: getting the test event 06/12/23 22:13:37.407 - STEP: patching the test event 06/12/23 22:13:37.419 - STEP: getting the test event 06/12/23 22:13:37.449 - STEP: updating the test event 06/12/23 22:13:37.463 - STEP: getting the test event 06/12/23 22:13:37.485 - STEP: deleting the test event 06/12/23 22:13:37.502 - STEP: listing events in all namespaces 06/12/23 22:13:37.527 - STEP: listing events in test namespace 06/12/23 22:13:37.591 - [AfterEach] [sig-instrumentation] Events API + [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 + STEP: Creating a pod to test downward api env vars 07/27/23 02:46:00.924 + Jul 27 02:46:00.988: INFO: Waiting up to 5m0s for pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92" in namespace "downward-api-6847" to be "Succeeded or Failed" + Jul 27 02:46:01.009: INFO: Pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92": Phase="Pending", Reason="", readiness=false. Elapsed: 20.40836ms + Jul 27 02:46:03.021: INFO: Pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032815413s + Jul 27 02:46:05.030: INFO: Pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.041024266s + Jul 27 02:46:07.025: INFO: Pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036347952s + STEP: Saw pod success 07/27/23 02:46:07.025 + Jul 27 02:46:07.025: INFO: Pod "downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92" satisfied condition "Succeeded or Failed" + Jul 27 02:46:07.065: INFO: Trying to get logs from node 10.245.128.19 pod downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92 container dapi-container: + STEP: delete the pod 07/27/23 02:46:07.114 + Jul 27 02:46:07.237: INFO: Waiting for pod downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92 to disappear + Jul 27 02:46:07.254: INFO: Pod downward-api-bb21f14c-5276-49e8-b2c0-f9041373ab92 no longer exists + [AfterEach] [sig-node] Downward API test/e2e/framework/node/init/init.go:32 - Jun 12 22:13:37.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-instrumentation] Events API + Jul 27 02:46:07.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-instrumentation] Events API + [DeferCleanup (Each)] [sig-node] Downward API dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-instrumentation] Events API + [DeferCleanup (Each)] [sig-node] Downward API tear down framework | framework.go:193 - STEP: Destroying namespace "events-9823" for this suite. 06/12/23 22:13:37.622 + STEP: Destroying namespace "downward-api-6847" for this suite. 07/27/23 02:46:07.275 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSS ------------------------------- -[sig-node] Variable Expansion - should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] - test/e2e/common/node/expansion.go:186 -[BeforeEach] [sig-node] Variable Expansion +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 +[BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:13:37.65 -Jun 12 22:13:37.650: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename var-expansion 06/12/23 22:13:37.654 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:37.738 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:37.753 -[BeforeEach] [sig-node] Variable Expansion +STEP: Creating a kubernetes client 07/27/23 02:46:07.298 +Jul 27 02:46:07.298: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename subpath 07/27/23 02:46:07.299 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:07.34 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:07.351 +[BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 -[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] - test/e2e/common/node/expansion.go:186 -Jun 12 22:13:37.809: INFO: Waiting up to 2m0s for pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a" in namespace "var-expansion-4327" to be "container 0 failed with reason CreateContainerConfigError" -Jun 12 22:13:37.827: INFO: Pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.789744ms -Jun 12 22:13:39.843: INFO: Pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033406623s -Jun 12 22:13:39.843: INFO: Pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a" satisfied condition "container 0 failed with reason CreateContainerConfigError" -Jun 12 22:13:39.843: INFO: Deleting pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a" in namespace "var-expansion-4327" -Jun 12 22:13:39.867: INFO: Wait up to 5m0s for pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a" to be fully deleted -[AfterEach] [sig-node] Variable Expansion +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 07/27/23 02:46:07.369 +[It] should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 +STEP: Creating pod pod-subpath-test-configmap-lk5b 07/27/23 02:46:07.503 +STEP: Creating a pod to test atomic-volume-subpath 07/27/23 02:46:07.503 +Jul 27 02:46:07.538: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lk5b" in namespace "subpath-8029" to be "Succeeded or Failed" +Jul 27 02:46:07.550: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.302842ms +Jul 27 02:46:09.562: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 2.023348986s +Jul 27 02:46:11.566: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 4.027013821s +Jul 27 02:46:13.563: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 6.024714906s +Jul 27 02:46:15.563: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 8.024173906s +Jul 27 02:46:17.570: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 10.031750659s +Jul 27 02:46:19.562: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 12.023772878s +Jul 27 02:46:21.562: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 14.0232975s +Jul 27 02:46:23.562: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 16.023108236s +Jul 27 02:46:25.585: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 18.046337939s +Jul 27 02:46:27.561: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 20.022202383s +Jul 27 02:46:29.560: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=false. Elapsed: 22.021380445s +Jul 27 02:46:31.573: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.034086127s +STEP: Saw pod success 07/27/23 02:46:31.573 +Jul 27 02:46:31.573: INFO: Pod "pod-subpath-test-configmap-lk5b" satisfied condition "Succeeded or Failed" +Jul 27 02:46:31.583: INFO: Trying to get logs from node 10.245.128.19 pod pod-subpath-test-configmap-lk5b container test-container-subpath-configmap-lk5b: +STEP: delete the pod 07/27/23 02:46:31.621 +Jul 27 02:46:31.725: INFO: Waiting for pod pod-subpath-test-configmap-lk5b to disappear +Jul 27 02:46:31.762: INFO: Pod pod-subpath-test-configmap-lk5b no longer exists +STEP: Deleting pod pod-subpath-test-configmap-lk5b 07/27/23 02:46:31.762 +Jul 27 02:46:31.762: INFO: Deleting pod "pod-subpath-test-configmap-lk5b" in namespace "subpath-8029" +[AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 -Jun 12 22:13:43.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Variable Expansion +Jul 27 02:46:31.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-storage] Subpath tear down framework | framework.go:193 -STEP: Destroying namespace "var-expansion-4327" for this suite. 06/12/23 22:13:43.913 +STEP: Destroying namespace "subpath-8029" for this suite. 07/27/23 02:46:31.816 ------------------------------ -• [SLOW TEST] [6.289 seconds] -[sig-node] Variable Expansion -test/e2e/common/node/framework.go:23 - should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] - test/e2e/common/node/expansion.go:186 +• [SLOW TEST] [24.546 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Variable Expansion + [BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:13:37.65 - Jun 12 22:13:37.650: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename var-expansion 06/12/23 22:13:37.654 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:37.738 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:37.753 - [BeforeEach] [sig-node] Variable Expansion + STEP: Creating a kubernetes client 07/27/23 02:46:07.298 + Jul 27 02:46:07.298: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename subpath 07/27/23 02:46:07.299 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:07.34 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:07.351 + [BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 - [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] - test/e2e/common/node/expansion.go:186 - Jun 12 22:13:37.809: INFO: Waiting up to 2m0s for pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a" in namespace "var-expansion-4327" to be "container 0 failed with reason CreateContainerConfigError" - Jun 12 22:13:37.827: INFO: Pod 
"var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.789744ms - Jun 12 22:13:39.843: INFO: Pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033406623s - Jun 12 22:13:39.843: INFO: Pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a" satisfied condition "container 0 failed with reason CreateContainerConfigError" - Jun 12 22:13:39.843: INFO: Deleting pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a" in namespace "var-expansion-4327" - Jun 12 22:13:39.867: INFO: Wait up to 5m0s for pod "var-expansion-138f5017-5453-4286-94bd-e761eeb2c84a" to be fully deleted - [AfterEach] [sig-node] Variable Expansion + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 07/27/23 02:46:07.369 + [It] should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 + STEP: Creating pod pod-subpath-test-configmap-lk5b 07/27/23 02:46:07.503 + STEP: Creating a pod to test atomic-volume-subpath 07/27/23 02:46:07.503 + Jul 27 02:46:07.538: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lk5b" in namespace "subpath-8029" to be "Succeeded or Failed" + Jul 27 02:46:07.550: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.302842ms + Jul 27 02:46:09.562: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 2.023348986s + Jul 27 02:46:11.566: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 4.027013821s + Jul 27 02:46:13.563: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 6.024714906s + Jul 27 02:46:15.563: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 8.024173906s + Jul 27 02:46:17.570: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 10.031750659s + Jul 27 02:46:19.562: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 12.023772878s + Jul 27 02:46:21.562: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 14.0232975s + Jul 27 02:46:23.562: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 16.023108236s + Jul 27 02:46:25.585: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 18.046337939s + Jul 27 02:46:27.561: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=true. Elapsed: 20.022202383s + Jul 27 02:46:29.560: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Running", Reason="", readiness=false. Elapsed: 22.021380445s + Jul 27 02:46:31.573: INFO: Pod "pod-subpath-test-configmap-lk5b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.034086127s + STEP: Saw pod success 07/27/23 02:46:31.573 + Jul 27 02:46:31.573: INFO: Pod "pod-subpath-test-configmap-lk5b" satisfied condition "Succeeded or Failed" + Jul 27 02:46:31.583: INFO: Trying to get logs from node 10.245.128.19 pod pod-subpath-test-configmap-lk5b container test-container-subpath-configmap-lk5b: + STEP: delete the pod 07/27/23 02:46:31.621 + Jul 27 02:46:31.725: INFO: Waiting for pod pod-subpath-test-configmap-lk5b to disappear + Jul 27 02:46:31.762: INFO: Pod pod-subpath-test-configmap-lk5b no longer exists + STEP: Deleting pod pod-subpath-test-configmap-lk5b 07/27/23 02:46:31.762 + Jul 27 02:46:31.762: INFO: Deleting pod "pod-subpath-test-configmap-lk5b" in namespace "subpath-8029" + [AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 - Jun 12 22:13:43.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Variable Expansion + Jul 27 02:46:31.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-storage] Subpath tear down framework | framework.go:193 - STEP: Destroying namespace "var-expansion-4327" for this suite. 06/12/23 22:13:43.913 + STEP: Destroying namespace "subpath-8029" for this suite. 07/27/23 02:46:31.816 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-auth] ServiceAccounts - ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] - test/e2e/auth/service_accounts.go:531 -[BeforeEach] [sig-auth] ServiceAccounts +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:107 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:13:43.943 -Jun 12 22:13:43.944: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename svcaccounts 06/12/23 22:13:43.947 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:44.002 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:44.023 -[BeforeEach] [sig-auth] ServiceAccounts +STEP: Creating a kubernetes client 07/27/23 02:46:31.846 +Jul 27 02:46:31.846: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:46:31.847 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:31.892 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:31.903 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] - test/e2e/auth/service_accounts.go:531 -Jun 12 22:13:44.143: INFO: created pod -Jun 12 22:13:44.143: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-8439" to be "Succeeded or Failed" -Jun 12 22:13:44.167: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.558352ms -Jun 12 22:13:46.183: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040162244s -Jun 12 22:13:48.181: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038258404s -Jun 12 22:13:50.182: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039737173s -STEP: Saw pod success 06/12/23 22:13:50.183 -Jun 12 22:13:50.183: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" -Jun 12 22:14:20.184: INFO: polling logs -Jun 12 22:14:20.236: INFO: Pod logs: -I0612 22:13:45.915865 1 log.go:198] OK: Got token -I0612 22:13:45.915961 1 log.go:198] validating with in-cluster discovery -I0612 22:13:45.917032 1 log.go:198] OK: got issuer https://kubernetes.default.svc -I0612 22:13:45.917100 1 log.go:198] Full, not-validated claims: -openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-8439:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1686608624, NotBefore:1686608024, IssuedAt:1686608024, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8439", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"56e59110-42f1-4336-b89a-9ccb25e1505d"}}} -I0612 22:13:45.956735 1 log.go:198] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc -I0612 22:13:45.977979 1 log.go:198] OK: Validated signature on JWT -I0612 22:13:45.978251 1 log.go:198] OK: Got valid claims from token! -I0612 22:13:45.978322 1 log.go:198] Full, validated claims: -&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-8439:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1686608624, NotBefore:1686608024, IssuedAt:1686608024, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8439", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"56e59110-42f1-4336-b89a-9ccb25e1505d"}}} - -Jun 12 22:14:20.236: INFO: completed pod -[AfterEach] [sig-auth] ServiceAccounts +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:107 +STEP: Creating a pod to test emptydir 0666 on tmpfs 07/27/23 02:46:31.914 +W0727 02:46:32.010920 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:46:32.011: INFO: Waiting up to 5m0s for pod "pod-7dc74664-6681-4f33-9724-97293ba38f94" in namespace "emptydir-5365" to be "Succeeded or Failed" +Jul 27 02:46:32.026: INFO: Pod "pod-7dc74664-6681-4f33-9724-97293ba38f94": Phase="Pending", Reason="", readiness=false. Elapsed: 15.169298ms +Jul 27 02:46:34.038: INFO: Pod "pod-7dc74664-6681-4f33-9724-97293ba38f94": Phase="Running", Reason="", readiness=true. Elapsed: 2.027627017s +Jul 27 02:46:36.039: INFO: Pod "pod-7dc74664-6681-4f33-9724-97293ba38f94": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.028601076s +Jul 27 02:46:38.038: INFO: Pod "pod-7dc74664-6681-4f33-9724-97293ba38f94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027509225s +STEP: Saw pod success 07/27/23 02:46:38.038 +Jul 27 02:46:38.038: INFO: Pod "pod-7dc74664-6681-4f33-9724-97293ba38f94" satisfied condition "Succeeded or Failed" +Jul 27 02:46:38.049: INFO: Trying to get logs from node 10.245.128.19 pod pod-7dc74664-6681-4f33-9724-97293ba38f94 container test-container: +STEP: delete the pod 07/27/23 02:46:38.088 +Jul 27 02:46:38.128: INFO: Waiting for pod pod-7dc74664-6681-4f33-9724-97293ba38f94 to disappear +Jul 27 02:46:38.140: INFO: Pod pod-7dc74664-6681-4f33-9724-97293ba38f94 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 22:14:20.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +Jul 27 02:46:38.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "svcaccounts-8439" for this suite. 06/12/23 22:14:20.27 +STEP: Destroying namespace "emptydir-5365" for this suite. 07/27/23 02:46:38.159 ------------------------------ -• [SLOW TEST] [36.382 seconds] -[sig-auth] ServiceAccounts -test/e2e/auth/framework.go:23 - ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] - test/e2e/auth/service_accounts.go:531 +• [SLOW TEST] [6.346 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:107 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-auth] ServiceAccounts + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:13:43.943 - Jun 12 22:13:43.944: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename svcaccounts 06/12/23 22:13:43.947 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:13:44.002 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:13:44.023 - [BeforeEach] [sig-auth] ServiceAccounts + STEP: Creating a kubernetes client 07/27/23 02:46:31.846 + Jul 27 02:46:31.846: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:46:31.847 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:31.892 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:31.903 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] - test/e2e/auth/service_accounts.go:531 - Jun 12 22:13:44.143: INFO: created pod - Jun 12 22:13:44.143: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-8439" to be "Succeeded or Failed" - Jun 12 22:13:44.167: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", 
readiness=false. Elapsed: 24.558352ms - Jun 12 22:13:46.183: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040162244s - Jun 12 22:13:48.181: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038258404s - Jun 12 22:13:50.182: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039737173s - STEP: Saw pod success 06/12/23 22:13:50.183 - Jun 12 22:13:50.183: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" - Jun 12 22:14:20.184: INFO: polling logs - Jun 12 22:14:20.236: INFO: Pod logs: - I0612 22:13:45.915865 1 log.go:198] OK: Got token - I0612 22:13:45.915961 1 log.go:198] validating with in-cluster discovery - I0612 22:13:45.917032 1 log.go:198] OK: got issuer https://kubernetes.default.svc - I0612 22:13:45.917100 1 log.go:198] Full, not-validated claims: - openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-8439:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1686608624, NotBefore:1686608024, IssuedAt:1686608024, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8439", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"56e59110-42f1-4336-b89a-9ccb25e1505d"}}} - I0612 22:13:45.956735 1 log.go:198] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc - I0612 22:13:45.977979 1 log.go:198] OK: Validated signature on JWT - I0612 22:13:45.978251 1 log.go:198] OK: Got valid claims from token! - I0612 22:13:45.978322 1 log.go:198] Full, validated claims: - &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-8439:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1686608624, NotBefore:1686608024, IssuedAt:1686608024, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8439", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"56e59110-42f1-4336-b89a-9ccb25e1505d"}}} - - Jun 12 22:14:20.236: INFO: completed pod - [AfterEach] [sig-auth] ServiceAccounts + [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:107 + STEP: Creating a pod to test emptydir 0666 on tmpfs 07/27/23 02:46:31.914 + W0727 02:46:32.010920 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:46:32.011: INFO: Waiting up to 5m0s for pod "pod-7dc74664-6681-4f33-9724-97293ba38f94" in namespace "emptydir-5365" to be "Succeeded or Failed" + Jul 27 02:46:32.026: INFO: Pod "pod-7dc74664-6681-4f33-9724-97293ba38f94": Phase="Pending", Reason="", readiness=false. Elapsed: 15.169298ms + Jul 27 02:46:34.038: INFO: Pod "pod-7dc74664-6681-4f33-9724-97293ba38f94": Phase="Running", Reason="", readiness=true. Elapsed: 2.027627017s + Jul 27 02:46:36.039: INFO: Pod "pod-7dc74664-6681-4f33-9724-97293ba38f94": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.028601076s + Jul 27 02:46:38.038: INFO: Pod "pod-7dc74664-6681-4f33-9724-97293ba38f94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027509225s + STEP: Saw pod success 07/27/23 02:46:38.038 + Jul 27 02:46:38.038: INFO: Pod "pod-7dc74664-6681-4f33-9724-97293ba38f94" satisfied condition "Succeeded or Failed" + Jul 27 02:46:38.049: INFO: Trying to get logs from node 10.245.128.19 pod pod-7dc74664-6681-4f33-9724-97293ba38f94 container test-container: + STEP: delete the pod 07/27/23 02:46:38.088 + Jul 27 02:46:38.128: INFO: Waiting for pod pod-7dc74664-6681-4f33-9724-97293ba38f94 to disappear + Jul 27 02:46:38.140: INFO: Pod pod-7dc74664-6681-4f33-9724-97293ba38f94 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 22:14:20.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + Jul 27 02:46:38.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "svcaccounts-8439" for this suite. 06/12/23 22:14:20.27 + STEP: Destroying namespace "emptydir-5365" for this suite. 07/27/23 02:46:38.159 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-cli] Kubectl client Kubectl server-side dry-run - should check if kubectl can dry-run update Pods [Conformance] - test/e2e/kubectl/kubectl.go:962 -[BeforeEach] [sig-cli] Kubectl client +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:14:20.346 -Jun 12 22:14:20.346: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 22:14:20.348 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:14:20.451 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:14:20.466 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:46:38.194 +Jul 27 02:46:38.194: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 02:46:38.195 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:38.237 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:38.248 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should check if kubectl can dry-run update Pods [Conformance] - test/e2e/kubectl/kubectl.go:962 -STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 06/12/23 22:14:20.482 -Jun 12 22:14:20.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3031 run 
e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' -Jun 12 22:14:20.734: INFO: stderr: "" -Jun 12 22:14:20.735: INFO: stdout: "pod/e2e-test-httpd-pod created\n" -STEP: replace the image in the pod with server-side dry-run 06/12/23 22:14:20.735 -Jun 12 22:14:20.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3031 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "registry.k8s.io/e2e-test-images/busybox:1.29-4"}]}} --dry-run=server' -Jun 12 22:14:22.201: INFO: stderr: "" -Jun 12 22:14:22.202: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" -STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 06/12/23 22:14:22.202 -Jun 12 22:14:22.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3031 delete pods e2e-test-httpd-pod' -Jun 12 22:14:26.677: INFO: stderr: "" -Jun 12 22:14:26.677: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" -[AfterEach] [sig-cli] Kubectl client +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 02:46:38.372 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:46:39.247 +STEP: Deploying the webhook pod 07/27/23 02:46:39.288 +STEP: Wait for the deployment to be ready 07/27/23 02:46:39.336 +Jul 27 02:46:39.378: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Jul 27 02:46:41.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 2, 46, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 46, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 46, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 46, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service 07/27/23 02:46:43.467 +STEP: Verifying the service has paired with the endpoint 07/27/23 02:46:43.497 +Jul 27 02:46:44.498: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API 07/27/23 02:46:44.519 +STEP: create a pod that should be updated by the webhook 07/27/23 02:46:44.63 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:14:26.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:46:44.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-3031" for this suite. 06/12/23 22:14:26.693 +STEP: Destroying namespace "webhook-4736" for this suite. 07/27/23 02:46:44.976 +STEP: Destroying namespace "webhook-4736-markers" for this suite. 07/27/23 02:46:45.032 ------------------------------ -• [SLOW TEST] [6.373 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl server-side dry-run - test/e2e/kubectl/kubectl.go:956 - should check if kubectl can dry-run update Pods [Conformance] - test/e2e/kubectl/kubectl.go:962 +• [SLOW TEST] [6.878 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:14:20.346 - Jun 12 22:14:20.346: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 22:14:20.348 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:14:20.451 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:14:20.466 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:46:38.194 + Jul 27 02:46:38.194: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 02:46:38.195 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:38.237 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:38.248 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should check if kubectl can dry-run update Pods [Conformance] - test/e2e/kubectl/kubectl.go:962 - STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 06/12/23 22:14:20.482 - Jun 12 22:14:20.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3031 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' - Jun 12 22:14:20.734: INFO: stderr: "" - Jun 12 22:14:20.735: INFO: stdout: "pod/e2e-test-httpd-pod created\n" - STEP: replace the image in the pod with server-side dry-run 06/12/23 22:14:20.735 - Jun 12 22:14:20.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3031 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "registry.k8s.io/e2e-test-images/busybox:1.29-4"}]}} --dry-run=server' - Jun 12 22:14:22.201: INFO: stderr: "" - Jun 12 22:14:22.202: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" - STEP: verifying the pod e2e-test-httpd-pod has the right 
image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 06/12/23 22:14:22.202 - Jun 12 22:14:22.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3031 delete pods e2e-test-httpd-pod' - Jun 12 22:14:26.677: INFO: stderr: "" - Jun 12 22:14:26.677: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" - [AfterEach] [sig-cli] Kubectl client + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 02:46:38.372 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:46:39.247 + STEP: Deploying the webhook pod 07/27/23 02:46:39.288 + STEP: Wait for the deployment to be ready 07/27/23 02:46:39.336 + Jul 27 02:46:39.378: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + Jul 27 02:46:41.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.July, 27, 2, 46, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 46, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.July, 27, 2, 46, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.July, 27, 2, 46, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} + STEP: Deploying the webhook service 07/27/23 02:46:43.467 + STEP: Verifying the service has paired with the endpoint 07/27/23 02:46:43.497 + Jul 27 02:46:44.498: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 + STEP: Registering the mutating pod webhook via the AdmissionRegistration API 07/27/23 02:46:44.519 + STEP: create a pod that should be updated by the webhook 07/27/23 02:46:44.63 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:14:26.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:46:44.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-3031" for this suite. 06/12/23 22:14:26.693 + STEP: Destroying namespace "webhook-4736" for this suite. 07/27/23 02:46:44.976 + STEP: Destroying namespace "webhook-4736-markers" for this suite. 
07/27/23 02:46:45.032 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSS ------------------------------ -[sig-storage] Downward API volume - should provide container's cpu request [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:221 -[BeforeEach] [sig-storage] Downward API volume +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 +[BeforeEach] [sig-node] RuntimeClass set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:14:26.722 -Jun 12 22:14:26.722: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 22:14:26.724 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:14:26.774 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:14:26.816 -[BeforeEach] [sig-storage] Downward API volume +STEP: Creating a kubernetes client 07/27/23 02:46:45.073 +Jul 27 02:46:45.073: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename runtimeclass 07/27/23 02:46:45.074 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:45.128 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:45.142 +[BeforeEach] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 -[It] should provide container's cpu request [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:221 -STEP: Creating a pod to test downward API volume plugin 06/12/23 22:14:26.842 -Jun 12 22:14:26.918: INFO: Waiting up to 5m0s for pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a" in namespace "downward-api-3651" to be "Succeeded or Failed" -Jun 12 22:14:26.934: INFO: Pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.485902ms -Jun 12 22:14:28.976: INFO: Pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a": Phase="Running", Reason="", readiness=true. Elapsed: 2.058008479s -Jun 12 22:14:30.949: INFO: Pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a": Phase="Running", Reason="", readiness=false. Elapsed: 4.031086096s -Jun 12 22:14:32.962: INFO: Pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.043538836s -STEP: Saw pod success 06/12/23 22:14:32.962 -Jun 12 22:14:32.962: INFO: Pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a" satisfied condition "Succeeded or Failed" -Jun 12 22:14:33.012: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a container client-container: -STEP: delete the pod 06/12/23 22:14:33.146 -Jun 12 22:14:33.190: INFO: Waiting for pod downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a to disappear -Jun 12 22:14:33.203: INFO: Pod downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a no longer exists -[AfterEach] [sig-storage] Downward API volume +[It] should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 +STEP: getting /apis 07/27/23 02:46:45.153 +STEP: getting /apis/node.k8s.io 07/27/23 02:46:45.196 +STEP: getting /apis/node.k8s.io/v1 07/27/23 02:46:45.202 +STEP: creating 07/27/23 02:46:45.207 +STEP: watching 07/27/23 02:46:45.324 +Jul 27 02:46:45.324: INFO: starting watch +STEP: getting 07/27/23 02:46:45.376 +STEP: listing 07/27/23 02:46:45.39 +STEP: patching 07/27/23 02:46:45.402 +STEP: updating 07/27/23 02:46:45.417 +Jul 27 02:46:45.434: INFO: waiting for watch events with expected annotations +STEP: deleting 07/27/23 02:46:45.434 +STEP: deleting a collection 07/27/23 02:46:45.51 +[AfterEach] [sig-node] RuntimeClass test/e2e/framework/node/init/init.go:32 -Jun 12 22:14:33.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Downward API volume +Jul 27 02:46:45.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-node] RuntimeClass dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-node] RuntimeClass tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-3651" for this suite. 06/12/23 22:14:33.22 +STEP: Destroying namespace "runtimeclass-7708" for this suite. 
07/27/23 02:46:45.636 ------------------------------ -• [SLOW TEST] [6.532 seconds] -[sig-storage] Downward API volume -test/e2e/common/storage/framework.go:23 - should provide container's cpu request [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:221 +• [0.592 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Downward API volume + [BeforeEach] [sig-node] RuntimeClass set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:14:26.722 - Jun 12 22:14:26.722: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 22:14:26.724 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:14:26.774 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:14:26.816 - [BeforeEach] [sig-storage] Downward API volume + STEP: Creating a kubernetes client 07/27/23 02:46:45.073 + Jul 27 02:46:45.073: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename runtimeclass 07/27/23 02:46:45.074 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:45.128 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:45.142 + [BeforeEach] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 - [It] should provide container's cpu request [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:221 - STEP: Creating a pod to test downward API volume plugin 06/12/23 22:14:26.842 - Jun 12 22:14:26.918: INFO: Waiting up to 5m0s for pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a" in namespace "downward-api-3651" to be "Succeeded or Failed" - Jun 12 22:14:26.934: INFO: Pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.485902ms - Jun 12 22:14:28.976: INFO: Pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a": Phase="Running", Reason="", readiness=true. Elapsed: 2.058008479s - Jun 12 22:14:30.949: INFO: Pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a": Phase="Running", Reason="", readiness=false. Elapsed: 4.031086096s - Jun 12 22:14:32.962: INFO: Pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.043538836s - STEP: Saw pod success 06/12/23 22:14:32.962 - Jun 12 22:14:32.962: INFO: Pod "downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a" satisfied condition "Succeeded or Failed" - Jun 12 22:14:33.012: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a container client-container: - STEP: delete the pod 06/12/23 22:14:33.146 - Jun 12 22:14:33.190: INFO: Waiting for pod downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a to disappear - Jun 12 22:14:33.203: INFO: Pod downwardapi-volume-320fbbe8-e05b-498e-a2b8-36a64f8f7d3a no longer exists - [AfterEach] [sig-storage] Downward API volume + [It] should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 + STEP: getting /apis 07/27/23 02:46:45.153 + STEP: getting /apis/node.k8s.io 07/27/23 02:46:45.196 + STEP: getting /apis/node.k8s.io/v1 07/27/23 02:46:45.202 + STEP: creating 07/27/23 02:46:45.207 + STEP: watching 07/27/23 02:46:45.324 + Jul 27 02:46:45.324: INFO: starting watch + STEP: getting 07/27/23 02:46:45.376 + STEP: listing 07/27/23 02:46:45.39 + STEP: patching 07/27/23 02:46:45.402 + STEP: updating 07/27/23 02:46:45.417 + Jul 27 02:46:45.434: INFO: waiting for watch events with expected annotations + STEP: deleting 07/27/23 02:46:45.434 + STEP: deleting a collection 07/27/23 02:46:45.51 + [AfterEach] [sig-node] RuntimeClass test/e2e/framework/node/init/init.go:32 - Jun 12 22:14:33.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Downward API volume + Jul 27 02:46:45.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-node] RuntimeClass dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-node] RuntimeClass tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-3651" for this suite. 06/12/23 22:14:33.22 + STEP: Destroying namespace "runtimeclass-7708" for this suite. 
07/27/23 02:46:45.636 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] - works for multiple CRDs of same group and version but different kinds [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:357 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 +[BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:14:33.269 -Jun 12 22:14:33.269: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 22:14:33.272 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:14:33.334 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:14:33.354 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:46:45.666 +Jul 27 02:46:45.666: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename gc 07/27/23 02:46:45.666 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:45.747 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:45.85 +[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 -[It] works for multiple CRDs of same group and version but different kinds [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:357 -STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation 06/12/23 22:14:33.369 -Jun 12 22:14:33.372: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 22:14:44.164: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[It] should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 +STEP: create the rc 07/27/23 02:46:45.991 +STEP: delete the rc 07/27/23 02:46:51.029 +STEP: wait for all pods to be garbage collected 07/27/23 02:46:51.049 +STEP: Gathering metrics 07/27/23 02:46:56.072 +W0727 02:46:56.108800 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+Jul 27 02:46:56.108: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 -Jun 12 22:15:15.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +Jul 27 02:46:56.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 -STEP: Destroying namespace "crd-publish-openapi-5094" for this suite. 06/12/23 22:15:15.808 +STEP: Destroying namespace "gc-9971" for this suite. 
07/27/23 02:46:56.122 ------------------------------ -• [SLOW TEST] [42.574 seconds] -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +• [SLOW TEST] [10.480 seconds] +[sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23 - works for multiple CRDs of same group and version but different kinds [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:357 + should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:14:33.269 - Jun 12 22:14:33.269: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 22:14:33.272 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:14:33.334 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:14:33.354 - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:46:45.666 + Jul 27 02:46:45.666: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename gc 07/27/23 02:46:45.666 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:45.747 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:45.85 + [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31 - [It] works for multiple CRDs of same group and version but different kinds [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:357 - STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation 06/12/23 22:14:33.369 - Jun 12 22:14:33.372: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 22:14:44.164: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [It] should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 + STEP: create the rc 07/27/23 02:46:45.991 + STEP: delete the rc 07/27/23 02:46:51.029 + STEP: wait for all pods to be garbage collected 07/27/23 02:46:51.049 + STEP: Gathering metrics 07/27/23 02:46:56.072 + W0727 02:46:56.108800 20 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+ Jul 27 02:46:56.108: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 - Jun 12 22:15:15.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + Jul 27 02:46:56.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193 - STEP: Destroying namespace "crd-publish-openapi-5094" for this suite. 06/12/23 22:15:15.808 + STEP: Destroying namespace "gc-9971" for this suite. 
07/27/23 02:46:56.122 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSS ------------------------------ -[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook - should execute poststart exec hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:134 -[BeforeEach] [sig-node] Container Lifecycle Hook +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 +[BeforeEach] [sig-node] Probing container set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:15:15.887 -Jun 12 22:15:15.888: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-lifecycle-hook 06/12/23 22:15:15.893 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:15.971 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:16.047 -[BeforeEach] [sig-node] Container Lifecycle Hook +STEP: Creating a kubernetes client 07/27/23 02:46:56.146 +Jul 27 02:46:56.146: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-probe 07/27/23 02:46:56.147 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:56.198 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:56.211 +[BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:77 -STEP: create the container to handle the HTTPGet hook request. 06/12/23 22:15:16.103 -Jun 12 22:15:16.143: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-2220" to be "running and ready" -Jun 12 22:15:16.185: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 41.501505ms -Jun 12 22:15:16.185: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:15:18.194: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050618111s -Jun 12 22:15:18.195: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:15:20.192: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 4.04889425s -Jun 12 22:15:20.193: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) -Jun 12 22:15:20.193: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" -[It] should execute poststart exec hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:134 -STEP: create the pod with lifecycle hook 06/12/23 22:15:20.202 -Jun 12 22:15:20.214: INFO: Waiting up to 5m0s for pod "pod-with-poststart-exec-hook" in namespace "container-lifecycle-hook-2220" to be "running and ready" -Jun 12 22:15:20.223: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743774ms -Jun 12 22:15:20.223: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:15:22.230: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016178559s -Jun 12 22:15:22.230: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:15:24.247: INFO: Pod "pod-with-poststart-exec-hook": Phase="Running", Reason="", readiness=true. Elapsed: 4.032660176s -Jun 12 22:15:24.247: INFO: The phase of Pod pod-with-poststart-exec-hook is Running (Ready = true) -Jun 12 22:15:24.247: INFO: Pod "pod-with-poststart-exec-hook" satisfied condition "running and ready" -STEP: check poststart hook 06/12/23 22:15:24.257 -STEP: delete the pod with lifecycle hook 06/12/23 22:15:24.322 -Jun 12 22:15:24.334: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear -Jun 12 22:15:24.342: INFO: Pod pod-with-poststart-exec-hook still exists -Jun 12 22:15:26.344: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear -Jun 12 22:15:26.352: INFO: Pod pod-with-poststart-exec-hook no longer exists -[AfterEach] [sig-node] Container Lifecycle Hook +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 +STEP: Creating pod test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6 in namespace container-probe-3972 07/27/23 02:46:56.221 +W0727 02:46:56.282780 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-webserver" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-webserver" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-webserver" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-webserver" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:46:56.282: INFO: Waiting up to 5m0s for pod "test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6" in namespace "container-probe-3972" to be "not pending" +Jul 27 02:46:56.298: INFO: Pod "test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.717045ms +Jul 27 02:46:58.310: INFO: Pod "test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.027769128s +Jul 27 02:46:58.310: INFO: Pod "test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6" satisfied condition "not pending" +Jul 27 02:46:58.310: INFO: Started pod test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6 in namespace container-probe-3972 +STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 02:46:58.31 +Jul 27 02:46:58.321: INFO: Initial restart count of pod test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6 is 0 +STEP: deleting the pod 07/27/23 02:50:58.38 +[AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 -Jun 12 22:15:26.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +Jul 27 02:50:58.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +[DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook +[DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 -STEP: Destroying namespace "container-lifecycle-hook-2220" for this suite. 06/12/23 22:15:26.365 +STEP: Destroying namespace "container-probe-3972" for this suite. 07/27/23 02:50:58.452 ------------------------------ -• [SLOW TEST] [10.498 seconds] -[sig-node] Container Lifecycle Hook +• [SLOW TEST] [242.326 seconds] +[sig-node] Probing container test/e2e/common/node/framework.go:23 - when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:46 - should execute poststart exec hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:134 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Container Lifecycle Hook + [BeforeEach] [sig-node] Probing container set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:15:15.887 - Jun 12 22:15:15.888: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-lifecycle-hook 06/12/23 22:15:15.893 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:15.971 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:16.047 - [BeforeEach] [sig-node] Container Lifecycle Hook + STEP: Creating a kubernetes client 07/27/23 02:46:56.146 + Jul 27 02:46:56.146: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-probe 07/27/23 02:46:56.147 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:46:56.198 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:46:56.211 + [BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] when create a pod with lifecycle hook - test/e2e/common/node/lifecycle_hook.go:77 - STEP: create the container to handle the HTTPGet hook request. 06/12/23 22:15:16.103 - Jun 12 22:15:16.143: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-2220" to be "running and ready" - Jun 12 22:15:16.185: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.501505ms - Jun 12 22:15:16.185: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:15:18.194: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050618111s - Jun 12 22:15:18.195: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:15:20.192: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 4.04889425s - Jun 12 22:15:20.193: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) - Jun 12 22:15:20.193: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" - [It] should execute poststart exec hook properly [NodeConformance] [Conformance] - test/e2e/common/node/lifecycle_hook.go:134 - STEP: create the pod with lifecycle hook 06/12/23 22:15:20.202 - Jun 12 22:15:20.214: INFO: Waiting up to 5m0s for pod "pod-with-poststart-exec-hook" in namespace "container-lifecycle-hook-2220" to be "running and ready" - Jun 12 22:15:20.223: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743774ms - Jun 12 22:15:20.223: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:15:22.230: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016178559s - Jun 12 22:15:22.230: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:15:24.247: INFO: Pod "pod-with-poststart-exec-hook": Phase="Running", Reason="", readiness=true. Elapsed: 4.032660176s - Jun 12 22:15:24.247: INFO: The phase of Pod pod-with-poststart-exec-hook is Running (Ready = true) - Jun 12 22:15:24.247: INFO: Pod "pod-with-poststart-exec-hook" satisfied condition "running and ready" - STEP: check poststart hook 06/12/23 22:15:24.257 - STEP: delete the pod with lifecycle hook 06/12/23 22:15:24.322 - Jun 12 22:15:24.334: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear - Jun 12 22:15:24.342: INFO: Pod pod-with-poststart-exec-hook still exists - Jun 12 22:15:26.344: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear - Jun 12 22:15:26.352: INFO: Pod pod-with-poststart-exec-hook no longer exists - [AfterEach] [sig-node] Container Lifecycle Hook + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 + STEP: Creating pod test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6 in namespace container-probe-3972 07/27/23 02:46:56.221 + W0727 02:46:56.282780 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-webserver" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-webserver" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-webserver" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-webserver" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:46:56.282: INFO: Waiting up to 5m0s for pod "test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6" in namespace "container-probe-3972" to be "not pending" + Jul 27 02:46:56.298: INFO: 
Pod "test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.717045ms + Jul 27 02:46:58.310: INFO: Pod "test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6": Phase="Running", Reason="", readiness=true. Elapsed: 2.027769128s + Jul 27 02:46:58.310: INFO: Pod "test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6" satisfied condition "not pending" + Jul 27 02:46:58.310: INFO: Started pod test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6 in namespace container-probe-3972 + STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 02:46:58.31 + Jul 27 02:46:58.321: INFO: Initial restart count of pod test-webserver-2203bf38-7dbf-4662-948e-1f51cad5c3a6 is 0 + STEP: deleting the pod 07/27/23 02:50:58.38 + [AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 - Jun 12 22:15:26.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + Jul 27 02:50:58.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + [DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + [DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 - STEP: Destroying namespace "container-lifecycle-hook-2220" for this suite. 06/12/23 22:15:26.365 + STEP: Destroying namespace "container-probe-3972" for this suite. 07/27/23 02:50:58.452 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-cli] Kubectl client Proxy server - should support proxy with --port 0 [Conformance] - test/e2e/kubectl/kubectl.go:1787 -[BeforeEach] [sig-cli] Kubectl client +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 +[BeforeEach] [sig-node] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:15:26.39 -Jun 12 22:15:26.390: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 22:15:26.392 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:26.45 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:26.459 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:50:58.473 +Jul 27 02:50:58.473: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 02:50:58.474 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:50:58.521 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:50:58.534 +[BeforeEach] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should support proxy with --port 0 [Conformance] - test/e2e/kubectl/kubectl.go:1787 -STEP: starting the proxy server 06/12/23 22:15:26.474 -Jun 12 22:15:26.475: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9079 proxy -p 0 --disable-filter' -STEP: curling proxy /api/ 
output 06/12/23 22:15:26.724 -[AfterEach] [sig-cli] Kubectl client +[It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 +STEP: Creating configMap configmap-3373/configmap-test-2b85ab77-0f7c-443d-9735-36d83172dc07 07/27/23 02:50:58.546 +STEP: Creating a pod to test consume configMaps 07/27/23 02:50:58.563 +Jul 27 02:50:59.595: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c" in namespace "configmap-3373" to be "Succeeded or Failed" +Jul 27 02:50:59.612: INFO: Pod "pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.111459ms +Jul 27 02:51:01.624: INFO: Pod "pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028510894s +Jul 27 02:51:03.624: INFO: Pod "pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028887004s +STEP: Saw pod success 07/27/23 02:51:03.624 +Jul 27 02:51:03.624: INFO: Pod "pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c" satisfied condition "Succeeded or Failed" +Jul 27 02:51:03.649: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c container env-test: +STEP: delete the pod 07/27/23 02:51:03.721 +Jul 27 02:51:03.756: INFO: Waiting for pod pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c to disappear +Jul 27 02:51:03.767: INFO: Pod pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c no longer exists +[AfterEach] [sig-node] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 22:15:26.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:51:03.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-node] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-node] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-9079" for this suite. 06/12/23 22:15:26.787 +STEP: Destroying namespace "configmap-3373" for this suite. 
07/27/23 02:51:03.786 ------------------------------ -• [0.418 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Proxy server - test/e2e/kubectl/kubectl.go:1780 - should support proxy with --port 0 [Conformance] - test/e2e/kubectl/kubectl.go:1787 +• [SLOW TEST] [5.335 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-node] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:15:26.39 - Jun 12 22:15:26.390: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 22:15:26.392 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:26.45 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:26.459 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:50:58.473 + Jul 27 02:50:58.473: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 02:50:58.474 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:50:58.521 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:50:58.534 + [BeforeEach] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should support proxy with --port 0 [Conformance] - test/e2e/kubectl/kubectl.go:1787 - STEP: starting the proxy server 06/12/23 22:15:26.474 - Jun 12 22:15:26.475: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-9079 proxy -p 0 --disable-filter' - STEP: curling proxy /api/ output 06/12/23 22:15:26.724 - [AfterEach] [sig-cli] Kubectl client + [It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 + STEP: Creating configMap configmap-3373/configmap-test-2b85ab77-0f7c-443d-9735-36d83172dc07 07/27/23 02:50:58.546 + STEP: Creating a pod to test consume configMaps 07/27/23 02:50:58.563 + Jul 27 02:50:59.595: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c" in namespace "configmap-3373" to be "Succeeded or Failed" + Jul 27 02:50:59.612: INFO: Pod "pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.111459ms + Jul 27 02:51:01.624: INFO: Pod "pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028510894s + Jul 27 02:51:03.624: INFO: Pod "pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028887004s + STEP: Saw pod success 07/27/23 02:51:03.624 + Jul 27 02:51:03.624: INFO: Pod "pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c" satisfied condition "Succeeded or Failed" + Jul 27 02:51:03.649: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c container env-test: + STEP: delete the pod 07/27/23 02:51:03.721 + Jul 27 02:51:03.756: INFO: Waiting for pod pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c to disappear + Jul 27 02:51:03.767: INFO: Pod pod-configmaps-d0aadbcd-d796-4407-9554-3c41af59f10c no longer exists + [AfterEach] [sig-node] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 22:15:26.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:51:03.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-node] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-node] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-9079" for this suite. 06/12/23 22:15:26.787 + STEP: Destroying namespace "configmap-3373" for this suite. 07/27/23 02:51:03.786 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] EndpointSlice - should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] - test/e2e/network/endpointslice.go:102 -[BeforeEach] [sig-network] EndpointSlice +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:15:26.812 -Jun 12 22:15:26.812: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename endpointslice 06/12/23 22:15:26.816 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:26.885 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:26.914 -[BeforeEach] [sig-network] EndpointSlice +STEP: Creating a kubernetes client 07/27/23 02:51:03.809 +Jul 27 02:51:03.809: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:51:03.81 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:51:03.858 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:51:03.873 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] EndpointSlice - test/e2e/network/endpointslice.go:52 -[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] - test/e2e/network/endpointslice.go:102 -[AfterEach] [sig-network] EndpointSlice +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 +STEP: Creating a pod to test emptydir 0666 on tmpfs 07/27/23 02:51:03.887 +W0727 02:51:03.932101 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" 
must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 02:51:03.932: INFO: Waiting up to 5m0s for pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1" in namespace "emptydir-2521" to be "Succeeded or Failed" +Jul 27 02:51:03.944: INFO: Pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.175858ms +Jul 27 02:51:05.956: INFO: Pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024721159s +Jul 27 02:51:07.956: INFO: Pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024014044s +Jul 27 02:51:09.957: INFO: Pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025598485s +STEP: Saw pod success 07/27/23 02:51:09.957 +Jul 27 02:51:09.958: INFO: Pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1" satisfied condition "Succeeded or Failed" +Jul 27 02:51:09.967: INFO: Trying to get logs from node 10.245.128.19 pod pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1 container test-container: +STEP: delete the pod 07/27/23 02:51:10 +Jul 27 02:51:10.028: INFO: Waiting for pod pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1 to disappear +Jul 27 02:51:10.042: INFO: Pod pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 22:15:29.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] EndpointSlice +Jul 27 02:51:10.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] EndpointSlice +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] EndpointSlice +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "endpointslice-8587" for this suite. 06/12/23 22:15:29.179 +STEP: Destroying namespace "emptydir-2521" for this suite. 
07/27/23 02:51:10.059 ------------------------------ -• [2.388 seconds] -[sig-network] EndpointSlice -test/e2e/network/common/framework.go:23 - should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] - test/e2e/network/endpointslice.go:102 +• [SLOW TEST] [6.290 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] EndpointSlice + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:15:26.812 - Jun 12 22:15:26.812: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename endpointslice 06/12/23 22:15:26.816 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:26.885 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:26.914 - [BeforeEach] [sig-network] EndpointSlice + STEP: Creating a kubernetes client 07/27/23 02:51:03.809 + Jul 27 02:51:03.809: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:51:03.81 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:51:03.858 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:51:03.873 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] EndpointSlice - test/e2e/network/endpointslice.go:52 - [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] - test/e2e/network/endpointslice.go:102 - [AfterEach] [sig-network] EndpointSlice + [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 + STEP: Creating a pod to test emptydir 0666 on tmpfs 07/27/23 02:51:03.887 + W0727 02:51:03.932101 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-container" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "test-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 02:51:03.932: INFO: Waiting up to 5m0s for pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1" in namespace "emptydir-2521" to be "Succeeded or Failed" + Jul 27 02:51:03.944: INFO: Pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.175858ms + Jul 27 02:51:05.956: INFO: Pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024721159s + Jul 27 02:51:07.956: INFO: Pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024014044s + Jul 27 02:51:09.957: INFO: Pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025598485s + STEP: Saw pod success 07/27/23 02:51:09.957 + Jul 27 02:51:09.958: INFO: Pod "pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1" satisfied condition "Succeeded or Failed" + Jul 27 02:51:09.967: INFO: Trying to get logs from node 10.245.128.19 pod pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1 container test-container: + STEP: delete the pod 07/27/23 02:51:10 + Jul 27 02:51:10.028: INFO: Waiting for pod pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1 to disappear + Jul 27 02:51:10.042: INFO: Pod pod-bea4ed0b-b46d-4dfb-89e3-fa9f726032b1 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 22:15:29.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] EndpointSlice + Jul 27 02:51:10.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] EndpointSlice + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] EndpointSlice + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "endpointslice-8587" for this suite. 06/12/23 22:15:29.179 + STEP: Destroying namespace "emptydir-2521" for this suite. 07/27/23 02:51:10.059 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSS ------------------------------ [sig-auth] ServiceAccounts - should run through the lifecycle of a ServiceAccount [Conformance] - test/e2e/auth/service_accounts.go:649 + should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 [BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:15:29.203 -Jun 12 22:15:29.203: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename svcaccounts 06/12/23 22:15:29.207 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:29.256 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:29.276 +STEP: Creating a kubernetes client 07/27/23 02:51:10.1 +Jul 27 02:51:10.100: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename svcaccounts 07/27/23 02:51:10.101 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:51:10.164 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:51:10.176 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 -[It] should run through the lifecycle of a ServiceAccount [Conformance] - test/e2e/auth/service_accounts.go:649 -STEP: creating a ServiceAccount 06/12/23 22:15:29.303 -STEP: watching for the ServiceAccount to be added 06/12/23 22:15:29.334 -STEP: patching the ServiceAccount 06/12/23 22:15:29.356 -STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) 06/12/23 22:15:29.391 -STEP: deleting the ServiceAccount 06/12/23 22:15:29.413 +[It] should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 +STEP: Creating a pod to test service account token: 07/27/23 02:51:10.188 +Jul 27 02:51:10.217: INFO: Waiting up to 5m0s for pod "test-pod-336868e6-1052-4e6e-9b05-cd2026328023" in namespace "svcaccounts-3424" to be "Succeeded or Failed" +Jul 27 02:51:10.233: INFO: Pod 
"test-pod-336868e6-1052-4e6e-9b05-cd2026328023": Phase="Pending", Reason="", readiness=false. Elapsed: 16.587788ms +Jul 27 02:51:12.246: INFO: Pod "test-pod-336868e6-1052-4e6e-9b05-cd2026328023": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028727374s +Jul 27 02:51:14.252: INFO: Pod "test-pod-336868e6-1052-4e6e-9b05-cd2026328023": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035257427s +STEP: Saw pod success 07/27/23 02:51:14.252 +Jul 27 02:51:14.252: INFO: Pod "test-pod-336868e6-1052-4e6e-9b05-cd2026328023" satisfied condition "Succeeded or Failed" +Jul 27 02:51:14.265: INFO: Trying to get logs from node 10.245.128.19 pod test-pod-336868e6-1052-4e6e-9b05-cd2026328023 container agnhost-container: +STEP: delete the pod 07/27/23 02:51:14.292 +Jul 27 02:51:14.331: INFO: Waiting for pod test-pod-336868e6-1052-4e6e-9b05-cd2026328023 to disappear +Jul 27 02:51:14.343: INFO: Pod test-pod-336868e6-1052-4e6e-9b05-cd2026328023 no longer exists [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 -Jun 12 22:15:29.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:51:14.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 -STEP: Destroying namespace "svcaccounts-2080" for this suite. 06/12/23 22:15:29.498 +STEP: Destroying namespace "svcaccounts-3424" for this suite. 07/27/23 02:51:14.361 ------------------------------ -• [0.319 seconds] +• [4.282 seconds] [sig-auth] ServiceAccounts test/e2e/auth/framework.go:23 - should run through the lifecycle of a ServiceAccount [Conformance] - test/e2e/auth/service_accounts.go:649 + should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:15:29.203 - Jun 12 22:15:29.203: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename svcaccounts 06/12/23 22:15:29.207 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:29.256 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:29.276 + STEP: Creating a kubernetes client 07/27/23 02:51:10.1 + Jul 27 02:51:10.100: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename svcaccounts 07/27/23 02:51:10.101 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:51:10.164 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:51:10.176 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 - [It] should run through the lifecycle of a ServiceAccount [Conformance] - test/e2e/auth/service_accounts.go:649 - STEP: creating a ServiceAccount 06/12/23 22:15:29.303 - STEP: watching for the ServiceAccount to be added 06/12/23 22:15:29.334 - STEP: patching the ServiceAccount 06/12/23 22:15:29.356 - STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) 06/12/23 22:15:29.391 - STEP: deleting the ServiceAccount 06/12/23 22:15:29.413 + [It] should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 
+ STEP: Creating a pod to test service account token: 07/27/23 02:51:10.188 + Jul 27 02:51:10.217: INFO: Waiting up to 5m0s for pod "test-pod-336868e6-1052-4e6e-9b05-cd2026328023" in namespace "svcaccounts-3424" to be "Succeeded or Failed" + Jul 27 02:51:10.233: INFO: Pod "test-pod-336868e6-1052-4e6e-9b05-cd2026328023": Phase="Pending", Reason="", readiness=false. Elapsed: 16.587788ms + Jul 27 02:51:12.246: INFO: Pod "test-pod-336868e6-1052-4e6e-9b05-cd2026328023": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028727374s + Jul 27 02:51:14.252: INFO: Pod "test-pod-336868e6-1052-4e6e-9b05-cd2026328023": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035257427s + STEP: Saw pod success 07/27/23 02:51:14.252 + Jul 27 02:51:14.252: INFO: Pod "test-pod-336868e6-1052-4e6e-9b05-cd2026328023" satisfied condition "Succeeded or Failed" + Jul 27 02:51:14.265: INFO: Trying to get logs from node 10.245.128.19 pod test-pod-336868e6-1052-4e6e-9b05-cd2026328023 container agnhost-container: + STEP: delete the pod 07/27/23 02:51:14.292 + Jul 27 02:51:14.331: INFO: Waiting for pod test-pod-336868e6-1052-4e6e-9b05-cd2026328023 to disappear + Jul 27 02:51:14.343: INFO: Pod test-pod-336868e6-1052-4e6e-9b05-cd2026328023 no longer exists [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 - Jun 12 22:15:29.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:51:14.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 - STEP: Destroying namespace "svcaccounts-2080" for this suite. 06/12/23 22:15:29.498 + STEP: Destroying namespace "svcaccounts-3424" for this suite. 
07/27/23 02:51:14.361 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-cli] Kubectl client Kubectl api-versions - should check if v1 is in available api versions [Conformance] - test/e2e/kubectl/kubectl.go:824 -[BeforeEach] [sig-cli] Kubectl client +[sig-node] Pods + should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 +[BeforeEach] [sig-node] Pods set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:15:29.529 -Jun 12 22:15:29.530: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 22:15:29.534 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:29.622 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:29.647 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:51:14.383 +Jul 27 02:51:14.383: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 02:51:14.383 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:51:14.509 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:51:14.521 +[BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should check if v1 is in available api versions [Conformance] - test/e2e/kubectl/kubectl.go:824 -STEP: validating api versions 06/12/23 22:15:29.679 -Jun 12 22:15:29.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3912 api-versions' -Jun 12 22:15:30.553: INFO: stderr: "" -Jun 12 22:15:30.553: INFO: stdout: 
"admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napiserver.openshift.io/v1\napps.openshift.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nauthorization.openshift.io/v1\nautoscaling/v1\nautoscaling/v2\nbatch/v1\nbuild.openshift.io/v1\ncertificates.k8s.io/v1\ncloudcredential.openshift.io/v1\nconfig.openshift.io/v1\nconsole.openshift.io/v1\nconsole.openshift.io/v1alpha1\ncontrolplane.operator.openshift.io/v1alpha1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta2\nflowcontrol.apiserver.k8s.io/v1beta3\nhelm.openshift.io/v1beta1\nibm.com/v1alpha1\nimage.openshift.io/v1\nimageregistry.operator.openshift.io/v1\ningress.operator.openshift.io/v1\nk8s.cni.cncf.io/v1\nmachineconfiguration.openshift.io/v1\nmetrics.k8s.io/v1beta1\nmigration.k8s.io/v1alpha1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nmonitoring.coreos.com/v1beta1\nnetwork.operator.openshift.io/v1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\noauth.openshift.io/v1\noperator.openshift.io/v1\noperator.openshift.io/v1alpha1\noperator.tigera.io/v1\noperators.coreos.com/v1\noperators.coreos.com/v1alpha1\noperators.coreos.com/v1alpha2\noperators.coreos.com/v2\npackages.operators.coreos.com/v1\nperformance.openshift.io/v1\nperformance.openshift.io/v1alpha1\nperformance.openshift.io/v2\npolicy/v1\nproject.openshift.io/v1\nquota.openshift.io/v1\nrbac.authorization.k8s.io/v1\nroute.openshift.io/v1\nsamples.operator.openshift.io/v1\nscheduling.k8s.io/v1\nsecurity.internal.openshift.io/v1\nsecurity.openshift.io/v1\nsnapshot.storage.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntemplate.openshift.io/v1\ntuned.openshift.io/v1\nuser.openshift.io/v1\nv1\nwhereabouts.cni.cncf.io/v1alpha1\n" -[AfterEach] [sig-cli] Kubectl client +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 +STEP: Create set of pods 07/27/23 02:51:14.531 +Jul 27 02:51:14.575: INFO: created test-pod-1 +Jul 27 02:51:14.599: INFO: created test-pod-2 +Jul 27 02:51:14.620: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be running 07/27/23 02:51:14.62 +Jul 27 02:51:14.620: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-708' to be running and ready +Jul 27 02:51:14.732: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Jul 27 02:51:14.732: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Jul 27 02:51:14.732: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Jul 27 02:51:14.732: INFO: 0 / 3 pods in namespace 'pods-708' are running and ready (0 seconds elapsed) +Jul 27 02:51:14.732: INFO: expected 0 pod replicas in namespace 'pods-708', 0 are Running and Ready. 
+Jul 27 02:51:14.732: INFO: POD NODE PHASE GRACE CONDITIONS +Jul 27 02:51:14.732: INFO: test-pod-1 10.245.128.19 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:51:14 +0000 UTC }] +Jul 27 02:51:14.732: INFO: test-pod-2 10.245.128.19 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:51:14 +0000 UTC }] +Jul 27 02:51:14.732: INFO: test-pod-3 10.245.128.19 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:51:14 +0000 UTC }] +Jul 27 02:51:14.732: INFO: +Jul 27 02:51:16.771: INFO: 3 / 3 pods in namespace 'pods-708' are running and ready (2 seconds elapsed) +Jul 27 02:51:16.771: INFO: expected 0 pod replicas in namespace 'pods-708', 0 are Running and Ready. +STEP: waiting for all pods to be deleted 07/27/23 02:51:16.821 +Jul 27 02:51:16.832: INFO: Pod quantity 3 is different from expected quantity 0 +Jul 27 02:51:17.845: INFO: Pod quantity 3 is different from expected quantity 0 +Jul 27 02:51:18.845: INFO: Pod quantity 3 is different from expected quantity 0 +Jul 27 02:51:19.845: INFO: Pod quantity 2 is different from expected quantity 0 +[AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 -Jun 12 22:15:30.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 02:51:20.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-3912" for this suite. 06/12/23 22:15:30.755 +STEP: Destroying namespace "pods-708" for this suite. 
07/27/23 02:51:20.863 ------------------------------ -• [1.256 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl api-versions - test/e2e/kubectl/kubectl.go:818 - should check if v1 is in available api versions [Conformance] - test/e2e/kubectl/kubectl.go:824 +• [SLOW TEST] [6.505 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-node] Pods set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:15:29.529 - Jun 12 22:15:29.530: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 22:15:29.534 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:29.622 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:29.647 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:51:14.383 + Jul 27 02:51:14.383: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 02:51:14.383 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:51:14.509 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:51:14.521 + [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should check if v1 is in available api versions [Conformance] - test/e2e/kubectl/kubectl.go:824 - STEP: validating api versions 06/12/23 22:15:29.679 - Jun 12 22:15:29.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3912 api-versions' - Jun 12 22:15:30.553: INFO: stderr: "" - Jun 12 22:15:30.553: INFO: stdout: 
"admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napiserver.openshift.io/v1\napps.openshift.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nauthorization.openshift.io/v1\nautoscaling/v1\nautoscaling/v2\nbatch/v1\nbuild.openshift.io/v1\ncertificates.k8s.io/v1\ncloudcredential.openshift.io/v1\nconfig.openshift.io/v1\nconsole.openshift.io/v1\nconsole.openshift.io/v1alpha1\ncontrolplane.operator.openshift.io/v1alpha1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta2\nflowcontrol.apiserver.k8s.io/v1beta3\nhelm.openshift.io/v1beta1\nibm.com/v1alpha1\nimage.openshift.io/v1\nimageregistry.operator.openshift.io/v1\ningress.operator.openshift.io/v1\nk8s.cni.cncf.io/v1\nmachineconfiguration.openshift.io/v1\nmetrics.k8s.io/v1beta1\nmigration.k8s.io/v1alpha1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nmonitoring.coreos.com/v1beta1\nnetwork.operator.openshift.io/v1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\noauth.openshift.io/v1\noperator.openshift.io/v1\noperator.openshift.io/v1alpha1\noperator.tigera.io/v1\noperators.coreos.com/v1\noperators.coreos.com/v1alpha1\noperators.coreos.com/v1alpha2\noperators.coreos.com/v2\npackages.operators.coreos.com/v1\nperformance.openshift.io/v1\nperformance.openshift.io/v1alpha1\nperformance.openshift.io/v2\npolicy/v1\nproject.openshift.io/v1\nquota.openshift.io/v1\nrbac.authorization.k8s.io/v1\nroute.openshift.io/v1\nsamples.operator.openshift.io/v1\nscheduling.k8s.io/v1\nsecurity.internal.openshift.io/v1\nsecurity.openshift.io/v1\nsnapshot.storage.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntemplate.openshift.io/v1\ntuned.openshift.io/v1\nuser.openshift.io/v1\nv1\nwhereabouts.cni.cncf.io/v1alpha1\n" - [AfterEach] [sig-cli] Kubectl client + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 + STEP: Create set of pods 07/27/23 02:51:14.531 + Jul 27 02:51:14.575: INFO: created test-pod-1 + Jul 27 02:51:14.599: INFO: created test-pod-2 + Jul 27 02:51:14.620: INFO: created test-pod-3 + STEP: waiting for all 3 pods to be running 07/27/23 02:51:14.62 + Jul 27 02:51:14.620: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-708' to be running and ready + Jul 27 02:51:14.732: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Jul 27 02:51:14.732: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Jul 27 02:51:14.732: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Jul 27 02:51:14.732: INFO: 0 / 3 pods in namespace 'pods-708' are running and ready (0 seconds elapsed) + Jul 27 02:51:14.732: INFO: expected 0 pod replicas in namespace 'pods-708', 0 are Running and Ready. 
+ Jul 27 02:51:14.732: INFO: POD NODE PHASE GRACE CONDITIONS + Jul 27 02:51:14.732: INFO: test-pod-1 10.245.128.19 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:51:14 +0000 UTC }] + Jul 27 02:51:14.732: INFO: test-pod-2 10.245.128.19 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:51:14 +0000 UTC }] + Jul 27 02:51:14.732: INFO: test-pod-3 10.245.128.19 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:51:14 +0000 UTC }] + Jul 27 02:51:14.732: INFO: + Jul 27 02:51:16.771: INFO: 3 / 3 pods in namespace 'pods-708' are running and ready (2 seconds elapsed) + Jul 27 02:51:16.771: INFO: expected 0 pod replicas in namespace 'pods-708', 0 are Running and Ready. + STEP: waiting for all pods to be deleted 07/27/23 02:51:16.821 + Jul 27 02:51:16.832: INFO: Pod quantity 3 is different from expected quantity 0 + Jul 27 02:51:17.845: INFO: Pod quantity 3 is different from expected quantity 0 + Jul 27 02:51:18.845: INFO: Pod quantity 3 is different from expected quantity 0 + Jul 27 02:51:19.845: INFO: Pod quantity 2 is different from expected quantity 0 + [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 - Jun 12 22:15:30.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 02:51:20.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-3912" for this suite. 06/12/23 22:15:30.755 + STEP: Destroying namespace "pods-708" for this suite. 
07/27/23 02:51:20.863 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-storage] EmptyDir volumes - pod should support shared volumes between containers [Conformance] - test/e2e/common/storage/empty_dir.go:227 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:352 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:15:30.788 -Jun 12 22:15:30.788: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 22:15:30.79 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:31.102 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:31.179 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 02:51:20.888 +Jul 27 02:51:20.888: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:51:20.889 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:51:20.935 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:51:20.947 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[It] pod should support shared volumes between containers [Conformance] - test/e2e/common/storage/empty_dir.go:227 -STEP: Creating Pod 06/12/23 22:15:31.196 -Jun 12 22:15:31.227: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e" in namespace "emptydir-6366" to be "running" -Jun 12 22:15:31.238: INFO: Pod "pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.101427ms -Jun 12 22:15:33.271: INFO: Pod "pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043580346s -Jun 12 22:15:35.246: INFO: Pod "pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.018803386s -Jun 12 22:15:35.246: INFO: Pod "pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e" satisfied condition "running" -STEP: Reading file content from the nginx-container 06/12/23 22:15:35.246 -Jun 12 22:15:35.246: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6366 PodName:pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 22:15:35.246: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 22:15:35.248: INFO: ExecWithOptions: Clientset creation -Jun 12 22:15:35.248: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/emptydir-6366/pods/pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) -Jun 12 22:15:35.464: INFO: Exec stderr: "" -[AfterEach] [sig-storage] EmptyDir volumes - test/e2e/framework/node/init/init.go:32 -Jun 12 22:15:35.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes - tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-6366" for this suite. 06/12/23 22:15:35.493 ------------------------------- -• [4.728 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - pod should support shared volumes between containers [Conformance] - test/e2e/common/storage/empty_dir.go:227 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:326 +[It] should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:352 +STEP: creating a replication controller 07/27/23 02:51:20.958 +Jul 27 02:51:20.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 create -f -' +Jul 27 02:51:21.358: INFO: stderr: "" +Jul 27 02:51:21.358: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 07/27/23 02:51:21.358 +Jul 27 02:51:21.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jul 27 02:51:21.459: INFO: stderr: "" +Jul 27 02:51:21.459: INFO: stdout: "update-demo-nautilus-92h4v update-demo-nautilus-xph4r " +Jul 27 02:51:21.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:51:21.536: INFO: stderr: "" +Jul 27 02:51:21.536: INFO: stdout: "" +Jul 27 02:51:21.536: INFO: update-demo-nautilus-92h4v is created but not running +Jul 27 02:51:26.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jul 27 02:51:26.635: INFO: stderr: "" +Jul 27 02:51:26.635: INFO: stdout: "update-demo-nautilus-92h4v update-demo-nautilus-xph4r " +Jul 27 02:51:26.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:51:26.716: INFO: stderr: "" +Jul 27 02:51:26.716: INFO: stdout: "true" +Jul 27 02:51:26.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jul 27 02:51:26.798: INFO: stderr: "" +Jul 27 02:51:26.798: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jul 27 02:51:26.798: INFO: validating pod update-demo-nautilus-92h4v +Jul 27 02:51:26.825: INFO: got data: { + "image": "nautilus.jpg" +} - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:15:30.788 - Jun 12 22:15:30.788: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 22:15:30.79 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:31.102 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:31.179 - [BeforeEach] [sig-storage] EmptyDir volumes - test/e2e/framework/metrics/init/init.go:31 - [It] pod should support shared volumes between containers [Conformance] - test/e2e/common/storage/empty_dir.go:227 - STEP: Creating Pod 06/12/23 22:15:31.196 - Jun 12 22:15:31.227: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e" in namespace "emptydir-6366" to be "running" - Jun 12 22:15:31.238: INFO: Pod "pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.101427ms - Jun 12 22:15:33.271: INFO: Pod "pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043580346s - Jun 12 22:15:35.246: INFO: Pod "pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.018803386s - Jun 12 22:15:35.246: INFO: Pod "pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e" satisfied condition "running" - STEP: Reading file content from the nginx-container 06/12/23 22:15:35.246 - Jun 12 22:15:35.246: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6366 PodName:pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 22:15:35.246: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 22:15:35.248: INFO: ExecWithOptions: Clientset creation - Jun 12 22:15:35.248: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/emptydir-6366/pods/pod-sharedvolume-03b4ee2b-4497-49dc-8f5f-10e95abd5d8e/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) - Jun 12 22:15:35.464: INFO: Exec stderr: "" - [AfterEach] [sig-storage] EmptyDir volumes - test/e2e/framework/node/init/init.go:32 - Jun 12 22:15:35.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes - tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-6366" for this suite. 06/12/23 22:15:35.493 - << End Captured GinkgoWriter Output ------------------------------- -SSSSSSSSSSSSSSS ------------------------------- -[sig-network] DNS - should support configurable pod DNS nameservers [Conformance] - test/e2e/network/dns.go:411 -[BeforeEach] [sig-network] DNS - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:15:35.52 -Jun 12 22:15:35.520: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename dns 06/12/23 22:15:35.521 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:35.58 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:35.588 -[BeforeEach] [sig-network] DNS - test/e2e/framework/metrics/init/init.go:31 -[It] should support configurable pod DNS nameservers [Conformance] - test/e2e/network/dns.go:411 -STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
06/12/23 22:15:35.596 -Jun 12 22:15:35.618: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-8320 f8d267a4-afae-499b-9ce1-20c703790d85 138990 0 2023-06-12 22:15:35 +0000 UTC map[] map[openshift.io/scc:anyuid] [] [] [{e2e.test Update v1 2023-06-12 22:15:35 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fqlvv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqlvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SEL
inuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c65,c0,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} -Jun 12 22:15:35.619: INFO: Waiting up to 5m0s for pod "test-dns-nameservers" in namespace "dns-8320" to be "running and ready" -Jun 12 22:15:35.631: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 11.673807ms -Jun 12 22:15:35.632: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:15:37.666: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046241572s -Jun 12 22:15:37.666: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:15:39.642: INFO: Pod "test-dns-nameservers": Phase="Running", Reason="", readiness=true. Elapsed: 4.022610904s -Jun 12 22:15:39.643: INFO: The phase of Pod test-dns-nameservers is Running (Ready = true) -Jun 12 22:15:39.643: INFO: Pod "test-dns-nameservers" satisfied condition "running and ready" -STEP: Verifying customized DNS suffix list is configured on pod... 06/12/23 22:15:39.643 -Jun 12 22:15:39.644: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8320 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 22:15:39.644: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 22:15:39.647: INFO: ExecWithOptions: Clientset creation -Jun 12 22:15:39.647: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/dns-8320/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) -STEP: Verifying customized DNS server is configured on pod... 
06/12/23 22:15:40.026 -Jun 12 22:15:40.026: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8320 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 22:15:40.026: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 22:15:40.027: INFO: ExecWithOptions: Clientset creation -Jun 12 22:15:40.027: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/dns-8320/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) -Jun 12 22:15:40.324: INFO: Deleting pod test-dns-nameservers... -[AfterEach] [sig-network] DNS +Jul 27 02:51:26.825: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jul 27 02:51:26.825: INFO: update-demo-nautilus-92h4v is verified up and running +Jul 27 02:51:26.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-xph4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:51:26.935: INFO: stderr: "" +Jul 27 02:51:26.935: INFO: stdout: "true" +Jul 27 02:51:26.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-xph4r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jul 27 02:51:27.012: INFO: stderr: "" +Jul 27 02:51:27.012: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jul 27 02:51:27.012: INFO: validating pod update-demo-nautilus-xph4r +Jul 27 02:51:27.042: INFO: got data: { + "image": "nautilus.jpg" +} + +Jul 27 02:51:27.042: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jul 27 02:51:27.042: INFO: update-demo-nautilus-xph4r is verified up and running +STEP: scaling down the replication controller 07/27/23 02:51:27.042 +Jul 27 02:51:27.046: INFO: scanned /root for discovery docs: +Jul 27 02:51:27.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Jul 27 02:51:28.232: INFO: stderr: "" +Jul 27 02:51:28.232: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
07/27/23 02:51:28.232 +Jul 27 02:51:28.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jul 27 02:51:28.345: INFO: stderr: "" +Jul 27 02:51:28.345: INFO: stdout: "update-demo-nautilus-92h4v update-demo-nautilus-xph4r " +STEP: Replicas for name=update-demo: expected=1 actual=2 07/27/23 02:51:28.345 +Jul 27 02:51:33.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jul 27 02:51:33.425: INFO: stderr: "" +Jul 27 02:51:33.426: INFO: stdout: "update-demo-nautilus-92h4v " +Jul 27 02:51:33.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:51:33.510: INFO: stderr: "" +Jul 27 02:51:33.510: INFO: stdout: "true" +Jul 27 02:51:33.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jul 27 02:51:33.589: INFO: stderr: "" +Jul 27 02:51:33.589: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jul 27 02:51:33.589: INFO: validating pod update-demo-nautilus-92h4v +Jul 27 02:51:33.604: INFO: got data: { + "image": "nautilus.jpg" +} + +Jul 27 02:51:33.604: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jul 27 02:51:33.604: INFO: update-demo-nautilus-92h4v is verified up and running +STEP: scaling up the replication controller 07/27/23 02:51:33.604 +Jul 27 02:51:33.606: INFO: scanned /root for discovery docs: +Jul 27 02:51:33.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Jul 27 02:51:34.737: INFO: stderr: "" +Jul 27 02:51:34.737: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 07/27/23 02:51:34.737 +Jul 27 02:51:34.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jul 27 02:51:34.830: INFO: stderr: "" +Jul 27 02:51:34.830: INFO: stdout: "update-demo-nautilus-92h4v update-demo-nautilus-gnft4 " +Jul 27 02:51:34.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:51:34.916: INFO: stderr: "" +Jul 27 02:51:34.916: INFO: stdout: "true" +Jul 27 02:51:34.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jul 27 02:51:34.993: INFO: stderr: "" +Jul 27 02:51:34.993: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jul 27 02:51:34.993: INFO: validating pod update-demo-nautilus-92h4v +Jul 27 02:51:35.010: INFO: got data: { + "image": "nautilus.jpg" +} + +Jul 27 02:51:35.010: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jul 27 02:51:35.010: INFO: update-demo-nautilus-92h4v is verified up and running +Jul 27 02:51:35.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-gnft4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:51:35.090: INFO: stderr: "" +Jul 27 02:51:35.090: INFO: stdout: "" +Jul 27 02:51:35.090: INFO: update-demo-nautilus-gnft4 is created but not running +Jul 27 02:51:40.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jul 27 02:51:40.213: INFO: stderr: "" +Jul 27 02:51:40.213: INFO: stdout: "update-demo-nautilus-92h4v update-demo-nautilus-gnft4 " +Jul 27 02:51:40.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:51:40.304: INFO: stderr: "" +Jul 27 02:51:40.304: INFO: stdout: "true" +Jul 27 02:51:40.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jul 27 02:51:40.384: INFO: stderr: "" +Jul 27 02:51:40.384: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jul 27 02:51:40.384: INFO: validating pod update-demo-nautilus-92h4v +Jul 27 02:51:40.402: INFO: got data: { + "image": "nautilus.jpg" +} + +Jul 27 02:51:40.402: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jul 27 02:51:40.402: INFO: update-demo-nautilus-92h4v is verified up and running +Jul 27 02:51:40.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-gnft4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jul 27 02:51:40.471: INFO: stderr: "" +Jul 27 02:51:40.471: INFO: stdout: "true" +Jul 27 02:51:40.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-gnft4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jul 27 02:51:40.547: INFO: stderr: "" +Jul 27 02:51:40.547: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jul 27 02:51:40.547: INFO: validating pod update-demo-nautilus-gnft4 +Jul 27 02:51:40.569: INFO: got data: { + "image": "nautilus.jpg" +} + +Jul 27 02:51:40.569: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jul 27 02:51:40.569: INFO: update-demo-nautilus-gnft4 is verified up and running +STEP: using delete to clean up resources 07/27/23 02:51:40.569 +Jul 27 02:51:40.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 delete --grace-period=0 --force -f -' +Jul 27 02:51:40.656: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jul 27 02:51:40.656: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jul 27 02:51:40.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get rc,svc -l name=update-demo --no-headers' +Jul 27 02:51:40.751: INFO: stderr: "No resources found in kubectl-7945 namespace.\n" +Jul 27 02:51:40.753: INFO: stdout: "" +Jul 27 02:51:40.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jul 27 02:51:40.850: INFO: stderr: "" +Jul 27 02:51:40.850: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 22:15:40.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] DNS +Jul 27 02:51:40.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "dns-8320" for this suite. 06/12/23 22:15:40.366 +STEP: Destroying namespace "kubectl-7945" for this suite. 
07/27/23 02:51:40.866 ------------------------------ -• [4.869 seconds] -[sig-network] DNS -test/e2e/network/common/framework.go:23 - should support configurable pod DNS nameservers [Conformance] - test/e2e/network/dns.go:411 +• [SLOW TEST] [20.003 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Update Demo + test/e2e/kubectl/kubectl.go:324 + should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:352 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] DNS + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:15:35.52 - Jun 12 22:15:35.520: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename dns 06/12/23 22:15:35.521 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:35.58 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:35.588 - [BeforeEach] [sig-network] DNS + STEP: Creating a kubernetes client 07/27/23 02:51:20.888 + Jul 27 02:51:20.888: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:51:20.889 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:51:20.935 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:51:20.947 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [It] should support configurable pod DNS nameservers [Conformance] - test/e2e/network/dns.go:411 - STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 06/12/23 22:15:35.596 - Jun 12 22:15:35.618: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-8320 f8d267a4-afae-499b-9ce1-20c703790d85 138990 0 2023-06-12 22:15:35 +0000 UTC map[] map[openshift.io/scc:anyuid] [] [] [{e2e.test Update v1 2023-06-12 22:15:35 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fqlvv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqlvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c65,c0,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Na
meservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} - Jun 12 22:15:35.619: INFO: Waiting up to 5m0s for pod "test-dns-nameservers" in namespace "dns-8320" to be "running and ready" - Jun 12 22:15:35.631: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 11.673807ms - Jun 12 22:15:35.632: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:15:37.666: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046241572s - Jun 12 22:15:37.666: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:15:39.642: INFO: Pod "test-dns-nameservers": Phase="Running", Reason="", readiness=true. Elapsed: 4.022610904s - Jun 12 22:15:39.643: INFO: The phase of Pod test-dns-nameservers is Running (Ready = true) - Jun 12 22:15:39.643: INFO: Pod "test-dns-nameservers" satisfied condition "running and ready" - STEP: Verifying customized DNS suffix list is configured on pod... 06/12/23 22:15:39.643 - Jun 12 22:15:39.644: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8320 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 22:15:39.644: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 22:15:39.647: INFO: ExecWithOptions: Clientset creation - Jun 12 22:15:39.647: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/dns-8320/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) - STEP: Verifying customized DNS server is configured on pod... 06/12/23 22:15:40.026 - Jun 12 22:15:40.026: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8320 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 22:15:40.026: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 22:15:40.027: INFO: ExecWithOptions: Clientset creation - Jun 12 22:15:40.027: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/dns-8320/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) - Jun 12 22:15:40.324: INFO: Deleting pod test-dns-nameservers... 
- [AfterEach] [sig-network] DNS + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:326 + [It] should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:352 + STEP: creating a replication controller 07/27/23 02:51:20.958 + Jul 27 02:51:20.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 create -f -' + Jul 27 02:51:21.358: INFO: stderr: "" + Jul 27 02:51:21.358: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" + STEP: waiting for all containers in name=update-demo pods to come up. 07/27/23 02:51:21.358 + Jul 27 02:51:21.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jul 27 02:51:21.459: INFO: stderr: "" + Jul 27 02:51:21.459: INFO: stdout: "update-demo-nautilus-92h4v update-demo-nautilus-xph4r " + Jul 27 02:51:21.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:51:21.536: INFO: stderr: "" + Jul 27 02:51:21.536: INFO: stdout: "" + Jul 27 02:51:21.536: INFO: update-demo-nautilus-92h4v is created but not running + Jul 27 02:51:26.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jul 27 02:51:26.635: INFO: stderr: "" + Jul 27 02:51:26.635: INFO: stdout: "update-demo-nautilus-92h4v update-demo-nautilus-xph4r " + Jul 27 02:51:26.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:51:26.716: INFO: stderr: "" + Jul 27 02:51:26.716: INFO: stdout: "true" + Jul 27 02:51:26.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Jul 27 02:51:26.798: INFO: stderr: "" + Jul 27 02:51:26.798: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Jul 27 02:51:26.798: INFO: validating pod update-demo-nautilus-92h4v + Jul 27 02:51:26.825: INFO: got data: { + "image": "nautilus.jpg" + } + + Jul 27 02:51:26.825: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Jul 27 02:51:26.825: INFO: update-demo-nautilus-92h4v is verified up and running + Jul 27 02:51:26.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-xph4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:51:26.935: INFO: stderr: "" + Jul 27 02:51:26.935: INFO: stdout: "true" + Jul 27 02:51:26.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-xph4r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Jul 27 02:51:27.012: INFO: stderr: "" + Jul 27 02:51:27.012: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Jul 27 02:51:27.012: INFO: validating pod update-demo-nautilus-xph4r + Jul 27 02:51:27.042: INFO: got data: { + "image": "nautilus.jpg" + } + + Jul 27 02:51:27.042: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Jul 27 02:51:27.042: INFO: update-demo-nautilus-xph4r is verified up and running + STEP: scaling down the replication controller 07/27/23 02:51:27.042 + Jul 27 02:51:27.046: INFO: scanned /root for discovery docs: + Jul 27 02:51:27.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 scale rc update-demo-nautilus --replicas=1 --timeout=5m' + Jul 27 02:51:28.232: INFO: stderr: "" + Jul 27 02:51:28.232: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" + STEP: waiting for all containers in name=update-demo pods to come up. 07/27/23 02:51:28.232 + Jul 27 02:51:28.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jul 27 02:51:28.345: INFO: stderr: "" + Jul 27 02:51:28.345: INFO: stdout: "update-demo-nautilus-92h4v update-demo-nautilus-xph4r " + STEP: Replicas for name=update-demo: expected=1 actual=2 07/27/23 02:51:28.345 + Jul 27 02:51:33.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jul 27 02:51:33.425: INFO: stderr: "" + Jul 27 02:51:33.426: INFO: stdout: "update-demo-nautilus-92h4v " + Jul 27 02:51:33.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:51:33.510: INFO: stderr: "" + Jul 27 02:51:33.510: INFO: stdout: "true" + Jul 27 02:51:33.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Jul 27 02:51:33.589: INFO: stderr: "" + Jul 27 02:51:33.589: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Jul 27 02:51:33.589: INFO: validating pod update-demo-nautilus-92h4v + Jul 27 02:51:33.604: INFO: got data: { + "image": "nautilus.jpg" + } + + Jul 27 02:51:33.604: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+ Jul 27 02:51:33.604: INFO: update-demo-nautilus-92h4v is verified up and running + STEP: scaling up the replication controller 07/27/23 02:51:33.604 + Jul 27 02:51:33.606: INFO: scanned /root for discovery docs: + Jul 27 02:51:33.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 scale rc update-demo-nautilus --replicas=2 --timeout=5m' + Jul 27 02:51:34.737: INFO: stderr: "" + Jul 27 02:51:34.737: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" + STEP: waiting for all containers in name=update-demo pods to come up. 07/27/23 02:51:34.737 + Jul 27 02:51:34.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jul 27 02:51:34.830: INFO: stderr: "" + Jul 27 02:51:34.830: INFO: stdout: "update-demo-nautilus-92h4v update-demo-nautilus-gnft4 " + Jul 27 02:51:34.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:51:34.916: INFO: stderr: "" + Jul 27 02:51:34.916: INFO: stdout: "true" + Jul 27 02:51:34.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Jul 27 02:51:34.993: INFO: stderr: "" + Jul 27 02:51:34.993: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Jul 27 02:51:34.993: INFO: validating pod update-demo-nautilus-92h4v + Jul 27 02:51:35.010: INFO: got data: { + "image": "nautilus.jpg" + } + + Jul 27 02:51:35.010: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Jul 27 02:51:35.010: INFO: update-demo-nautilus-92h4v is verified up and running + Jul 27 02:51:35.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-gnft4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:51:35.090: INFO: stderr: "" + Jul 27 02:51:35.090: INFO: stdout: "" + Jul 27 02:51:35.090: INFO: update-demo-nautilus-gnft4 is created but not running + Jul 27 02:51:40.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jul 27 02:51:40.213: INFO: stderr: "" + Jul 27 02:51:40.213: INFO: stdout: "update-demo-nautilus-92h4v update-demo-nautilus-gnft4 " + Jul 27 02:51:40.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:51:40.304: INFO: stderr: "" + Jul 27 02:51:40.304: INFO: stdout: "true" + Jul 27 02:51:40.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-92h4v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Jul 27 02:51:40.384: INFO: stderr: "" + Jul 27 02:51:40.384: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Jul 27 02:51:40.384: INFO: validating pod update-demo-nautilus-92h4v + Jul 27 02:51:40.402: INFO: got data: { + "image": "nautilus.jpg" + } + + Jul 27 02:51:40.402: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Jul 27 02:51:40.402: INFO: update-demo-nautilus-92h4v is verified up and running + Jul 27 02:51:40.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-gnft4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jul 27 02:51:40.471: INFO: stderr: "" + Jul 27 02:51:40.471: INFO: stdout: "true" + Jul 27 02:51:40.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods update-demo-nautilus-gnft4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Jul 27 02:51:40.547: INFO: stderr: "" + Jul 27 02:51:40.547: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Jul 27 02:51:40.547: INFO: validating pod update-demo-nautilus-gnft4 + Jul 27 02:51:40.569: INFO: got data: { + "image": "nautilus.jpg" + } + + Jul 27 02:51:40.569: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Jul 27 02:51:40.569: INFO: update-demo-nautilus-gnft4 is verified up and running + STEP: using delete to clean up resources 07/27/23 02:51:40.569 + Jul 27 02:51:40.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 delete --grace-period=0 --force -f -' + Jul 27 02:51:40.656: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" + Jul 27 02:51:40.656: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" + Jul 27 02:51:40.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get rc,svc -l name=update-demo --no-headers' + Jul 27 02:51:40.751: INFO: stderr: "No resources found in kubectl-7945 namespace.\n" + Jul 27 02:51:40.753: INFO: stdout: "" + Jul 27 02:51:40.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-7945 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' + Jul 27 02:51:40.850: INFO: stderr: "" + Jul 27 02:51:40.850: INFO: stdout: "" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 22:15:40.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] DNS + Jul 27 02:51:40.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "dns-8320" for this suite. 06/12/23 22:15:40.366 + STEP: Destroying namespace "kubectl-7945" for this suite. 07/27/23 02:51:40.866 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSS ------------------------------ -[sig-storage] Projected configMap - should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:99 -[BeforeEach] [sig-storage] Projected configMap +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:908 +[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:15:40.401 -Jun 12 22:15:40.401: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:15:40.404 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:40.463 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:40.498 -[BeforeEach] [sig-storage] Projected configMap +STEP: Creating a kubernetes client 07/27/23 02:51:40.891 +Jul 27 02:51:40.891: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename statefulset 07/27/23 02:51:40.892 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:51:40.936 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:51:40.949 +[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:99 -STEP: Creating configMap with name projected-configmap-test-volume-map-8aaa1636-f9d2-4a5a-8324-52afcbd2c743 06/12/23 22:15:40.512 -STEP: Creating a pod to test consume configMaps 06/12/23 22:15:40.538 -Jun 12 
22:15:40.569: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede" in namespace "projected-8635" to be "Succeeded or Failed" -Jun 12 22:15:40.593: INFO: Pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede": Phase="Pending", Reason="", readiness=false. Elapsed: 23.099339ms -Jun 12 22:15:42.602: INFO: Pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032476496s -Jun 12 22:15:44.601: INFO: Pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031483687s -Jun 12 22:15:46.600: INFO: Pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030872348s -STEP: Saw pod success 06/12/23 22:15:46.601 -Jun 12 22:15:46.601: INFO: Pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede" satisfied condition "Succeeded or Failed" -Jun 12 22:15:46.613: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede container agnhost-container: -STEP: delete the pod 06/12/23 22:15:46.687 -Jun 12 22:15:46.705: INFO: Waiting for pod pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede to disappear -Jun 12 22:15:46.712: INFO: Pod pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede no longer exists -[AfterEach] [sig-storage] Projected configMap +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-9611 07/27/23 02:51:40.96 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:908 +Jul 27 02:51:41.016: INFO: Found 0 stateful pods, waiting for 1 +Jul 27 02:51:51.030: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet 07/27/23 02:51:51.052 +W0727 02:51:51.070922 20 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" +Jul 27 02:51:51.099: INFO: Found 1 stateful pods, waiting for 2 +Jul 27 02:52:01.111: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 02:52:01.111: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets 07/27/23 02:52:01.131 +STEP: Delete all of the StatefulSets 07/27/23 02:52:01.145 +STEP: Verify that StatefulSets have been deleted 07/27/23 02:52:01.173 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jul 27 02:52:01.186: INFO: Deleting all statefulset in ns statefulset-9611 +[AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 -Jun 12 22:15:46.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected configMap +Jul 27 02:52:01.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 -STEP: Destroying namespace 
"projected-8635" for this suite. 06/12/23 22:15:46.767 +STEP: Destroying namespace "statefulset-9611" for this suite. 07/27/23 02:52:01.266 ------------------------------ -• [SLOW TEST] [6.410 seconds] -[sig-storage] Projected configMap -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:99 +• [SLOW TEST] [20.399 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:908 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected configMap + [BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:15:40.401 - Jun 12 22:15:40.401: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:15:40.404 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:40.463 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:40.498 - [BeforeEach] [sig-storage] Projected configMap + STEP: Creating a kubernetes client 07/27/23 02:51:40.891 + Jul 27 02:51:40.891: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename statefulset 07/27/23 02:51:40.892 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:51:40.936 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:51:40.949 + [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:99 - STEP: Creating configMap with name projected-configmap-test-volume-map-8aaa1636-f9d2-4a5a-8324-52afcbd2c743 06/12/23 22:15:40.512 - STEP: Creating a pod to test consume configMaps 06/12/23 22:15:40.538 - Jun 12 22:15:40.569: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede" in namespace "projected-8635" to be "Succeeded or Failed" - Jun 12 22:15:40.593: INFO: Pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede": Phase="Pending", Reason="", readiness=false. Elapsed: 23.099339ms - Jun 12 22:15:42.602: INFO: Pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032476496s - Jun 12 22:15:44.601: INFO: Pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031483687s - Jun 12 22:15:46.600: INFO: Pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.030872348s - STEP: Saw pod success 06/12/23 22:15:46.601 - Jun 12 22:15:46.601: INFO: Pod "pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede" satisfied condition "Succeeded or Failed" - Jun 12 22:15:46.613: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede container agnhost-container: - STEP: delete the pod 06/12/23 22:15:46.687 - Jun 12 22:15:46.705: INFO: Waiting for pod pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede to disappear - Jun 12 22:15:46.712: INFO: Pod pod-projected-configmaps-155e326a-ff01-44c0-b084-a1c505dddede no longer exists - [AfterEach] [sig-storage] Projected configMap + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-9611 07/27/23 02:51:40.96 + [It] should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:908 + Jul 27 02:51:41.016: INFO: Found 0 stateful pods, waiting for 1 + Jul 27 02:51:51.030: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: patching the StatefulSet 07/27/23 02:51:51.052 + W0727 02:51:51.070922 20 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" + Jul 27 02:51:51.099: INFO: Found 1 stateful pods, waiting for 2 + Jul 27 02:52:01.111: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 02:52:01.111: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true + STEP: Listing all StatefulSets 07/27/23 02:52:01.131 + STEP: Delete all of the StatefulSets 07/27/23 02:52:01.145 + STEP: Verify that StatefulSets have been deleted 07/27/23 02:52:01.173 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jul 27 02:52:01.186: INFO: Deleting all statefulset in ns statefulset-9611 + [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 - Jun 12 22:15:46.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected configMap + Jul 27 02:52:01.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 - STEP: Destroying namespace "projected-8635" for this suite. 06/12/23 22:15:46.767 + STEP: Destroying namespace "statefulset-9611" for this suite. 
07/27/23 02:52:01.266 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - listing mutating webhooks should work [Conformance] - test/e2e/apimachinery/webhook.go:656 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-node] Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 +[BeforeEach] [sig-node] Containers set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:15:46.814 -Jun 12 22:15:46.814: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 22:15:46.817 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:46.873 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:46.882 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:52:01.29 +Jul 27 02:52:01.290: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename containers 07/27/23 02:52:01.291 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:52:01.334 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:52:01.345 +[BeforeEach] [sig-node] Containers test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 22:15:47.105 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 22:15:48.601 -STEP: Deploying the webhook pod 06/12/23 22:15:48.632 -STEP: Wait for the deployment to be ready 06/12/23 22:15:48.658 -Jun 12 22:15:48.673: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created -Jun 12 22:15:50.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 15, 48, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 15, 48, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 15, 48, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 15, 48, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 22:15:52.737 -STEP: Verifying the service has paired with the endpoint 06/12/23 22:15:52.782 -Jun 12 22:15:53.783: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] listing mutating webhooks should work [Conformance] - test/e2e/apimachinery/webhook.go:656 -STEP: Listing all of the created validation webhooks 06/12/23 22:15:54.01 -STEP: Creating a configMap that should be mutated 06/12/23 22:15:54.075 -STEP: Deleting the collection of validation webhooks 06/12/23 22:15:54.204 -STEP: Creating a configMap that should not be 
mutated 06/12/23 22:15:54.277 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 +Jul 27 02:52:02.386: INFO: Waiting up to 5m0s for pod "client-containers-fe23c615-b5d9-48f9-ae4e-ebc112c189ff" in namespace "containers-8815" to be "running" +Jul 27 02:52:02.397: INFO: Pod "client-containers-fe23c615-b5d9-48f9-ae4e-ebc112c189ff": Phase="Pending", Reason="", readiness=false. Elapsed: 11.521973ms +Jul 27 02:52:04.409: INFO: Pod "client-containers-fe23c615-b5d9-48f9-ae4e-ebc112c189ff": Phase="Running", Reason="", readiness=true. Elapsed: 2.023806111s +Jul 27 02:52:04.409: INFO: Pod "client-containers-fe23c615-b5d9-48f9-ae4e-ebc112c189ff" satisfied condition "running" +[AfterEach] [sig-node] Containers test/e2e/framework/node/init/init.go:32 -Jun 12 22:15:54.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 02:52:04.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Containers dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Containers tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-53" for this suite. 06/12/23 22:15:54.466 -STEP: Destroying namespace "webhook-53-markers" for this suite. 06/12/23 22:15:54.493 +STEP: Destroying namespace "containers-8815" for this suite. 
07/27/23 02:52:04.456 ------------------------------ -• [SLOW TEST] [7.708 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - listing mutating webhooks should work [Conformance] - test/e2e/apimachinery/webhook.go:656 +• [3.186 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] Containers set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:15:46.814 - Jun 12 22:15:46.814: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 22:15:46.817 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:46.873 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:46.882 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:52:01.29 + Jul 27 02:52:01.290: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename containers 07/27/23 02:52:01.291 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:52:01.334 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:52:01.345 + [BeforeEach] [sig-node] Containers test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 22:15:47.105 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 22:15:48.601 - STEP: Deploying the webhook pod 06/12/23 22:15:48.632 - STEP: Wait for the deployment to be ready 06/12/23 22:15:48.658 - Jun 12 22:15:48.673: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created - Jun 12 22:15:50.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 15, 48, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 15, 48, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 15, 48, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 15, 48, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 22:15:52.737 - STEP: Verifying the service has paired with the endpoint 06/12/23 22:15:52.782 - Jun 12 22:15:53.783: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] listing mutating webhooks should work [Conformance] - test/e2e/apimachinery/webhook.go:656 - STEP: Listing all of the created validation webhooks 06/12/23 22:15:54.01 - STEP: Creating a configMap that should be mutated 06/12/23 22:15:54.075 - STEP: Deleting the collection of validation 
webhooks 06/12/23 22:15:54.204 - STEP: Creating a configMap that should not be mutated 06/12/23 22:15:54.277 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 + Jul 27 02:52:02.386: INFO: Waiting up to 5m0s for pod "client-containers-fe23c615-b5d9-48f9-ae4e-ebc112c189ff" in namespace "containers-8815" to be "running" + Jul 27 02:52:02.397: INFO: Pod "client-containers-fe23c615-b5d9-48f9-ae4e-ebc112c189ff": Phase="Pending", Reason="", readiness=false. Elapsed: 11.521973ms + Jul 27 02:52:04.409: INFO: Pod "client-containers-fe23c615-b5d9-48f9-ae4e-ebc112c189ff": Phase="Running", Reason="", readiness=true. Elapsed: 2.023806111s + Jul 27 02:52:04.409: INFO: Pod "client-containers-fe23c615-b5d9-48f9-ae4e-ebc112c189ff" satisfied condition "running" + [AfterEach] [sig-node] Containers test/e2e/framework/node/init/init.go:32 - Jun 12 22:15:54.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 02:52:04.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Containers dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Containers tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-53" for this suite. 06/12/23 22:15:54.466 - STEP: Destroying namespace "webhook-53-markers" for this suite. 06/12/23 22:15:54.493 + STEP: Destroying namespace "containers-8815" for this suite. 
07/27/23 02:52:04.456 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS ------------------------------- -[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath - runs ReplicaSets to verify preemption running path [Conformance] - test/e2e/scheduling/preemption.go:624 +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:814 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:15:54.523 -Jun 12 22:15:54.523: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sched-preemption 06/12/23 22:15:54.526 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:54.586 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:54.596 +STEP: Creating a kubernetes client 07/27/23 02:52:04.477 +Jul 27 02:52:04.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sched-preemption 07/27/23 02:52:04.478 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:52:04.519 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:52:04.531 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97 -Jun 12 22:15:54.667: INFO: Waiting up to 1m0s for all nodes to be ready -Jun 12 22:16:54.969: INFO: Waiting for terminating namespaces to be deleted... -[BeforeEach] PreemptionExecutionPath +Jul 27 02:52:04.598: INFO: Waiting up to 1m0s for all nodes to be ready +Jul 27 02:53:04.784: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:16:55.01 -Jun 12 22:16:55.011: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename sched-preemption-path 06/12/23 22:16:55.013 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:16:55.071 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:16:55.092 -[BeforeEach] PreemptionExecutionPath +STEP: Creating a kubernetes client 07/27/23 02:53:04.802 +Jul 27 02:53:04.802: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sched-preemption-path 07/27/23 02:53:04.803 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:04.847 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:04.858 +[BeforeEach] PriorityClass endpoints test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] PreemptionExecutionPath - test/e2e/scheduling/preemption.go:576 -STEP: Finding an available node 06/12/23 22:16:55.105 -STEP: Trying to launch a pod without a label to get a node which can launch it. 06/12/23 22:16:55.105 -Jun 12 22:16:55.133: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-8892" to be "running" -Jun 12 22:16:55.141: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 7.71449ms -Jun 12 22:16:57.150: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016743472s -Jun 12 22:16:59.170: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.037364386s -Jun 12 22:16:59.170: INFO: Pod "without-label" satisfied condition "running" -STEP: Explicitly delete pod here to free the resource it takes. 06/12/23 22:16:59.177 -Jun 12 22:16:59.203: INFO: found a healthy node: 10.138.75.70 -[It] runs ReplicaSets to verify preemption running path [Conformance] - test/e2e/scheduling/preemption.go:624 -Jun 12 22:17:11.403: INFO: pods created so far: [1 1 1] -Jun 12 22:17:11.403: INFO: length of pods created so far: 3 -Jun 12 22:17:17.427: INFO: pods created so far: [2 2 1] -[AfterEach] PreemptionExecutionPath +[BeforeEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:771 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:814 +Jul 27 02:53:04.935: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. +Jul 27 02:53:04.972: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints test/e2e/framework/node/init/init.go:32 -Jun 12 22:17:24.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] PreemptionExecutionPath - test/e2e/scheduling/preemption.go:549 +Jul 27 02:53:05.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:787 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 22:17:24.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 02:53:05.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84 -[DeferCleanup (Each)] PreemptionExecutionPath +[DeferCleanup (Each)] PriorityClass endpoints test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] PreemptionExecutionPath +[DeferCleanup (Each)] PriorityClass endpoints dump namespaces | framework.go:196 -[DeferCleanup (Each)] PreemptionExecutionPath +[DeferCleanup (Each)] PriorityClass endpoints tear down framework | framework.go:193 -STEP: Destroying namespace "sched-preemption-path-8892" for this suite. 06/12/23 22:17:24.768 +STEP: Destroying namespace "sched-preemption-path-9063" for this suite. 07/27/23 02:53:05.246 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "sched-preemption-5652" for this suite. 06/12/23 22:17:24.8 +STEP: Destroying namespace "sched-preemption-5719" for this suite. 
07/27/23 02:53:05.269 ------------------------------ -• [SLOW TEST] [90.318 seconds] +• [SLOW TEST] [60.813 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 - PreemptionExecutionPath - test/e2e/scheduling/preemption.go:537 - runs ReplicaSets to verify preemption running path [Conformance] - test/e2e/scheduling/preemption.go:624 + PriorityClass endpoints + test/e2e/scheduling/preemption.go:764 + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:814 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:15:54.523 - Jun 12 22:15:54.523: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sched-preemption 06/12/23 22:15:54.526 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:15:54.586 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:15:54.596 + STEP: Creating a kubernetes client 07/27/23 02:52:04.477 + Jul 27 02:52:04.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sched-preemption 07/27/23 02:52:04.478 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:52:04.519 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:52:04.531 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97 - Jun 12 22:15:54.667: INFO: Waiting up to 1m0s for all nodes to be ready - Jun 12 22:16:54.969: INFO: Waiting for terminating namespaces to be deleted... - [BeforeEach] PreemptionExecutionPath + Jul 27 02:52:04.598: INFO: Waiting up to 1m0s for all nodes to be ready + Jul 27 02:53:04.784: INFO: Waiting for terminating namespaces to be deleted... + [BeforeEach] PriorityClass endpoints set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:16:55.01 - Jun 12 22:16:55.011: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename sched-preemption-path 06/12/23 22:16:55.013 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:16:55.071 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:16:55.092 - [BeforeEach] PreemptionExecutionPath + STEP: Creating a kubernetes client 07/27/23 02:53:04.802 + Jul 27 02:53:04.802: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sched-preemption-path 07/27/23 02:53:04.803 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:04.847 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:04.858 + [BeforeEach] PriorityClass endpoints test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] PreemptionExecutionPath - test/e2e/scheduling/preemption.go:576 - STEP: Finding an available node 06/12/23 22:16:55.105 - STEP: Trying to launch a pod without a label to get a node which can launch it. 06/12/23 22:16:55.105 - Jun 12 22:16:55.133: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-8892" to be "running" - Jun 12 22:16:55.141: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.71449ms - Jun 12 22:16:57.150: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016743472s - Jun 12 22:16:59.170: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.037364386s - Jun 12 22:16:59.170: INFO: Pod "without-label" satisfied condition "running" - STEP: Explicitly delete pod here to free the resource it takes. 06/12/23 22:16:59.177 - Jun 12 22:16:59.203: INFO: found a healthy node: 10.138.75.70 - [It] runs ReplicaSets to verify preemption running path [Conformance] - test/e2e/scheduling/preemption.go:624 - Jun 12 22:17:11.403: INFO: pods created so far: [1 1 1] - Jun 12 22:17:11.403: INFO: length of pods created so far: 3 - Jun 12 22:17:17.427: INFO: pods created so far: [2 2 1] - [AfterEach] PreemptionExecutionPath + [BeforeEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:771 + [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:814 + Jul 27 02:53:04.935: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. + Jul 27 02:53:04.972: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. + [AfterEach] PriorityClass endpoints test/e2e/framework/node/init/init.go:32 - Jun 12 22:17:24.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] PreemptionExecutionPath - test/e2e/scheduling/preemption.go:549 + Jul 27 02:53:05.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:787 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 22:17:24.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 02:53:05.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84 - [DeferCleanup (Each)] PreemptionExecutionPath + [DeferCleanup (Each)] PriorityClass endpoints test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] PreemptionExecutionPath + [DeferCleanup (Each)] PriorityClass endpoints dump namespaces | framework.go:196 - [DeferCleanup (Each)] PreemptionExecutionPath + [DeferCleanup (Each)] PriorityClass endpoints tear down framework | framework.go:193 - STEP: Destroying namespace "sched-preemption-path-8892" for this suite. 06/12/23 22:17:24.768 + STEP: Destroying namespace "sched-preemption-path-9063" for this suite. 07/27/23 02:53:05.246 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "sched-preemption-5652" for this suite. 06/12/23 22:17:24.8 + STEP: Destroying namespace "sched-preemption-5719" for this suite. 
07/27/23 02:53:05.269 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] Deployment - deployment should delete old replica sets [Conformance] - test/e2e/apps/deployment.go:122 -[BeforeEach] [sig-apps] Deployment +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 +[BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:17:24.844 -Jun 12 22:17:24.844: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename deployment 06/12/23 22:17:24.849 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:17:24.916 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:17:24.934 -[BeforeEach] [sig-apps] Deployment +STEP: Creating a kubernetes client 07/27/23 02:53:05.291 +Jul 27 02:53:05.291: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 02:53:05.292 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:05.334 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:05.356 +[BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 -[It] deployment should delete old replica sets [Conformance] - test/e2e/apps/deployment.go:122 -Jun 12 22:17:24.968: INFO: Pod name cleanup-pod: Found 0 pods out of 1 -Jun 12 22:17:29.989: INFO: Pod name cleanup-pod: Found 1 pods out of 1 -STEP: ensuring each pod is running 06/12/23 22:17:29.989 -Jun 12 22:17:29.989: INFO: Creating deployment test-cleanup-deployment -STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 06/12/23 22:17:30.039 -[AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 -Jun 12 22:17:34.111: INFO: Deployment "test-cleanup-deployment": -&Deployment{ObjectMeta:{test-cleanup-deployment deployment-599 14668485-9c7a-42e2-95bf-7b53c0373a97 140313 1 2023-06-12 22:17:30 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-06-12 22:17:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:17:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00b4841d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-06-12 22:17:30 +0000 UTC,LastTransitionTime:2023-06-12 22:17:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-7698ff6f6b" has successfully progressed.,LastUpdateTime:2023-06-12 22:17:32 +0000 UTC,LastTransitionTime:2023-06-12 22:17:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} - -Jun 12 22:17:34.123: INFO: New ReplicaSet "test-cleanup-deployment-7698ff6f6b" of Deployment "test-cleanup-deployment": -&ReplicaSet{ObjectMeta:{test-cleanup-deployment-7698ff6f6b deployment-599 88953902-6278-489a-8971-15a979e9f747 140303 1 2023-06-12 22:17:30 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 14668485-9c7a-42e2-95bf-7b53c0373a97 0xc0052a7cc7 0xc0052a7cc8}] [] [{kube-controller-manager Update apps/v1 2023-06-12 22:17:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14668485-9c7a-42e2-95bf-7b53c0373a97\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:17:31 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 7698ff6f6b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0052a7d78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} -Jun 12 22:17:34.141: INFO: Pod "test-cleanup-deployment-7698ff6f6b-m55rw" is available: -&Pod{ObjectMeta:{test-cleanup-deployment-7698ff6f6b-m55rw test-cleanup-deployment-7698ff6f6b- deployment-599 587c5b7c-684f-445c-bf9a-1da4ddc82742 140302 0 2023-06-12 22:17:30 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[cni.projectcalico.org/containerID:ab91681f6500af4ae5e9c5f5e7dda9cdf0fa388f3f3c2af700752933caff4100 cni.projectcalico.org/podIP:172.30.224.60/32 cni.projectcalico.org/podIPs:172.30.224.60/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.60" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-cleanup-deployment-7698ff6f6b 88953902-6278-489a-8971-15a979e9f747 0xc004368277 0xc004368278}] [] [{kube-controller-manager Update v1 2023-06-12 22:17:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"88953902-6278-489a-8971-15a979e9f747\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 22:17:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-06-12 22:17:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.60\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-06-12 22:17:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-46g2j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-46g2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminat
ion-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c65,c30,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-fr7x2,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:17:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:17:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:17:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:17:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.60,StartTime:2023-06-12 22:17:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 22:17:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://ea32fb030d62dc5625cd66a6546fb4606bc5a61c0fd7bb138e124f5858288bc4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} -[AfterEach] [sig-apps] Deployment +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + 
test/e2e/common/storage/secrets_volume.go:79 +STEP: Creating secret with name secret-test-map-e67c3e2b-9024-4e21-9ca1-ea7fea315732 07/27/23 02:53:05.368 +STEP: Creating a pod to test consume secrets 07/27/23 02:53:05.383 +Jul 27 02:53:05.412: INFO: Waiting up to 5m0s for pod "pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20" in namespace "secrets-3129" to be "Succeeded or Failed" +Jul 27 02:53:05.423: INFO: Pod "pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20": Phase="Pending", Reason="", readiness=false. Elapsed: 10.856225ms +Jul 27 02:53:07.435: INFO: Pod "pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022662054s +Jul 27 02:53:09.436: INFO: Pod "pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02411905s +STEP: Saw pod success 07/27/23 02:53:09.436 +Jul 27 02:53:09.437: INFO: Pod "pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20" satisfied condition "Succeeded or Failed" +Jul 27 02:53:09.448: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20 container secret-volume-test: +STEP: delete the pod 07/27/23 02:53:09.505 +Jul 27 02:53:09.536: INFO: Waiting for pod pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20 to disappear +Jul 27 02:53:09.547: INFO: Pod pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20 no longer exists +[AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 22:17:34.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Deployment +Jul 27 02:53:09.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "deployment-599" for this suite. 06/12/23 22:17:34.16 ------------------------------- -• [SLOW TEST] [9.350 seconds] -[sig-apps] Deployment -test/e2e/apps/framework.go:23 - deployment should delete old replica sets [Conformance] - test/e2e/apps/deployment.go:122 +STEP: Destroying namespace "secrets-3129" for this suite. 
07/27/23 02:53:09.562 +------------------------------ +• [4.288 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Deployment + [BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:17:24.844 - Jun 12 22:17:24.844: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename deployment 06/12/23 22:17:24.849 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:17:24.916 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:17:24.934 - [BeforeEach] [sig-apps] Deployment + STEP: Creating a kubernetes client 07/27/23 02:53:05.291 + Jul 27 02:53:05.291: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 02:53:05.292 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:05.334 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:05.356 + [BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 - [It] deployment should delete old replica sets [Conformance] - test/e2e/apps/deployment.go:122 - Jun 12 22:17:24.968: INFO: Pod name cleanup-pod: Found 0 pods out of 1 - Jun 12 22:17:29.989: INFO: Pod name cleanup-pod: Found 1 pods out of 1 - STEP: ensuring each pod is running 06/12/23 22:17:29.989 - Jun 12 22:17:29.989: INFO: Creating deployment test-cleanup-deployment - STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 06/12/23 22:17:30.039 - [AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 - Jun 12 22:17:34.111: INFO: Deployment "test-cleanup-deployment": - &Deployment{ObjectMeta:{test-cleanup-deployment deployment-599 14668485-9c7a-42e2-95bf-7b53c0373a97 140313 1 2023-06-12 22:17:30 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-06-12 22:17:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:17:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} 
status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00b4841d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-06-12 22:17:30 +0000 UTC,LastTransitionTime:2023-06-12 22:17:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-7698ff6f6b" has successfully progressed.,LastUpdateTime:2023-06-12 22:17:32 +0000 UTC,LastTransitionTime:2023-06-12 22:17:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} - - Jun 12 22:17:34.123: INFO: New ReplicaSet "test-cleanup-deployment-7698ff6f6b" of Deployment "test-cleanup-deployment": - &ReplicaSet{ObjectMeta:{test-cleanup-deployment-7698ff6f6b deployment-599 88953902-6278-489a-8971-15a979e9f747 140303 1 2023-06-12 22:17:30 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 14668485-9c7a-42e2-95bf-7b53c0373a97 0xc0052a7cc7 0xc0052a7cc8}] [] [{kube-controller-manager Update apps/v1 2023-06-12 22:17:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14668485-9c7a-42e2-95bf-7b53c0373a97\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:17:31 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 7698ff6f6b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0052a7d78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} - Jun 12 22:17:34.141: INFO: Pod "test-cleanup-deployment-7698ff6f6b-m55rw" is available: - &Pod{ObjectMeta:{test-cleanup-deployment-7698ff6f6b-m55rw test-cleanup-deployment-7698ff6f6b- deployment-599 587c5b7c-684f-445c-bf9a-1da4ddc82742 140302 0 2023-06-12 22:17:30 +0000 UTC map[name:cleanup-pod pod-template-hash:7698ff6f6b] map[cni.projectcalico.org/containerID:ab91681f6500af4ae5e9c5f5e7dda9cdf0fa388f3f3c2af700752933caff4100 cni.projectcalico.org/podIP:172.30.224.60/32 cni.projectcalico.org/podIPs:172.30.224.60/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.60" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet test-cleanup-deployment-7698ff6f6b 88953902-6278-489a-8971-15a979e9f747 0xc004368277 0xc004368278}] [] [{kube-controller-manager Update v1 2023-06-12 22:17:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"88953902-6278-489a-8971-15a979e9f747\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 22:17:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-06-12 22:17:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.60\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-06-12 22:17:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-46g2j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-46g2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c65,c30,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-fr7x2,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:E
xists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:17:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:17:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:17:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:17:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.60,StartTime:2023-06-12 22:17:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 22:17:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://ea32fb030d62dc5625cd66a6546fb4606bc5a61c0fd7bb138e124f5858288bc4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} - [AfterEach] [sig-apps] Deployment + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 + STEP: Creating secret with name secret-test-map-e67c3e2b-9024-4e21-9ca1-ea7fea315732 07/27/23 02:53:05.368 + STEP: Creating a pod to test consume secrets 07/27/23 02:53:05.383 + Jul 27 02:53:05.412: INFO: Waiting up to 5m0s for pod "pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20" in namespace "secrets-3129" to be "Succeeded or Failed" + Jul 27 02:53:05.423: INFO: Pod "pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20": Phase="Pending", Reason="", readiness=false. Elapsed: 10.856225ms + Jul 27 02:53:07.435: INFO: Pod "pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022662054s + Jul 27 02:53:09.436: INFO: Pod "pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02411905s + STEP: Saw pod success 07/27/23 02:53:09.436 + Jul 27 02:53:09.437: INFO: Pod "pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20" satisfied condition "Succeeded or Failed" + Jul 27 02:53:09.448: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20 container secret-volume-test: + STEP: delete the pod 07/27/23 02:53:09.505 + Jul 27 02:53:09.536: INFO: Waiting for pod pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20 to disappear + Jul 27 02:53:09.547: INFO: Pod pod-secrets-ed33e9bc-0e60-4387-a55c-63b3ada66b20 no longer exists + [AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 22:17:34.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Deployment + Jul 27 02:53:09.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "deployment-599" for this suite. 06/12/23 22:17:34.16 + STEP: Destroying namespace "secrets-3129" for this suite. 07/27/23 02:53:09.562 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Garbage collector - should delete RS created by deployment when not orphaning [Conformance] - test/e2e/apimachinery/garbage_collector.go:491 -[BeforeEach] [sig-api-machinery] Garbage collector +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 +[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:17:34.21 -Jun 12 22:17:34.210: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename gc 06/12/23 22:17:34.213 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:17:34.265 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:17:34.278 -[BeforeEach] [sig-api-machinery] Garbage collector +STEP: Creating a kubernetes client 07/27/23 02:53:09.58 +Jul 27 02:53:09.580: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename var-expansion 07/27/23 02:53:09.58 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:09.619 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:09.63 +[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 -[It] should delete RS created by deployment when not orphaning [Conformance] - test/e2e/apimachinery/garbage_collector.go:491 -STEP: create the deployment 06/12/23 22:17:34.295 -STEP: Wait for the Deployment to create new ReplicaSet 06/12/23 22:17:34.309 -STEP: delete the deployment 06/12/23 22:17:34.824 -STEP: wait for all rs to be garbage collected 06/12/23 22:17:34.835 -STEP: expected 0 rs, got 1 rs 06/12/23 22:17:34.849 -STEP: expected 0 pods, got 2 pods 06/12/23 22:17:34.857 -STEP: Gathering metrics 06/12/23 22:17:35.377 -W0612 22:17:35.392359 23 metrics_grabber.go:151] Can't 
find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. -Jun 12 22:17:35.392: INFO: For apiserver_request_total: -For apiserver_request_latency_seconds: -For apiserver_init_events_total: -For garbage_collector_attempt_to_delete_queue_latency: -For garbage_collector_attempt_to_delete_work_duration: -For garbage_collector_attempt_to_orphan_queue_latency: -For garbage_collector_attempt_to_orphan_work_duration: -For garbage_collector_dirty_processing_latency_microseconds: -For garbage_collector_event_processing_latency_microseconds: -For garbage_collector_graph_changes_queue_latency: -For garbage_collector_graph_changes_work_duration: -For garbage_collector_orphan_processing_latency_microseconds: -For namespace_queue_latency: -For namespace_queue_latency_sum: -For namespace_queue_latency_count: -For namespace_retries: -For namespace_work_duration: -For namespace_work_duration_sum: -For namespace_work_duration_count: -For function_duration_seconds: -For errors_total: -For evicted_pods_total: - -[AfterEach] [sig-api-machinery] Garbage collector +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 +STEP: Creating a pod to test substitution in container's command 07/27/23 02:53:09.647 +Jul 27 02:53:09.677: INFO: Waiting up to 5m0s for pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387" in namespace "var-expansion-5520" to be "Succeeded or Failed" +Jul 27 02:53:09.690: INFO: Pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387": Phase="Pending", Reason="", readiness=false. Elapsed: 12.94324ms +Jul 27 02:53:11.703: INFO: Pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025991169s +Jul 27 02:53:13.701: INFO: Pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023981646s +Jul 27 02:53:15.754: INFO: Pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077127653s +STEP: Saw pod success 07/27/23 02:53:15.754 +Jul 27 02:53:15.755: INFO: Pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387" satisfied condition "Succeeded or Failed" +Jul 27 02:53:15.806: INFO: Trying to get logs from node 10.245.128.19 pod var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387 container dapi-container: +STEP: delete the pod 07/27/23 02:53:15.83 +Jul 27 02:53:15.866: INFO: Waiting for pod var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387 to disappear +Jul 27 02:53:15.885: INFO: Pod var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387 no longer exists +[AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 -Jun 12 22:17:35.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +Jul 27 02:53:15.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 -STEP: Destroying namespace "gc-6839" for this suite. 06/12/23 22:17:35.402 +STEP: Destroying namespace "var-expansion-5520" for this suite. 
07/27/23 02:53:15.93 ------------------------------ -• [1.213 seconds] -[sig-api-machinery] Garbage collector -test/e2e/apimachinery/framework.go:23 - should delete RS created by deployment when not orphaning [Conformance] - test/e2e/apimachinery/garbage_collector.go:491 +• [SLOW TEST] [6.371 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Garbage collector + [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:17:34.21 - Jun 12 22:17:34.210: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename gc 06/12/23 22:17:34.213 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:17:34.265 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:17:34.278 - [BeforeEach] [sig-api-machinery] Garbage collector + STEP: Creating a kubernetes client 07/27/23 02:53:09.58 + Jul 27 02:53:09.580: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename var-expansion 07/27/23 02:53:09.58 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:09.619 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:09.63 + [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 - [It] should delete RS created by deployment when not orphaning [Conformance] - test/e2e/apimachinery/garbage_collector.go:491 - STEP: create the deployment 06/12/23 22:17:34.295 - STEP: Wait for the Deployment to create new ReplicaSet 06/12/23 22:17:34.309 - STEP: delete the deployment 06/12/23 22:17:34.824 - STEP: wait for all rs to be garbage collected 06/12/23 22:17:34.835 - STEP: expected 0 rs, got 1 rs 06/12/23 22:17:34.849 - STEP: expected 0 pods, got 2 pods 06/12/23 22:17:34.857 - STEP: Gathering metrics 06/12/23 22:17:35.377 - W0612 22:17:35.392359 23 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
- Jun 12 22:17:35.392: INFO: For apiserver_request_total: - For apiserver_request_latency_seconds: - For apiserver_init_events_total: - For garbage_collector_attempt_to_delete_queue_latency: - For garbage_collector_attempt_to_delete_work_duration: - For garbage_collector_attempt_to_orphan_queue_latency: - For garbage_collector_attempt_to_orphan_work_duration: - For garbage_collector_dirty_processing_latency_microseconds: - For garbage_collector_event_processing_latency_microseconds: - For garbage_collector_graph_changes_queue_latency: - For garbage_collector_graph_changes_work_duration: - For garbage_collector_orphan_processing_latency_microseconds: - For namespace_queue_latency: - For namespace_queue_latency_sum: - For namespace_queue_latency_count: - For namespace_retries: - For namespace_work_duration: - For namespace_work_duration_sum: - For namespace_work_duration_count: - For function_duration_seconds: - For errors_total: - For evicted_pods_total: - - [AfterEach] [sig-api-machinery] Garbage collector + [It] should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 + STEP: Creating a pod to test substitution in container's command 07/27/23 02:53:09.647 + Jul 27 02:53:09.677: INFO: Waiting up to 5m0s for pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387" in namespace "var-expansion-5520" to be "Succeeded or Failed" + Jul 27 02:53:09.690: INFO: Pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387": Phase="Pending", Reason="", readiness=false. Elapsed: 12.94324ms + Jul 27 02:53:11.703: INFO: Pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025991169s + Jul 27 02:53:13.701: INFO: Pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023981646s + Jul 27 02:53:15.754: INFO: Pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077127653s + STEP: Saw pod success 07/27/23 02:53:15.754 + Jul 27 02:53:15.755: INFO: Pod "var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387" satisfied condition "Succeeded or Failed" + Jul 27 02:53:15.806: INFO: Trying to get logs from node 10.245.128.19 pod var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387 container dapi-container: + STEP: delete the pod 07/27/23 02:53:15.83 + Jul 27 02:53:15.866: INFO: Waiting for pod var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387 to disappear + Jul 27 02:53:15.885: INFO: Pod var-expansion-d9accf35-45bd-4783-8a9b-c2fcc8a2f387 no longer exists + [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 - Jun 12 22:17:35.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + Jul 27 02:53:15.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 - STEP: Destroying namespace "gc-6839" for this suite. 06/12/23 22:17:35.402 + STEP: Destroying namespace "var-expansion-5520" for this suite. 
07/27/23 02:53:15.93 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-node] Probing container - should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:152 -[BeforeEach] [sig-node] Probing container +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 +[BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:17:35.447 -Jun 12 22:17:35.448: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-probe 06/12/23 22:17:35.449 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:17:35.501 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:17:35.511 -[BeforeEach] [sig-node] Probing container +STEP: Creating a kubernetes client 07/27/23 02:53:15.951 +Jul 27 02:53:15.951: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename subpath 07/27/23 02:53:15.952 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:16.05 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:16.068 +[BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 -[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:152 -STEP: Creating pod busybox-d614b39f-9256-4689-9970-a87ec3974eb2 in namespace container-probe-6653 06/12/23 22:17:35.522 -Jun 12 22:17:35.544: INFO: Waiting up to 5m0s for pod "busybox-d614b39f-9256-4689-9970-a87ec3974eb2" in namespace "container-probe-6653" to be "not pending" -Jun 12 22:17:35.550: INFO: Pod "busybox-d614b39f-9256-4689-9970-a87ec3974eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252397ms -Jun 12 22:17:37.558: INFO: Pod "busybox-d614b39f-9256-4689-9970-a87ec3974eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014596981s -Jun 12 22:17:39.558: INFO: Pod "busybox-d614b39f-9256-4689-9970-a87ec3974eb2": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.014816161s -Jun 12 22:17:39.558: INFO: Pod "busybox-d614b39f-9256-4689-9970-a87ec3974eb2" satisfied condition "not pending" -Jun 12 22:17:39.559: INFO: Started pod busybox-d614b39f-9256-4689-9970-a87ec3974eb2 in namespace container-probe-6653 -STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 22:17:39.559 -Jun 12 22:17:39.565: INFO: Initial restart count of pod busybox-d614b39f-9256-4689-9970-a87ec3974eb2 is 0 -STEP: deleting the pod 06/12/23 22:21:41.531 -[AfterEach] [sig-node] Probing container +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 07/27/23 02:53:16.08 +[It] should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 +STEP: Creating pod pod-subpath-test-configmap-8jpv 07/27/23 02:53:16.139 +STEP: Creating a pod to test atomic-volume-subpath 07/27/23 02:53:16.139 +Jul 27 02:53:16.184: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8jpv" in namespace "subpath-2362" to be "Succeeded or Failed" +Jul 27 02:53:16.197: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.393448ms +Jul 27 02:53:18.210: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 2.025739317s +Jul 27 02:53:20.209: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 4.025282185s +Jul 27 02:53:22.210: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 6.025818884s +Jul 27 02:53:24.212: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 8.028452614s +Jul 27 02:53:26.214: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 10.030400925s +Jul 27 02:53:28.210: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 12.02566774s +Jul 27 02:53:30.211: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 14.027321577s +Jul 27 02:53:32.211: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 16.026469361s +Jul 27 02:53:34.211: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 18.027270651s +Jul 27 02:53:36.211: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 20.026670054s +Jul 27 02:53:38.212: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=false. Elapsed: 22.027493615s +Jul 27 02:53:40.210: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.026434334s +STEP: Saw pod success 07/27/23 02:53:40.211 +Jul 27 02:53:40.211: INFO: Pod "pod-subpath-test-configmap-8jpv" satisfied condition "Succeeded or Failed" +Jul 27 02:53:40.224: INFO: Trying to get logs from node 10.245.128.19 pod pod-subpath-test-configmap-8jpv container test-container-subpath-configmap-8jpv: +STEP: delete the pod 07/27/23 02:53:40.254 +Jul 27 02:53:40.286: INFO: Waiting for pod pod-subpath-test-configmap-8jpv to disappear +Jul 27 02:53:40.297: INFO: Pod pod-subpath-test-configmap-8jpv no longer exists +STEP: Deleting pod pod-subpath-test-configmap-8jpv 07/27/23 02:53:40.297 +Jul 27 02:53:40.297: INFO: Deleting pod "pod-subpath-test-configmap-8jpv" in namespace "subpath-2362" +[AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 -Jun 12 22:21:41.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Probing container +Jul 27 02:53:40.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-storage] Subpath tear down framework | framework.go:193 -STEP: Destroying namespace "container-probe-6653" for this suite. 06/12/23 22:21:41.594 +STEP: Destroying namespace "subpath-2362" for this suite. 07/27/23 02:53:40.325 ------------------------------ -• [SLOW TEST] [246.195 seconds] -[sig-node] Probing container -test/e2e/common/node/framework.go:23 - should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:152 +• [SLOW TEST] [24.396 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Probing container + [BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:17:35.447 - Jun 12 22:17:35.448: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-probe 06/12/23 22:17:35.449 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:17:35.501 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:17:35.511 - [BeforeEach] [sig-node] Probing container + STEP: Creating a kubernetes client 07/27/23 02:53:15.951 + Jul 27 02:53:15.951: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename subpath 07/27/23 02:53:15.952 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:16.05 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:16.068 + [BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 - [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:152 - STEP: Creating pod busybox-d614b39f-9256-4689-9970-a87ec3974eb2 in namespace container-probe-6653 06/12/23 22:17:35.522 - 
Jun 12 22:17:35.544: INFO: Waiting up to 5m0s for pod "busybox-d614b39f-9256-4689-9970-a87ec3974eb2" in namespace "container-probe-6653" to be "not pending" - Jun 12 22:17:35.550: INFO: Pod "busybox-d614b39f-9256-4689-9970-a87ec3974eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252397ms - Jun 12 22:17:37.558: INFO: Pod "busybox-d614b39f-9256-4689-9970-a87ec3974eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014596981s - Jun 12 22:17:39.558: INFO: Pod "busybox-d614b39f-9256-4689-9970-a87ec3974eb2": Phase="Running", Reason="", readiness=true. Elapsed: 4.014816161s - Jun 12 22:17:39.558: INFO: Pod "busybox-d614b39f-9256-4689-9970-a87ec3974eb2" satisfied condition "not pending" - Jun 12 22:17:39.559: INFO: Started pod busybox-d614b39f-9256-4689-9970-a87ec3974eb2 in namespace container-probe-6653 - STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 22:17:39.559 - Jun 12 22:17:39.565: INFO: Initial restart count of pod busybox-d614b39f-9256-4689-9970-a87ec3974eb2 is 0 - STEP: deleting the pod 06/12/23 22:21:41.531 - [AfterEach] [sig-node] Probing container + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 07/27/23 02:53:16.08 + [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 + STEP: Creating pod pod-subpath-test-configmap-8jpv 07/27/23 02:53:16.139 + STEP: Creating a pod to test atomic-volume-subpath 07/27/23 02:53:16.139 + Jul 27 02:53:16.184: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8jpv" in namespace "subpath-2362" to be "Succeeded or Failed" + Jul 27 02:53:16.197: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.393448ms + Jul 27 02:53:18.210: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 2.025739317s + Jul 27 02:53:20.209: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 4.025282185s + Jul 27 02:53:22.210: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 6.025818884s + Jul 27 02:53:24.212: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 8.028452614s + Jul 27 02:53:26.214: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 10.030400925s + Jul 27 02:53:28.210: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 12.02566774s + Jul 27 02:53:30.211: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 14.027321577s + Jul 27 02:53:32.211: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 16.026469361s + Jul 27 02:53:34.211: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 18.027270651s + Jul 27 02:53:36.211: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=true. Elapsed: 20.026670054s + Jul 27 02:53:38.212: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Running", Reason="", readiness=false. Elapsed: 22.027493615s + Jul 27 02:53:40.210: INFO: Pod "pod-subpath-test-configmap-8jpv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.026434334s + STEP: Saw pod success 07/27/23 02:53:40.211 + Jul 27 02:53:40.211: INFO: Pod "pod-subpath-test-configmap-8jpv" satisfied condition "Succeeded or Failed" + Jul 27 02:53:40.224: INFO: Trying to get logs from node 10.245.128.19 pod pod-subpath-test-configmap-8jpv container test-container-subpath-configmap-8jpv: + STEP: delete the pod 07/27/23 02:53:40.254 + Jul 27 02:53:40.286: INFO: Waiting for pod pod-subpath-test-configmap-8jpv to disappear + Jul 27 02:53:40.297: INFO: Pod pod-subpath-test-configmap-8jpv no longer exists + STEP: Deleting pod pod-subpath-test-configmap-8jpv 07/27/23 02:53:40.297 + Jul 27 02:53:40.297: INFO: Deleting pod "pod-subpath-test-configmap-8jpv" in namespace "subpath-2362" + [AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 - Jun 12 22:21:41.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Probing container + Jul 27 02:53:40.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-storage] Subpath tear down framework | framework.go:193 - STEP: Destroying namespace "container-probe-6653" for this suite. 06/12/23 22:21:41.594 + STEP: Destroying namespace "subpath-2362" for this suite. 07/27/23 02:53:40.325 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSS +SSS ------------------------------ -[sig-node] Container Runtime blackbox test on terminated container - should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:216 -[BeforeEach] [sig-node] Container Runtime +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 +[BeforeEach] [sig-apps] DisruptionController set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:21:41.645 -Jun 12 22:21:41.645: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-runtime 06/12/23 22:21:41.648 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:21:41.718 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:21:41.731 -[BeforeEach] [sig-node] Container Runtime +STEP: Creating a kubernetes client 07/27/23 02:53:40.348 +Jul 27 02:53:40.348: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename disruption 07/27/23 02:53:40.348 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:40.42 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:40.435 +[BeforeEach] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:31 -[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:216 -STEP: create the container 06/12/23 22:21:41.74 -STEP: wait for the container to reach Failed 06/12/23 22:21:41.79 -STEP: get the container status 06/12/23 22:21:46.899 -STEP: the container should be terminated 06/12/23 22:21:46.909 -STEP: the termination 
message should be set 06/12/23 22:21:46.91 -Jun 12 22:21:46.911: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- -STEP: delete the container 06/12/23 22:21:46.911 -[AfterEach] [sig-node] Container Runtime +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[It] should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 +STEP: Waiting for the pdb to be processed 07/27/23 02:53:40.488 +STEP: Updating PodDisruptionBudget status 07/27/23 02:53:42.516 +STEP: Waiting for all pods to be running 07/27/23 02:53:42.548 +Jul 27 02:53:42.562: INFO: running pods: 0 < 1 +Jul 27 02:53:44.579: INFO: running pods: 0 < 1 +STEP: locating a running pod 07/27/23 02:53:46.577 +STEP: Waiting for the pdb to be processed 07/27/23 02:53:46.619 +STEP: Patching PodDisruptionBudget status 07/27/23 02:53:46.642 +STEP: Waiting for the pdb to be processed 07/27/23 02:53:46.669 +[AfterEach] [sig-apps] DisruptionController test/e2e/framework/node/init/init.go:32 -Jun 12 22:21:46.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Container Runtime +Jul 27 02:53:46.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Container Runtime +[DeferCleanup (Each)] [sig-apps] DisruptionController dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Container Runtime +[DeferCleanup (Each)] [sig-apps] DisruptionController tear down framework | framework.go:193 -STEP: Destroying namespace "container-runtime-7818" for this suite. 06/12/23 22:21:46.962 +STEP: Destroying namespace "disruption-1859" for this suite. 
07/27/23 02:53:46.701 ------------------------------ -• [SLOW TEST] [5.341 seconds] -[sig-node] Container Runtime -test/e2e/common/node/framework.go:23 - blackbox test - test/e2e/common/node/runtime.go:44 - on terminated container - test/e2e/common/node/runtime.go:137 - should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:216 +• [SLOW TEST] [6.404 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Container Runtime + [BeforeEach] [sig-apps] DisruptionController set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:21:41.645 - Jun 12 22:21:41.645: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-runtime 06/12/23 22:21:41.648 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:21:41.718 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:21:41.731 - [BeforeEach] [sig-node] Container Runtime + STEP: Creating a kubernetes client 07/27/23 02:53:40.348 + Jul 27 02:53:40.348: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename disruption 07/27/23 02:53:40.348 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:40.42 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:40.435 + [BeforeEach] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:31 - [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:216 - STEP: create the container 06/12/23 22:21:41.74 - STEP: wait for the container to reach Failed 06/12/23 22:21:41.79 - STEP: get the container status 06/12/23 22:21:46.899 - STEP: the container should be terminated 06/12/23 22:21:46.909 - STEP: the termination message should be set 06/12/23 22:21:46.91 - Jun 12 22:21:46.911: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- - STEP: delete the container 06/12/23 22:21:46.911 - [AfterEach] [sig-node] Container Runtime + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [It] should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 + STEP: Waiting for the pdb to be processed 07/27/23 02:53:40.488 + STEP: Updating PodDisruptionBudget status 07/27/23 02:53:42.516 + STEP: Waiting for all pods to be running 07/27/23 02:53:42.548 + Jul 27 02:53:42.562: INFO: running pods: 0 < 1 + Jul 27 02:53:44.579: INFO: running pods: 0 < 1 + STEP: locating a running pod 07/27/23 02:53:46.577 + STEP: Waiting for the pdb to be processed 07/27/23 02:53:46.619 + STEP: Patching PodDisruptionBudget status 07/27/23 02:53:46.642 + STEP: Waiting for the pdb to be processed 07/27/23 02:53:46.669 + [AfterEach] [sig-apps] DisruptionController test/e2e/framework/node/init/init.go:32 - Jun 12 22:21:46.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Container Runtime + Jul 27 02:53:46.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] DisruptionController 
test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Container Runtime + [DeferCleanup (Each)] [sig-apps] DisruptionController dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Container Runtime + [DeferCleanup (Each)] [sig-apps] DisruptionController tear down framework | framework.go:193 - STEP: Destroying namespace "container-runtime-7818" for this suite. 06/12/23 22:21:46.962 + STEP: Destroying namespace "disruption-1859" for this suite. 07/27/23 02:53:46.701 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSS ------------------------------- -[sig-storage] Projected downwardAPI - should provide container's cpu limit [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:193 -[BeforeEach] [sig-storage] Projected downwardAPI +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:83 +[BeforeEach] [sig-apps] ReplicationController set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:21:46.99 -Jun 12 22:21:46.990: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:21:46.992 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:21:47.052 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:21:47.062 -[BeforeEach] [sig-storage] Projected downwardAPI +STEP: Creating a kubernetes client 07/27/23 02:53:46.752 +Jul 27 02:53:46.752: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replication-controller 07/27/23 02:53:46.752 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:46.803 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:46.814 +[BeforeEach] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 -[It] should provide container's cpu limit [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:193 -STEP: Creating a pod to test downward API volume plugin 06/12/23 22:21:47.071 -Jun 12 22:21:47.115: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82" in namespace "projected-5417" to be "Succeeded or Failed" -Jun 12 22:21:47.143: INFO: Pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82": Phase="Pending", Reason="", readiness=false. Elapsed: 27.910267ms -Jun 12 22:21:49.152: INFO: Pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036301358s -Jun 12 22:21:51.154: INFO: Pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038903399s -Jun 12 22:21:53.152: INFO: Pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.036897921s -STEP: Saw pod success 06/12/23 22:21:53.152 -Jun 12 22:21:53.152: INFO: Pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82" satisfied condition "Succeeded or Failed" -Jun 12 22:21:53.159: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82 container client-container: -STEP: delete the pod 06/12/23 22:21:53.205 -Jun 12 22:21:53.229: INFO: Waiting for pod downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82 to disappear -Jun 12 22:21:53.288: INFO: Pod downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82 no longer exists -[AfterEach] [sig-storage] Projected downwardAPI +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:83 +Jul 27 02:53:46.826: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota 07/27/23 02:53:46.863 +W0727 02:53:46.882325 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Checking rc "condition-test" has the desired failure condition set 07/27/23 02:53:46.882 +STEP: Scaling down rc "condition-test" to satisfy pod quota 07/27/23 02:53:47.91 +Jul 27 02:53:47.942: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set 07/27/23 02:53:47.942 +[AfterEach] [sig-apps] ReplicationController test/e2e/framework/node/init/init.go:32 -Jun 12 22:21:53.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +Jul 27 02:53:47.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-apps] ReplicationController dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-apps] ReplicationController tear down framework | framework.go:193 -STEP: Destroying namespace "projected-5417" for this suite. 06/12/23 22:21:53.314 +STEP: Destroying namespace "replication-controller-4053" for this suite. 
07/27/23 02:53:47.974 ------------------------------ -• [SLOW TEST] [6.347 seconds] -[sig-storage] Projected downwardAPI -test/e2e/common/storage/framework.go:23 - should provide container's cpu limit [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:193 +• [1.250 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:83 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-apps] ReplicationController set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:21:46.99 - Jun 12 22:21:46.990: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:21:46.992 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:21:47.052 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:21:47.062 - [BeforeEach] [sig-storage] Projected downwardAPI + STEP: Creating a kubernetes client 07/27/23 02:53:46.752 + Jul 27 02:53:46.752: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replication-controller 07/27/23 02:53:46.752 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:46.803 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:46.814 + [BeforeEach] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 - [It] should provide container's cpu limit [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:193 - STEP: Creating a pod to test downward API volume plugin 06/12/23 22:21:47.071 - Jun 12 22:21:47.115: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82" in namespace "projected-5417" to be "Succeeded or Failed" - Jun 12 22:21:47.143: INFO: Pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82": Phase="Pending", Reason="", readiness=false. Elapsed: 27.910267ms - Jun 12 22:21:49.152: INFO: Pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036301358s - Jun 12 22:21:51.154: INFO: Pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038903399s - Jun 12 22:21:53.152: INFO: Pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.036897921s - STEP: Saw pod success 06/12/23 22:21:53.152 - Jun 12 22:21:53.152: INFO: Pod "downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82" satisfied condition "Succeeded or Failed" - Jun 12 22:21:53.159: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82 container client-container: - STEP: delete the pod 06/12/23 22:21:53.205 - Jun 12 22:21:53.229: INFO: Waiting for pod downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82 to disappear - Jun 12 22:21:53.288: INFO: Pod downwardapi-volume-2549d3fc-0d33-448f-9474-79d5997b8b82 no longer exists - [AfterEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:83 + Jul 27 02:53:46.826: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace + STEP: Creating rc "condition-test" that asks for more than the allowed pod quota 07/27/23 02:53:46.863 + W0727 02:53:46.882325 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Checking rc "condition-test" has the desired failure condition set 07/27/23 02:53:46.882 + STEP: Scaling down rc "condition-test" to satisfy pod quota 07/27/23 02:53:47.91 + Jul 27 02:53:47.942: INFO: Updating replication controller "condition-test" + STEP: Checking rc "condition-test" has no failure condition set 07/27/23 02:53:47.942 + [AfterEach] [sig-apps] ReplicationController test/e2e/framework/node/init/init.go:32 - Jun 12 22:21:53.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + Jul 27 02:53:47.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-apps] ReplicationController dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-apps] ReplicationController tear down framework | framework.go:193 - STEP: Destroying namespace "projected-5417" for this suite. 06/12/23 22:21:53.314 + STEP: Destroying namespace "replication-controller-4053" for this suite. 
07/27/23 02:53:47.974 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-instrumentation] Events API - should delete a collection of events [Conformance] - test/e2e/instrumentation/events.go:207 -[BeforeEach] [sig-instrumentation] Events API +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:21:53.353 -Jun 12 22:21:53.353: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename events 06/12/23 22:21:53.355 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:21:53.407 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:21:53.419 -[BeforeEach] [sig-instrumentation] Events API +STEP: Creating a kubernetes client 07/27/23 02:53:48.002 +Jul 27 02:53:48.002: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 02:53:48.003 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:48.047 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:48.057 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-instrumentation] Events API - test/e2e/instrumentation/events.go:84 -[It] should delete a collection of events [Conformance] - test/e2e/instrumentation/events.go:207 -STEP: Create set of events 06/12/23 22:21:53.441 -STEP: get a list of Events with a label in the current namespace 06/12/23 22:21:53.478 -STEP: delete a list of events 06/12/23 22:21:53.487 -Jun 12 22:21:53.487: INFO: requesting DeleteCollection of events -STEP: check that the list of events matches the requested quantity 06/12/23 22:21:53.524 -[AfterEach] [sig-instrumentation] Events API +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 02:53:48.157 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:53:48.52 +STEP: Deploying the webhook pod 07/27/23 02:53:48.559 +STEP: Wait for the deployment to be ready 07/27/23 02:53:48.592 +Jul 27 02:53:48.619: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 02:53:50.652 +STEP: Verifying the service has paired with the endpoint 07/27/23 02:53:50.683 +Jul 27 02:53:51.684: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 +STEP: Creating a validating webhook configuration 07/27/23 02:53:51.696 +STEP: Creating a configMap that does not comply to the validation webhook rules 07/27/23 02:53:51.75 +STEP: Updating a validating webhook configuration's rules to not include the create operation 07/27/23 02:53:51.802 +STEP: Creating a configMap that does not comply to the validation webhook rules 07/27/23 02:53:51.832 +STEP: Patching a validating webhook configuration's rules to include the create operation 07/27/23 02:53:51.862 +STEP: Creating a 
configMap that does not comply to the validation webhook rules 07/27/23 02:53:51.883 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:21:53.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-instrumentation] Events API +Jul 27 02:53:51.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-instrumentation] Events API +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-instrumentation] Events API +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "events-9992" for this suite. 06/12/23 22:21:53.542 +STEP: Destroying namespace "webhook-3396" for this suite. 07/27/23 02:53:52.116 +STEP: Destroying namespace "webhook-3396-markers" for this suite. 07/27/23 02:53:52.14 ------------------------------ -• [0.218 seconds] -[sig-instrumentation] Events API -test/e2e/instrumentation/common/framework.go:23 - should delete a collection of events [Conformance] - test/e2e/instrumentation/events.go:207 +• [4.157 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-instrumentation] Events API + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:21:53.353 - Jun 12 22:21:53.353: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename events 06/12/23 22:21:53.355 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:21:53.407 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:21:53.419 - [BeforeEach] [sig-instrumentation] Events API + STEP: Creating a kubernetes client 07/27/23 02:53:48.002 + Jul 27 02:53:48.002: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 02:53:48.003 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:48.047 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:48.057 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-instrumentation] Events API - test/e2e/instrumentation/events.go:84 - [It] should delete a collection of events [Conformance] - test/e2e/instrumentation/events.go:207 - STEP: Create set of events 06/12/23 22:21:53.441 - STEP: get a list of Events with a label in the current namespace 06/12/23 22:21:53.478 - STEP: delete a list of events 06/12/23 22:21:53.487 - Jun 12 22:21:53.487: INFO: requesting DeleteCollection of events - STEP: check that the list of events matches the requested quantity 06/12/23 22:21:53.524 - [AfterEach] [sig-instrumentation] Events API + [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 02:53:48.157 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:53:48.52 + STEP: Deploying the webhook pod 07/27/23 02:53:48.559 + STEP: Wait for the deployment to be ready 07/27/23 02:53:48.592 + Jul 27 02:53:48.619: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 02:53:50.652 + STEP: Verifying the service has paired with the endpoint 07/27/23 02:53:50.683 + Jul 27 02:53:51.684: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 + STEP: Creating a validating webhook configuration 07/27/23 02:53:51.696 + STEP: Creating a configMap that does not comply to the validation webhook rules 07/27/23 02:53:51.75 + STEP: Updating a validating webhook configuration's rules to not include the create operation 07/27/23 02:53:51.802 + STEP: Creating a configMap that does not comply to the validation webhook rules 07/27/23 02:53:51.832 + STEP: Patching a validating webhook configuration's rules to include the create operation 07/27/23 02:53:51.862 + STEP: Creating a configMap that does not comply to the validation webhook rules 07/27/23 02:53:51.883 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:21:53.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-instrumentation] Events API + Jul 27 02:53:51.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-instrumentation] Events API + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-instrumentation] Events API + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "events-9992" for this suite. 06/12/23 22:21:53.542 + STEP: Destroying namespace "webhook-3396" for this suite. 07/27/23 02:53:52.116 + STEP: Destroying namespace "webhook-3396-markers" for this suite. 
07/27/23 02:53:52.14 << End Captured GinkgoWriter Output ------------------------------ -SS +S ------------------------------ -[sig-apps] ReplicaSet - Replace and Patch tests [Conformance] - test/e2e/apps/replica_set.go:154 -[BeforeEach] [sig-apps] ReplicaSet +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 +[BeforeEach] [sig-node] PreStop set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:21:53.572 -Jun 12 22:21:53.573: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replicaset 06/12/23 22:21:53.578 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:21:53.641 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:21:53.651 -[BeforeEach] [sig-apps] ReplicaSet +STEP: Creating a kubernetes client 07/27/23 02:53:52.161 +Jul 27 02:53:52.161: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename prestop 07/27/23 02:53:52.162 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:52.204 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:52.216 +[BeforeEach] [sig-node] PreStop test/e2e/framework/metrics/init/init.go:31 -[It] Replace and Patch tests [Conformance] - test/e2e/apps/replica_set.go:154 -Jun 12 22:21:53.698: INFO: Pod name sample-pod: Found 0 pods out of 1 -Jun 12 22:21:58.707: INFO: Pod name sample-pod: Found 1 pods out of 1 -STEP: ensuring each pod is running 06/12/23 22:21:58.707 -STEP: Scaling up "test-rs" replicaset 06/12/23 22:21:58.707 -Jun 12 22:21:58.723: INFO: Updating replica set "test-rs" -STEP: patching the ReplicaSet 06/12/23 22:21:58.723 -W0612 22:21:58.735607 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" -Jun 12 22:21:58.740: INFO: observed ReplicaSet test-rs in namespace replicaset-9279 with ReadyReplicas 1, AvailableReplicas 1 -Jun 12 22:21:58.773: INFO: observed ReplicaSet test-rs in namespace replicaset-9279 with ReadyReplicas 1, AvailableReplicas 1 -Jun 12 22:21:58.805: INFO: observed ReplicaSet test-rs in namespace replicaset-9279 with ReadyReplicas 1, AvailableReplicas 1 -Jun 12 22:21:58.820: INFO: observed ReplicaSet test-rs in namespace replicaset-9279 with ReadyReplicas 1, AvailableReplicas 1 -Jun 12 22:22:02.004: INFO: observed ReplicaSet test-rs in namespace replicaset-9279 with ReadyReplicas 2, AvailableReplicas 2 -Jun 12 22:22:02.508: INFO: observed Replicaset test-rs in namespace replicaset-9279 with ReadyReplicas 3 found true -[AfterEach] [sig-apps] ReplicaSet +[BeforeEach] [sig-node] PreStop + test/e2e/node/pre_stop.go:159 +[It] should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 +STEP: Creating server pod server in namespace prestop-112 07/27/23 02:53:52.227 +STEP: Waiting for pods to come up. 07/27/23 02:53:53.259 +Jul 27 02:53:53.259: INFO: Waiting up to 5m0s for pod "server" in namespace "prestop-112" to be "running" +Jul 27 02:53:53.271: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 11.734688ms +Jul 27 02:53:55.283: INFO: Pod "server": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.023982631s +Jul 27 02:53:55.283: INFO: Pod "server" satisfied condition "running" +STEP: Creating tester pod tester in namespace prestop-112 07/27/23 02:53:55.295 +Jul 27 02:53:55.314: INFO: Waiting up to 5m0s for pod "tester" in namespace "prestop-112" to be "running" +Jul 27 02:53:55.326: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 11.713309ms +Jul 27 02:53:57.340: INFO: Pod "tester": Phase="Running", Reason="", readiness=true. Elapsed: 2.025718832s +Jul 27 02:53:57.340: INFO: Pod "tester" satisfied condition "running" +STEP: Deleting pre-stop pod 07/27/23 02:53:57.34 +Jul 27 02:54:02.388: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod 07/27/23 02:54:02.389 +[AfterEach] [sig-node] PreStop test/e2e/framework/node/init/init.go:32 -Jun 12 22:22:02.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicaSet +Jul 27 02:54:02.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PreStop test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-node] PreStop dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-node] PreStop tear down framework | framework.go:193 -STEP: Destroying namespace "replicaset-9279" for this suite. 06/12/23 22:22:02.52 +STEP: Destroying namespace "prestop-112" for this suite. 
07/27/23 02:54:02.451 ------------------------------ -• [SLOW TEST] [8.984 seconds] -[sig-apps] ReplicaSet -test/e2e/apps/framework.go:23 - Replace and Patch tests [Conformance] - test/e2e/apps/replica_set.go:154 +• [SLOW TEST] [10.351 seconds] +[sig-node] PreStop +test/e2e/node/framework.go:23 + should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicaSet + [BeforeEach] [sig-node] PreStop set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:21:53.572 - Jun 12 22:21:53.573: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replicaset 06/12/23 22:21:53.578 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:21:53.641 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:21:53.651 - [BeforeEach] [sig-apps] ReplicaSet + STEP: Creating a kubernetes client 07/27/23 02:53:52.161 + Jul 27 02:53:52.161: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename prestop 07/27/23 02:53:52.162 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:53:52.204 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:53:52.216 + [BeforeEach] [sig-node] PreStop test/e2e/framework/metrics/init/init.go:31 - [It] Replace and Patch tests [Conformance] - test/e2e/apps/replica_set.go:154 - Jun 12 22:21:53.698: INFO: Pod name sample-pod: Found 0 pods out of 1 - Jun 12 22:21:58.707: INFO: Pod name sample-pod: Found 1 pods out of 1 - STEP: ensuring each pod is running 06/12/23 22:21:58.707 - STEP: Scaling up "test-rs" replicaset 06/12/23 22:21:58.707 - Jun 12 22:21:58.723: INFO: Updating replica set "test-rs" - STEP: patching the ReplicaSet 06/12/23 22:21:58.723 - W0612 22:21:58.735607 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" - Jun 12 22:21:58.740: INFO: observed ReplicaSet test-rs in namespace replicaset-9279 with ReadyReplicas 1, AvailableReplicas 1 - Jun 12 22:21:58.773: INFO: observed ReplicaSet test-rs in namespace replicaset-9279 with ReadyReplicas 1, AvailableReplicas 1 - Jun 12 22:21:58.805: INFO: observed ReplicaSet test-rs in namespace replicaset-9279 with ReadyReplicas 1, AvailableReplicas 1 - Jun 12 22:21:58.820: INFO: observed ReplicaSet test-rs in namespace replicaset-9279 with ReadyReplicas 1, AvailableReplicas 1 - Jun 12 22:22:02.004: INFO: observed ReplicaSet test-rs in namespace replicaset-9279 with ReadyReplicas 2, AvailableReplicas 2 - Jun 12 22:22:02.508: INFO: observed Replicaset test-rs in namespace replicaset-9279 with ReadyReplicas 3 found true - [AfterEach] [sig-apps] ReplicaSet + [BeforeEach] [sig-node] PreStop + test/e2e/node/pre_stop.go:159 + [It] should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 + STEP: Creating server pod server in namespace prestop-112 07/27/23 02:53:52.227 + STEP: Waiting for pods to come up. 07/27/23 02:53:53.259 + Jul 27 02:53:53.259: INFO: Waiting up to 5m0s for pod "server" in namespace "prestop-112" to be "running" + Jul 27 02:53:53.271: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 11.734688ms + Jul 27 02:53:55.283: INFO: Pod "server": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.023982631s + Jul 27 02:53:55.283: INFO: Pod "server" satisfied condition "running" + STEP: Creating tester pod tester in namespace prestop-112 07/27/23 02:53:55.295 + Jul 27 02:53:55.314: INFO: Waiting up to 5m0s for pod "tester" in namespace "prestop-112" to be "running" + Jul 27 02:53:55.326: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 11.713309ms + Jul 27 02:53:57.340: INFO: Pod "tester": Phase="Running", Reason="", readiness=true. Elapsed: 2.025718832s + Jul 27 02:53:57.340: INFO: Pod "tester" satisfied condition "running" + STEP: Deleting pre-stop pod 07/27/23 02:53:57.34 + Jul 27 02:54:02.388: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true + } + STEP: Deleting the server pod 07/27/23 02:54:02.389 + [AfterEach] [sig-node] PreStop test/e2e/framework/node/init/init.go:32 - Jun 12 22:22:02.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicaSet + Jul 27 02:54:02.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PreStop test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-node] PreStop dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-node] PreStop tear down framework | framework.go:193 - STEP: Destroying namespace "replicaset-9279" for this suite. 06/12/23 22:22:02.52 + STEP: Destroying namespace "prestop-112" for this suite. 
07/27/23 02:54:02.451 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Container Runtime blackbox test on terminated container - should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:248 -[BeforeEach] [sig-node] Container Runtime +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 +[BeforeEach] [sig-node] Probing container set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:22:02.564 -Jun 12 22:22:02.565: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-runtime 06/12/23 22:22:02.567 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:02.626 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:02.638 -[BeforeEach] [sig-node] Container Runtime +STEP: Creating a kubernetes client 07/27/23 02:54:02.513 +Jul 27 02:54:02.513: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-probe 07/27/23 02:54:02.514 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:54:02.596 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:54:02.611 +[BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 -[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:248 -STEP: create the container 06/12/23 22:22:02.649 -STEP: wait for the container to reach Succeeded 06/12/23 22:22:02.684 -STEP: get the container status 06/12/23 22:22:07.766 -STEP: the container should be terminated 06/12/23 22:22:07.805 -STEP: the termination message should be set 06/12/23 22:22:07.806 -Jun 12 22:22:07.806: INFO: Expected: &{OK} to match Container's Termination Message: OK -- -STEP: delete the container 06/12/23 22:22:07.806 -[AfterEach] [sig-node] Container Runtime +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 +STEP: Creating pod busybox-994d4740-f859-4978-a274-afbb79ad3ae1 in namespace container-probe-3531 07/27/23 02:54:02.625 +Jul 27 02:54:02.662: INFO: Waiting up to 5m0s for pod "busybox-994d4740-f859-4978-a274-afbb79ad3ae1" in namespace "container-probe-3531" to be "not pending" +Jul 27 02:54:02.687: INFO: Pod "busybox-994d4740-f859-4978-a274-afbb79ad3ae1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.728433ms +Jul 27 02:54:04.699: INFO: Pod "busybox-994d4740-f859-4978-a274-afbb79ad3ae1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.036864165s +Jul 27 02:54:04.699: INFO: Pod "busybox-994d4740-f859-4978-a274-afbb79ad3ae1" satisfied condition "not pending" +Jul 27 02:54:04.699: INFO: Started pod busybox-994d4740-f859-4978-a274-afbb79ad3ae1 in namespace container-probe-3531 +STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 02:54:04.699 +Jul 27 02:54:04.712: INFO: Initial restart count of pod busybox-994d4740-f859-4978-a274-afbb79ad3ae1 is 0 +STEP: deleting the pod 07/27/23 02:58:06.594 +[AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 -Jun 12 22:22:07.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Container Runtime +Jul 27 02:58:06.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Container Runtime +[DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Container Runtime +[DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 -STEP: Destroying namespace "container-runtime-2030" for this suite. 06/12/23 22:22:07.952 +STEP: Destroying namespace "container-probe-3531" for this suite. 07/27/23 02:58:06.65 ------------------------------ -• [SLOW TEST] [5.429 seconds] -[sig-node] Container Runtime +• [SLOW TEST] [244.156 seconds] +[sig-node] Probing container test/e2e/common/node/framework.go:23 - blackbox test - test/e2e/common/node/runtime.go:44 - on terminated container - test/e2e/common/node/runtime.go:137 - should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:248 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Container Runtime + [BeforeEach] [sig-node] Probing container set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:22:02.564 - Jun 12 22:22:02.565: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-runtime 06/12/23 22:22:02.567 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:02.626 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:02.638 - [BeforeEach] [sig-node] Container Runtime + STEP: Creating a kubernetes client 07/27/23 02:54:02.513 + Jul 27 02:54:02.513: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-probe 07/27/23 02:54:02.514 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:54:02.596 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:54:02.611 + [BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 - [It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] - test/e2e/common/node/runtime.go:248 - STEP: create the container 06/12/23 22:22:02.649 - STEP: wait for the container to reach Succeeded 06/12/23 22:22:02.684 - STEP: get the container status 06/12/23 22:22:07.766 - STEP: the container should be terminated 06/12/23 
22:22:07.805 - STEP: the termination message should be set 06/12/23 22:22:07.806 - Jun 12 22:22:07.806: INFO: Expected: &{OK} to match Container's Termination Message: OK -- - STEP: delete the container 06/12/23 22:22:07.806 - [AfterEach] [sig-node] Container Runtime + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 + STEP: Creating pod busybox-994d4740-f859-4978-a274-afbb79ad3ae1 in namespace container-probe-3531 07/27/23 02:54:02.625 + Jul 27 02:54:02.662: INFO: Waiting up to 5m0s for pod "busybox-994d4740-f859-4978-a274-afbb79ad3ae1" in namespace "container-probe-3531" to be "not pending" + Jul 27 02:54:02.687: INFO: Pod "busybox-994d4740-f859-4978-a274-afbb79ad3ae1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.728433ms + Jul 27 02:54:04.699: INFO: Pod "busybox-994d4740-f859-4978-a274-afbb79ad3ae1": Phase="Running", Reason="", readiness=true. Elapsed: 2.036864165s + Jul 27 02:54:04.699: INFO: Pod "busybox-994d4740-f859-4978-a274-afbb79ad3ae1" satisfied condition "not pending" + Jul 27 02:54:04.699: INFO: Started pod busybox-994d4740-f859-4978-a274-afbb79ad3ae1 in namespace container-probe-3531 + STEP: checking the pod's current state and verifying that restartCount is present 07/27/23 02:54:04.699 + Jul 27 02:54:04.712: INFO: Initial restart count of pod busybox-994d4740-f859-4978-a274-afbb79ad3ae1 is 0 + STEP: deleting the pod 07/27/23 02:58:06.594 + [AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 - Jun 12 22:22:07.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Container Runtime + Jul 27 02:58:06.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Container Runtime + [DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Container Runtime + [DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 - STEP: Destroying namespace "container-runtime-2030" for this suite. 06/12/23 22:22:07.952 + STEP: Destroying namespace "container-probe-3531" for this suite. 
07/27/23 02:58:06.65 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Lease - lease API should be available [Conformance] - test/e2e/common/node/lease.go:72 -[BeforeEach] [sig-node] Lease +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 +[BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:22:08.019 -Jun 12 22:22:08.019: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename lease-test 06/12/23 22:22:08.03 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:08.131 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:08.182 -[BeforeEach] [sig-node] Lease +STEP: Creating a kubernetes client 07/27/23 02:58:06.67 +Jul 27 02:58:06.670: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename downward-api 07/27/23 02:58:06.671 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:06.719 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:06.748 +[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 -[It] lease API should be available [Conformance] - test/e2e/common/node/lease.go:72 -[AfterEach] [sig-node] Lease +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 +STEP: Creating the pod 07/27/23 02:58:06.763 +Jul 27 02:58:06.808: INFO: Waiting up to 5m0s for pod "labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3" in namespace "downward-api-4755" to be "running and ready" +Jul 27 02:58:06.837: INFO: Pod "labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3": Phase="Pending", Reason="", readiness=false. Elapsed: 28.941196ms +Jul 27 02:58:06.837: INFO: The phase of Pod labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 02:58:08.850: INFO: Pod "labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3": Phase="Running", Reason="", readiness=true. Elapsed: 2.041887607s +Jul 27 02:58:08.850: INFO: The phase of Pod labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3 is Running (Ready = true) +Jul 27 02:58:08.850: INFO: Pod "labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3" satisfied condition "running and ready" +Jul 27 02:58:09.463: INFO: Successfully updated pod "labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3" +[AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 -Jun 12 22:22:08.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Lease +Jul 27 02:58:11.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Lease +[DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Lease +[DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 -STEP: Destroying namespace "lease-test-1067" for this suite. 
06/12/23 22:22:08.662 +STEP: Destroying namespace "downward-api-4755" for this suite. 07/27/23 02:58:11.557 ------------------------------ -• [0.671 seconds] -[sig-node] Lease -test/e2e/common/node/framework.go:23 - lease API should be available [Conformance] - test/e2e/common/node/lease.go:72 +• [4.943 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Lease + [BeforeEach] [sig-storage] Downward API volume set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:22:08.019 - Jun 12 22:22:08.019: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename lease-test 06/12/23 22:22:08.03 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:08.131 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:08.182 - [BeforeEach] [sig-node] Lease + STEP: Creating a kubernetes client 07/27/23 02:58:06.67 + Jul 27 02:58:06.670: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename downward-api 07/27/23 02:58:06.671 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:06.719 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:06.748 + [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:31 - [It] lease API should be available [Conformance] - test/e2e/common/node/lease.go:72 - [AfterEach] [sig-node] Lease + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 + STEP: Creating the pod 07/27/23 02:58:06.763 + Jul 27 02:58:06.808: INFO: Waiting up to 5m0s for pod "labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3" in namespace "downward-api-4755" to be "running and ready" + Jul 27 02:58:06.837: INFO: Pod "labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3": Phase="Pending", Reason="", readiness=false. Elapsed: 28.941196ms + Jul 27 02:58:06.837: INFO: The phase of Pod labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 02:58:08.850: INFO: Pod "labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.041887607s + Jul 27 02:58:08.850: INFO: The phase of Pod labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3 is Running (Ready = true) + Jul 27 02:58:08.850: INFO: Pod "labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3" satisfied condition "running and ready" + Jul 27 02:58:09.463: INFO: Successfully updated pod "labelsupdateb558e547-c882-4113-8c3f-9a45c744a8c3" + [AfterEach] [sig-storage] Downward API volume test/e2e/framework/node/init/init.go:32 - Jun 12 22:22:08.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Lease + Jul 27 02:58:11.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Lease + [DeferCleanup (Each)] [sig-storage] Downward API volume dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Lease + [DeferCleanup (Each)] [sig-storage] Downward API volume tear down framework | framework.go:193 - STEP: Destroying namespace "lease-test-1067" for this suite. 06/12/23 22:22:08.662 + STEP: Destroying namespace "downward-api-4755" for this suite. 07/27/23 02:58:11.557 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSS ------------------------------ -[sig-node] ConfigMap - should be consumable via environment variable [NodeConformance] [Conformance] - test/e2e/common/node/configmap.go:45 -[BeforeEach] [sig-node] ConfigMap +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 +[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:22:08.71 -Jun 12 22:22:08.711: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 22:22:08.718 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:09.029 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:09.119 -[BeforeEach] [sig-node] ConfigMap +STEP: Creating a kubernetes client 07/27/23 02:58:11.613 +Jul 27 02:58:11.613: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir 07/27/23 02:58:11.614 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:11.789 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:11.824 +[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable via environment variable [NodeConformance] [Conformance] - test/e2e/common/node/configmap.go:45 -STEP: Creating configMap configmap-9962/configmap-test-b9c5fdec-6994-4e5d-96e8-12ed545a9a0a 06/12/23 22:22:09.261 -STEP: Creating a pod to test consume configMaps 06/12/23 22:22:09.34 -Jun 12 22:22:09.384: INFO: Waiting up to 5m0s for pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f" in namespace "configmap-9962" to be "Succeeded or Failed" -Jun 12 22:22:09.394: INFO: Pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.83872ms -Jun 12 22:22:11.443: INFO: Pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.059161667s -Jun 12 22:22:13.535: INFO: Pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150738426s -Jun 12 22:22:15.403: INFO: Pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019189432s -STEP: Saw pod success 06/12/23 22:22:15.403 -Jun 12 22:22:15.404: INFO: Pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f" satisfied condition "Succeeded or Failed" -Jun 12 22:22:15.411: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f container env-test: -STEP: delete the pod 06/12/23 22:22:15.435 -Jun 12 22:22:15.503: INFO: Waiting for pod pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f to disappear -Jun 12 22:22:15.514: INFO: Pod pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f no longer exists -[AfterEach] [sig-node] ConfigMap +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 +STEP: Creating a pod to test emptydir 0644 on node default medium 07/27/23 02:58:11.848 +Jul 27 02:58:11.885: INFO: Waiting up to 5m0s for pod "pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb" in namespace "emptydir-7118" to be "Succeeded or Failed" +Jul 27 02:58:11.901: INFO: Pod "pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.467642ms +Jul 27 02:58:13.911: INFO: Pod "pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025772199s +Jul 27 02:58:15.915: INFO: Pod "pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029576243s +STEP: Saw pod success 07/27/23 02:58:15.915 +Jul 27 02:58:15.915: INFO: Pod "pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb" satisfied condition "Succeeded or Failed" +Jul 27 02:58:15.926: INFO: Trying to get logs from node 10.245.128.19 pod pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb container test-container: +STEP: delete the pod 07/27/23 02:58:15.951 +Jul 27 02:58:15.986: INFO: Waiting for pod pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb to disappear +Jul 27 02:58:15.996: INFO: Pod pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb no longer exists +[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 -Jun 12 22:22:15.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] ConfigMap +Jul 27 02:58:15.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] ConfigMap +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] ConfigMap +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-9962" for this suite. 06/12/23 22:22:15.53 +STEP: Destroying namespace "emptydir-7118" for this suite. 
07/27/23 02:58:16.013 ------------------------------ -• [SLOW TEST] [6.843 seconds] -[sig-node] ConfigMap -test/e2e/common/node/framework.go:23 - should be consumable via environment variable [NodeConformance] [Conformance] - test/e2e/common/node/configmap.go:45 +• [4.420 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] ConfigMap + [BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:22:08.71 - Jun 12 22:22:08.711: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 22:22:08.718 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:09.029 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:09.119 - [BeforeEach] [sig-node] ConfigMap + STEP: Creating a kubernetes client 07/27/23 02:58:11.613 + Jul 27 02:58:11.613: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir 07/27/23 02:58:11.614 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:11.789 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:11.824 + [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable via environment variable [NodeConformance] [Conformance] - test/e2e/common/node/configmap.go:45 - STEP: Creating configMap configmap-9962/configmap-test-b9c5fdec-6994-4e5d-96e8-12ed545a9a0a 06/12/23 22:22:09.261 - STEP: Creating a pod to test consume configMaps 06/12/23 22:22:09.34 - Jun 12 22:22:09.384: INFO: Waiting up to 5m0s for pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f" in namespace "configmap-9962" to be "Succeeded or Failed" - Jun 12 22:22:09.394: INFO: Pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.83872ms - Jun 12 22:22:11.443: INFO: Pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059161667s - Jun 12 22:22:13.535: INFO: Pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150738426s - Jun 12 22:22:15.403: INFO: Pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019189432s - STEP: Saw pod success 06/12/23 22:22:15.403 - Jun 12 22:22:15.404: INFO: Pod "pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f" satisfied condition "Succeeded or Failed" - Jun 12 22:22:15.411: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f container env-test: - STEP: delete the pod 06/12/23 22:22:15.435 - Jun 12 22:22:15.503: INFO: Waiting for pod pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f to disappear - Jun 12 22:22:15.514: INFO: Pod pod-configmaps-eff60c74-d947-4bb1-a5a4-d6e2fc65120f no longer exists - [AfterEach] [sig-node] ConfigMap + [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 + STEP: Creating a pod to test emptydir 0644 on node default medium 07/27/23 02:58:11.848 + Jul 27 02:58:11.885: INFO: Waiting up to 5m0s for pod "pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb" in namespace "emptydir-7118" to be "Succeeded or Failed" + Jul 27 02:58:11.901: INFO: Pod "pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.467642ms + Jul 27 02:58:13.911: INFO: Pod "pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025772199s + Jul 27 02:58:15.915: INFO: Pod "pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029576243s + STEP: Saw pod success 07/27/23 02:58:15.915 + Jul 27 02:58:15.915: INFO: Pod "pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb" satisfied condition "Succeeded or Failed" + Jul 27 02:58:15.926: INFO: Trying to get logs from node 10.245.128.19 pod pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb container test-container: + STEP: delete the pod 07/27/23 02:58:15.951 + Jul 27 02:58:15.986: INFO: Waiting for pod pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb to disappear + Jul 27 02:58:15.996: INFO: Pod pod-801ae424-5e1c-407e-89a0-fa0b83ed5dbb no longer exists + [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 - Jun 12 22:22:15.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] ConfigMap + Jul 27 02:58:15.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] ConfigMap + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] ConfigMap + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-9962" for this suite. 06/12/23 22:22:15.53 + STEP: Destroying namespace "emptydir-7118" for this suite. 07/27/23 02:58:16.013 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSS +SSSSSSSS ------------------------------ -[sig-scheduling] LimitRange - should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] - test/e2e/scheduling/limit_range.go:61 -[BeforeEach] [sig-scheduling] LimitRange +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:22:15.568 -Jun 12 22:22:15.569: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename limitrange 06/12/23 22:22:15.572 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:15.644 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:15.657 -[BeforeEach] [sig-scheduling] LimitRange +STEP: Creating a kubernetes client 07/27/23 02:58:16.033 +Jul 27 02:58:16.033: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename certificates 07/27/23 02:58:16.034 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:16.077 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:16.088 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] - test/e2e/scheduling/limit_range.go:61 -STEP: Creating a LimitRange 06/12/23 22:22:15.695 -STEP: Setting up watch 06/12/23 22:22:15.696 -STEP: Submitting a LimitRange 06/12/23 22:22:15.813 -STEP: Verifying LimitRange creation was observed 06/12/23 22:22:15.829 -STEP: Fetching the LimitRange to ensure it has proper values 06/12/23 22:22:15.83 -Jun 12 22:22:15.840: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] -Jun 12 22:22:15.840: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] -STEP: Creating a Pod with no resource requirements 06/12/23 22:22:15.84 -STEP: Ensuring Pod has resource requirements applied from LimitRange 06/12/23 22:22:15.851 -Jun 12 22:22:15.860: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] -Jun 12 22:22:15.860: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] -STEP: Creating a Pod with partial resource requirements 06/12/23 22:22:15.861 -STEP: Ensuring Pod has merged resource requirements applied from LimitRange 06/12/23 22:22:15.891 -Jun 12 22:22:15.897: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 
150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] -Jun 12 22:22:15.897: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] -STEP: Failing to create a Pod with less than min resources 06/12/23 22:22:15.897 -STEP: Failing to create a Pod with more than max resources 06/12/23 22:22:15.903 -STEP: Updating a LimitRange 06/12/23 22:22:15.91 -STEP: Verifying LimitRange updating is effective 06/12/23 22:22:15.924 -STEP: Creating a Pod with less than former min resources 06/12/23 22:22:17.935 -STEP: Failing to create a Pod with more than max resources 06/12/23 22:22:17.948 -STEP: Deleting a LimitRange 06/12/23 22:22:17.954 -STEP: Verifying the LimitRange was deleted 06/12/23 22:22:17.971 -Jun 12 22:22:22.983: INFO: limitRange is already deleted -STEP: Creating a Pod with more than former max resources 06/12/23 22:22:22.983 -[AfterEach] [sig-scheduling] LimitRange +[It] should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 +STEP: getting /apis 07/27/23 02:58:16.582 +STEP: getting /apis/certificates.k8s.io 07/27/23 02:58:16.594 +STEP: getting /apis/certificates.k8s.io/v1 07/27/23 02:58:16.598 +STEP: creating 07/27/23 02:58:16.603 +STEP: getting 07/27/23 02:58:16.652 +STEP: listing 07/27/23 02:58:16.663 +STEP: watching 07/27/23 02:58:16.674 +Jul 27 02:58:16.674: INFO: starting watch +STEP: patching 07/27/23 02:58:16.679 +STEP: updating 07/27/23 02:58:16.697 +Jul 27 02:58:16.714: INFO: waiting for watch events with expected annotations +Jul 27 02:58:16.714: INFO: saw patched and updated annotations +STEP: getting /approval 07/27/23 02:58:16.714 +STEP: patching /approval 07/27/23 02:58:16.725 +STEP: updating /approval 07/27/23 02:58:16.743 +STEP: getting /status 07/27/23 02:58:16.762 +STEP: patching /status 07/27/23 02:58:16.775 +STEP: updating /status 07/27/23 02:58:16.792 +STEP: deleting 07/27/23 02:58:16.81 +STEP: deleting a collection 07/27/23 02:58:16.858 +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:22:23.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-scheduling] LimitRange +Jul 27 02:58:16.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-scheduling] LimitRange +[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-scheduling] LimitRange +[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "limitrange-410" for this suite. 06/12/23 22:22:23.017 +STEP: Destroying namespace "certificates-9632" for this suite. 07/27/23 02:58:16.936 ------------------------------ -• [SLOW TEST] [7.469 seconds] -[sig-scheduling] LimitRange -test/e2e/scheduling/framework.go:40 - should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] - test/e2e/scheduling/limit_range.go:61 +• [0.934 seconds] +[sig-auth] Certificates API [Privileged:ClusterAdmin] +test/e2e/auth/framework.go:23 + should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-scheduling] LimitRange + [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:22:15.568 - Jun 12 22:22:15.569: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename limitrange 06/12/23 22:22:15.572 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:15.644 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:15.657 - [BeforeEach] [sig-scheduling] LimitRange + STEP: Creating a kubernetes client 07/27/23 02:58:16.033 + Jul 27 02:58:16.033: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename certificates 07/27/23 02:58:16.034 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:16.077 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:16.088 + [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] - test/e2e/scheduling/limit_range.go:61 - STEP: Creating a LimitRange 06/12/23 22:22:15.695 - STEP: Setting up watch 06/12/23 22:22:15.696 - STEP: Submitting a LimitRange 06/12/23 22:22:15.813 - STEP: Verifying LimitRange creation was observed 06/12/23 22:22:15.829 - STEP: Fetching the LimitRange to ensure it has proper values 06/12/23 22:22:15.83 - Jun 12 22:22:15.840: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] - Jun 12 22:22:15.840: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] - STEP: Creating a Pod with no resource requirements 06/12/23 22:22:15.84 - STEP: Ensuring Pod has resource requirements applied from LimitRange 06/12/23 22:22:15.851 - Jun 12 22:22:15.860: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] - Jun 12 22:22:15.860: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] - STEP: Creating a Pod with partial resource requirements 06/12/23 22:22:15.861 - STEP: Ensuring Pod has merged resource requirements applied from LimitRange 06/12/23 22:22:15.891 - Jun 12 22:22:15.897: INFO: Verifying requests: expected 
map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] - Jun 12 22:22:15.897: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] - STEP: Failing to create a Pod with less than min resources 06/12/23 22:22:15.897 - STEP: Failing to create a Pod with more than max resources 06/12/23 22:22:15.903 - STEP: Updating a LimitRange 06/12/23 22:22:15.91 - STEP: Verifying LimitRange updating is effective 06/12/23 22:22:15.924 - STEP: Creating a Pod with less than former min resources 06/12/23 22:22:17.935 - STEP: Failing to create a Pod with more than max resources 06/12/23 22:22:17.948 - STEP: Deleting a LimitRange 06/12/23 22:22:17.954 - STEP: Verifying the LimitRange was deleted 06/12/23 22:22:17.971 - Jun 12 22:22:22.983: INFO: limitRange is already deleted - STEP: Creating a Pod with more than former max resources 06/12/23 22:22:22.983 - [AfterEach] [sig-scheduling] LimitRange + [It] should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 + STEP: getting /apis 07/27/23 02:58:16.582 + STEP: getting /apis/certificates.k8s.io 07/27/23 02:58:16.594 + STEP: getting /apis/certificates.k8s.io/v1 07/27/23 02:58:16.598 + STEP: creating 07/27/23 02:58:16.603 + STEP: getting 07/27/23 02:58:16.652 + STEP: listing 07/27/23 02:58:16.663 + STEP: watching 07/27/23 02:58:16.674 + Jul 27 02:58:16.674: INFO: starting watch + STEP: patching 07/27/23 02:58:16.679 + STEP: updating 07/27/23 02:58:16.697 + Jul 27 02:58:16.714: INFO: waiting for watch events with expected annotations + Jul 27 02:58:16.714: INFO: saw patched and updated annotations + STEP: getting /approval 07/27/23 02:58:16.714 + STEP: patching /approval 07/27/23 02:58:16.725 + STEP: updating /approval 07/27/23 02:58:16.743 + STEP: getting /status 07/27/23 02:58:16.762 + STEP: patching /status 07/27/23 02:58:16.775 + STEP: updating /status 07/27/23 02:58:16.792 + STEP: deleting 07/27/23 02:58:16.81 + STEP: deleting a collection 07/27/23 02:58:16.858 + [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:22:23.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-scheduling] LimitRange + Jul 27 02:58:16.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-scheduling] LimitRange + [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-scheduling] LimitRange + [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "limitrange-410" for this suite. 06/12/23 22:22:23.017 + STEP: Destroying namespace "certificates-9632" for this suite. 
07/27/23 02:58:16.936 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - should be able to deny pod and configmap creation [Conformance] - test/e2e/apimachinery/webhook.go:197 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:100 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:22:23.046 -Jun 12 22:22:23.046: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename webhook 06/12/23 22:22:23.048 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:23.154 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:23.174 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 02:58:16.969 +Jul 27 02:58:16.969: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 02:58:16.97 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:17.017 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:17.027 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 -STEP: Setting up server cert 06/12/23 22:22:23.242 -STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 22:22:24.638 -STEP: Deploying the webhook pod 06/12/23 22:22:24.667 -STEP: Wait for the deployment to be ready 06/12/23 22:22:24.692 -Jun 12 22:22:24.717: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created -Jun 12 22:22:26.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} -STEP: Deploying the webhook service 06/12/23 22:22:28.745 -STEP: Verifying the service has paired with the endpoint 06/12/23 22:22:28.777 -Jun 12 22:22:29.779: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 -[It] should be able to deny pod and configmap creation [Conformance] - test/e2e/apimachinery/webhook.go:197 -STEP: Registering the webhook via the AdmissionRegistration API 06/12/23 22:22:29.813 -STEP: create a pod that should be denied by the webhook 06/12/23 22:22:29.926 -STEP: create a 
pod that causes the webhook to hang 06/12/23 22:22:29.972 -STEP: create a configmap that should be denied by the webhook 06/12/23 22:22:39.985 -STEP: create a configmap that should be admitted by the webhook 06/12/23 22:22:40.025 -STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook 06/12/23 22:22:40.053 -STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 06/12/23 22:22:40.075 -STEP: create a namespace that bypass the webhook 06/12/23 22:22:40.09 -STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 06/12/23 22:22:40.113 -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[It] should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:100 +STEP: Counting existing ResourceQuota 07/27/23 02:58:17.038 +STEP: Creating a ResourceQuota 07/27/23 02:58:22.055 +STEP: Ensuring resource quota status is calculated 07/27/23 02:58:22.071 +STEP: Creating a Service 07/27/23 02:58:24.084 +STEP: Creating a NodePort Service 07/27/23 02:58:24.148 +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota 07/27/23 02:58:24.206 +STEP: Ensuring resource quota status captures service creation 07/27/23 02:58:24.271 +STEP: Deleting Services 07/27/23 02:58:26.284 +STEP: Ensuring resource quota status released usage 07/27/23 02:58:26.377 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 22:22:40.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +Jul 27 02:58:28.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "webhook-8030" for this suite. 06/12/23 22:22:40.344 -STEP: Destroying namespace "webhook-8030-markers" for this suite. 06/12/23 22:22:40.374 +STEP: Destroying namespace "resourcequota-534" for this suite. 07/27/23 02:58:28.414 ------------------------------ -• [SLOW TEST] [17.351 seconds] -[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +• [SLOW TEST] [11.466 seconds] +[sig-api-machinery] ResourceQuota test/e2e/apimachinery/framework.go:23 - should be able to deny pod and configmap creation [Conformance] - test/e2e/apimachinery/webhook.go:197 + should create a ResourceQuota and capture the life of a service. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:100 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:22:23.046 - Jun 12 22:22:23.046: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename webhook 06/12/23 22:22:23.048 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:23.154 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:23.174 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 02:58:16.969 + Jul 27 02:58:16.969: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 02:58:16.97 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:17.017 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:17.027 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:90 - STEP: Setting up server cert 06/12/23 22:22:23.242 - STEP: Create role binding to let webhook read extension-apiserver-authentication 06/12/23 22:22:24.638 - STEP: Deploying the webhook pod 06/12/23 22:22:24.667 - STEP: Wait for the deployment to be ready 06/12/23 22:22:24.692 - Jun 12 22:22:24.717: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created - Jun 12 22:22:26.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Deploying the webhook service 06/12/23 22:22:28.745 - STEP: Verifying the service has paired with the endpoint 06/12/23 22:22:28.777 - Jun 12 22:22:29.779: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 - [It] should be able to deny pod and configmap creation [Conformance] - test/e2e/apimachinery/webhook.go:197 - STEP: Registering the webhook via the AdmissionRegistration API 06/12/23 22:22:29.813 - STEP: create a pod that should be denied by the webhook 06/12/23 22:22:29.926 - STEP: create a pod that causes the webhook to hang 06/12/23 22:22:29.972 - STEP: create a configmap that should be denied by the webhook 06/12/23 22:22:39.985 - STEP: create a configmap that should be admitted by the webhook 06/12/23 22:22:40.025 - STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook 06/12/23 22:22:40.053 - STEP: update (PATCH) the admitted configmap to a 
non-compliant one should be rejected by the webhook 06/12/23 22:22:40.075 - STEP: create a namespace that bypass the webhook 06/12/23 22:22:40.09 - STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 06/12/23 22:22:40.113 - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [It] should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:100 + STEP: Counting existing ResourceQuota 07/27/23 02:58:17.038 + STEP: Creating a ResourceQuota 07/27/23 02:58:22.055 + STEP: Ensuring resource quota status is calculated 07/27/23 02:58:22.071 + STEP: Creating a Service 07/27/23 02:58:24.084 + STEP: Creating a NodePort Service 07/27/23 02:58:24.148 + STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota 07/27/23 02:58:24.206 + STEP: Ensuring resource quota status captures service creation 07/27/23 02:58:24.271 + STEP: Deleting Services 07/27/23 02:58:26.284 + STEP: Ensuring resource quota status released usage 07/27/23 02:58:26.377 + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 22:22:40.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] - test/e2e/apimachinery/webhook.go:105 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + Jul 27 02:58:28.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "webhook-8030" for this suite. 06/12/23 22:22:40.344 - STEP: Destroying namespace "webhook-8030-markers" for this suite. 06/12/23 22:22:40.374 + STEP: Destroying namespace "resourcequota-534" for this suite. 
07/27/23 02:58:28.414 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSS +SSSSS ------------------------------ -[sig-node] Containers - should use the image defaults if command and args are blank [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:39 -[BeforeEach] [sig-node] Containers +[sig-api-machinery] ResourceQuota + should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 +[BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:22:40.4 -Jun 12 22:22:40.401: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename containers 06/12/23 22:22:40.404 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:40.476 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:40.486 -[BeforeEach] [sig-node] Containers +STEP: Creating a kubernetes client 07/27/23 02:58:28.435 +Jul 27 02:58:28.435: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename resourcequota 07/27/23 02:58:28.436 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:28.491 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:28.5 +[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 -[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:39 -Jun 12 22:22:40.526: INFO: Waiting up to 5m0s for pod "client-containers-c9085b9c-792a-4edb-9512-19eabc8681e5" in namespace "containers-4045" to be "running" -Jun 12 22:22:40.532: INFO: Pod "client-containers-c9085b9c-792a-4edb-9512-19eabc8681e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.243932ms -Jun 12 22:22:42.551: INFO: Pod "client-containers-c9085b9c-792a-4edb-9512-19eabc8681e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025341022s -Jun 12 22:22:44.541: INFO: Pod "client-containers-c9085b9c-792a-4edb-9512-19eabc8681e5": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.014715162s -Jun 12 22:22:44.541: INFO: Pod "client-containers-c9085b9c-792a-4edb-9512-19eabc8681e5" satisfied condition "running" -[AfterEach] [sig-node] Containers +[It] should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 +STEP: Creating a ResourceQuota 07/27/23 02:58:28.539 +STEP: Getting a ResourceQuota 07/27/23 02:58:28.558 +STEP: Listing all ResourceQuotas with LabelSelector 07/27/23 02:58:28.575 +STEP: Patching the ResourceQuota 07/27/23 02:58:28.589 +STEP: Deleting a Collection of ResourceQuotas 07/27/23 02:58:28.638 +STEP: Verifying the deleted ResourceQuota 07/27/23 02:58:28.695 +[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 -Jun 12 22:22:44.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Containers +Jul 27 02:58:28.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Containers +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Containers +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 -STEP: Destroying namespace "containers-4045" for this suite. 06/12/23 22:22:44.57 +STEP: Destroying namespace "resourcequota-9501" for this suite. 07/27/23 02:58:28.723 ------------------------------ -• [4.199 seconds] -[sig-node] Containers -test/e2e/common/node/framework.go:23 - should use the image defaults if command and args are blank [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:39 +• [0.314 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Containers + [BeforeEach] [sig-api-machinery] ResourceQuota set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:22:40.4 - Jun 12 22:22:40.401: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename containers 06/12/23 22:22:40.404 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:40.476 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:40.486 - [BeforeEach] [sig-node] Containers + STEP: Creating a kubernetes client 07/27/23 02:58:28.435 + Jul 27 02:58:28.435: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename resourcequota 07/27/23 02:58:28.436 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:28.491 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:28.5 + [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:31 - [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:39 - Jun 12 22:22:40.526: INFO: Waiting up to 5m0s for pod "client-containers-c9085b9c-792a-4edb-9512-19eabc8681e5" in namespace "containers-4045" to be "running" - Jun 12 22:22:40.532: INFO: Pod "client-containers-c9085b9c-792a-4edb-9512-19eabc8681e5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.243932ms - Jun 12 22:22:42.551: INFO: Pod "client-containers-c9085b9c-792a-4edb-9512-19eabc8681e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025341022s - Jun 12 22:22:44.541: INFO: Pod "client-containers-c9085b9c-792a-4edb-9512-19eabc8681e5": Phase="Running", Reason="", readiness=true. Elapsed: 4.014715162s - Jun 12 22:22:44.541: INFO: Pod "client-containers-c9085b9c-792a-4edb-9512-19eabc8681e5" satisfied condition "running" - [AfterEach] [sig-node] Containers + [It] should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 + STEP: Creating a ResourceQuota 07/27/23 02:58:28.539 + STEP: Getting a ResourceQuota 07/27/23 02:58:28.558 + STEP: Listing all ResourceQuotas with LabelSelector 07/27/23 02:58:28.575 + STEP: Patching the ResourceQuota 07/27/23 02:58:28.589 + STEP: Deleting a Collection of ResourceQuotas 07/27/23 02:58:28.638 + STEP: Verifying the deleted ResourceQuota 07/27/23 02:58:28.695 + [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 - Jun 12 22:22:44.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Containers + Jul 27 02:58:28.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Containers + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Containers + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 - STEP: Destroying namespace "containers-4045" for this suite. 06/12/23 22:22:44.57 + STEP: Destroying namespace "resourcequota-9501" for this suite. 
07/27/23 02:58:28.723 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Downward API volume - should provide container's memory request [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:235 -[BeforeEach] [sig-storage] Downward API volume +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:22:44.599 -Jun 12 22:22:44.599: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename downward-api 06/12/23 22:22:44.613 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:44.707 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:44.719 -[BeforeEach] [sig-storage] Downward API volume +STEP: Creating a kubernetes client 07/27/23 02:58:28.75 +Jul 27 02:58:28.750: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename namespaces 07/27/23 02:58:28.751 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:28.875 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:28.887 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 -[It] should provide container's memory request [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:235 -STEP: Creating a pod to test downward API volume plugin 06/12/23 22:22:44.73 -Jun 12 22:22:44.767: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9" in namespace "downward-api-5349" to be "Succeeded or Failed" -Jun 12 22:22:44.802: INFO: Pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.783751ms -Jun 12 22:22:46.821: INFO: Pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054239083s -Jun 12 22:22:48.824: INFO: Pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057049521s -Jun 12 22:22:50.809: INFO: Pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041810312s -STEP: Saw pod success 06/12/23 22:22:50.809 -Jun 12 22:22:50.809: INFO: Pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9" satisfied condition "Succeeded or Failed" -Jun 12 22:22:50.816: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9 container client-container: -STEP: delete the pod 06/12/23 22:22:50.843 -Jun 12 22:22:50.866: INFO: Waiting for pod downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9 to disappear -Jun 12 22:22:50.876: INFO: Pod downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9 no longer exists -[AfterEach] [sig-storage] Downward API volume +[It] should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 +STEP: creating a Namespace 07/27/23 02:58:28.964 +STEP: patching the Namespace 07/27/23 02:58:29.051 +STEP: get the Namespace and ensuring it has the label 07/27/23 02:58:29.074 +[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 22:22:50.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Downward API volume +Jul 27 02:58:29.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Downward API volume +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "downward-api-5349" for this suite. 06/12/23 22:22:50.905 +STEP: Destroying namespace "namespaces-5143" for this suite. 07/27/23 02:58:29.136 +STEP: Destroying namespace "nspatchtest-96c04b6b-cbeb-4efd-b74b-3880cf7100e9-4459" for this suite. 
07/27/23 02:58:29.196 ------------------------------ -• [SLOW TEST] [6.331 seconds] -[sig-storage] Downward API volume -test/e2e/common/storage/framework.go:23 - should provide container's memory request [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:235 +• [0.477 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Downward API volume + [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:22:44.599 - Jun 12 22:22:44.599: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename downward-api 06/12/23 22:22:44.613 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:44.707 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:44.719 - [BeforeEach] [sig-storage] Downward API volume + STEP: Creating a kubernetes client 07/27/23 02:58:28.75 + Jul 27 02:58:28.750: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename namespaces 07/27/23 02:58:28.751 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:28.875 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:28.887 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Downward API volume - test/e2e/common/storage/downwardapi_volume.go:44 - [It] should provide container's memory request [NodeConformance] [Conformance] - test/e2e/common/storage/downwardapi_volume.go:235 - STEP: Creating a pod to test downward API volume plugin 06/12/23 22:22:44.73 - Jun 12 22:22:44.767: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9" in namespace "downward-api-5349" to be "Succeeded or Failed" - Jun 12 22:22:44.802: INFO: Pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.783751ms - Jun 12 22:22:46.821: INFO: Pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054239083s - Jun 12 22:22:48.824: INFO: Pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057049521s - Jun 12 22:22:50.809: INFO: Pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041810312s - STEP: Saw pod success 06/12/23 22:22:50.809 - Jun 12 22:22:50.809: INFO: Pod "downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9" satisfied condition "Succeeded or Failed" - Jun 12 22:22:50.816: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9 container client-container: - STEP: delete the pod 06/12/23 22:22:50.843 - Jun 12 22:22:50.866: INFO: Waiting for pod downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9 to disappear - Jun 12 22:22:50.876: INFO: Pod downwardapi-volume-ec390af3-dab8-4bff-a253-6a016833b0f9 no longer exists - [AfterEach] [sig-storage] Downward API volume + [It] should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 + STEP: creating a Namespace 07/27/23 02:58:28.964 + STEP: patching the Namespace 07/27/23 02:58:29.051 + STEP: get the Namespace and ensuring it has the label 07/27/23 02:58:29.074 + [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 22:22:50.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Downward API volume + Jul 27 02:58:29.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Downward API volume + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "downward-api-5349" for this suite. 06/12/23 22:22:50.905 + STEP: Destroying namespace "namespaces-5143" for this suite. 07/27/23 02:58:29.136 + STEP: Destroying namespace "nspatchtest-96c04b6b-cbeb-4efd-b74b-3880cf7100e9-4459" for this suite. 
07/27/23 02:58:29.196 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] Deployment - should validate Deployment Status endpoints [Conformance] - test/e2e/apps/deployment.go:479 -[BeforeEach] [sig-apps] Deployment +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:22:50.932 -Jun 12 22:22:50.932: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename deployment 06/12/23 22:22:50.934 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:51.004 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:51.022 -[BeforeEach] [sig-apps] Deployment +STEP: Creating a kubernetes client 07/27/23 02:58:29.229 +Jul 27 02:58:29.229: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 02:58:29.23 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:29.357 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:29.401 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 -[It] should validate Deployment Status endpoints [Conformance] - test/e2e/apps/deployment.go:479 -STEP: creating a Deployment 06/12/23 22:22:51.049 -Jun 12 22:22:51.049: INFO: Creating simple deployment test-deployment-zsh5g -Jun 12 22:22:51.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), Reason:"NewReplicaSetCreated", Message:"Created new replica set \"test-deployment-zsh5g-54bc444df\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} -Jun 12 22:22:53.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-zsh5g-54bc444df\" is progressing."}}, 
CollisionCount:(*int32)(nil)} -STEP: Getting /status 06/12/23 22:22:55.17 -Jun 12 22:22:55.184: INFO: Deployment test-deployment-zsh5g has Conditions: [{Available True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.}] -STEP: updating Deployment Status 06/12/23 22:22:55.184 -Jun 12 22:22:55.219: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 53, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-zsh5g-54bc444df\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} -STEP: watching for the Deployment status to be updated 06/12/23 22:22:55.219 -Jun 12 22:22:55.254: INFO: Observed &Deployment event: ADDED -Jun 12 22:22:55.254: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-zsh5g-54bc444df"} -Jun 12 22:22:55.255: INFO: Observed &Deployment event: MODIFIED -Jun 12 22:22:55.255: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-zsh5g-54bc444df"} -Jun 12 22:22:55.255: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} -Jun 12 22:22:55.256: INFO: Observed &Deployment event: MODIFIED -Jun 12 22:22:55.256: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} -Jun 12 22:22:55.257: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-zsh5g-54bc444df" is progressing.} -Jun 12 22:22:55.258: INFO: Observed &Deployment event: MODIFIED -Jun 12 22:22:55.258: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with 
annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} -Jun 12 22:22:55.258: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.} -Jun 12 22:22:55.259: INFO: Observed &Deployment event: MODIFIED -Jun 12 22:22:55.259: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} -Jun 12 22:22:55.259: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.} -Jun 12 22:22:55.259: INFO: Found Deployment test-deployment-zsh5g in namespace deployment-4898 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} -Jun 12 22:22:55.259: INFO: Deployment test-deployment-zsh5g has an updated status -STEP: patching the Statefulset Status 06/12/23 22:22:55.259 -Jun 12 22:22:55.259: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} -Jun 12 22:22:55.294: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} -STEP: watching for the Deployment status to be patched 06/12/23 22:22:55.294 -Jun 12 22:22:55.341: INFO: Observed &Deployment event: ADDED -Jun 12 22:22:55.341: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-zsh5g-54bc444df"} -Jun 12 22:22:55.342: INFO: Observed &Deployment event: MODIFIED -Jun 12 22:22:55.342: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-zsh5g-54bc444df"} -Jun 12 22:22:55.342: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} -Jun 12 22:22:55.343: INFO: Observed &Deployment event: MODIFIED -Jun 12 22:22:55.343: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: 
map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} -Jun 12 22:22:55.344: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-zsh5g-54bc444df" is progressing.} -Jun 12 22:22:55.345: INFO: Observed &Deployment event: MODIFIED -Jun 12 22:22:55.346: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} -Jun 12 22:22:55.347: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.} -Jun 12 22:22:55.347: INFO: Observed &Deployment event: MODIFIED -Jun 12 22:22:55.347: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} -Jun 12 22:22:55.348: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.} -Jun 12 22:22:55.348: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} -Jun 12 22:22:55.348: INFO: Observed &Deployment event: MODIFIED -Jun 12 22:22:55.348: INFO: Found deployment test-deployment-zsh5g in namespace deployment-4898 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } -Jun 12 22:22:55.349: INFO: Deployment test-deployment-zsh5g has a patched status -[AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 -Jun 12 22:22:55.373: INFO: Deployment "test-deployment-zsh5g": -&Deployment{ObjectMeta:{test-deployment-zsh5g deployment-4898 fde7547a-59f2-46af-9e2c-5bd9e34bf3ac 143024 1 2023-06-12 22:22:51 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-06-12 22:22:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2023-06-12 22:22:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2023-06-12 22:22:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007982148 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-06-12 22:22:55 +0000 UTC,LastTransitionTime:2023-06-12 22:22:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.,LastUpdateTime:2023-06-12 22:22:55 +0000 UTC,LastTransitionTime:2023-06-12 22:22:55 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} - -Jun 12 22:22:55.407: INFO: New 
ReplicaSet "test-deployment-zsh5g-54bc444df" of Deployment "test-deployment-zsh5g": -&ReplicaSet{ObjectMeta:{test-deployment-zsh5g-54bc444df deployment-4898 793d00a5-3167-4e9b-958b-846f73c42e43 143008 1 2023-06-12 22:22:51 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-zsh5g fde7547a-59f2-46af-9e2c-5bd9e34bf3ac 0xc007982550 0xc007982551}] [] [{kube-controller-manager Update apps/v1 2023-06-12 22:22:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fde7547a-59f2-46af-9e2c-5bd9e34bf3ac\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:22:53 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 54bc444df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0079825f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} -Jun 12 22:22:55.414: INFO: Pod "test-deployment-zsh5g-54bc444df-52fjm" is available: -&Pod{ObjectMeta:{test-deployment-zsh5g-54bc444df-52fjm test-deployment-zsh5g-54bc444df- deployment-4898 687b49bf-3603-4d2a-84f8-099240243d15 143007 0 2023-06-12 22:22:51 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[cni.projectcalico.org/containerID:714e0e3d5be608ce59e4d8689ab47d4126f79ac0ec7c549fbda074c5070534b3 cni.projectcalico.org/podIP:172.30.224.27/32 cni.projectcalico.org/podIPs:172.30.224.27/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.27" - ], - "default": true, - "dns": {} -}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet 
test-deployment-zsh5g-54bc444df 793d00a5-3167-4e9b-958b-846f73c42e43 0xc0079829b7 0xc0079829b8}] [] [{kube-controller-manager Update v1 2023-06-12 22:22:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"793d00a5-3167-4e9b-958b-846f73c42e43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 22:22:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 22:22:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 22:22:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9zqsg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zqsg,ReadOnly:true,MountP
ath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c66,c45,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:22:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:22:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:22:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:22:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.27,StartTime:2023-06-12 22:22:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 22:22:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://998254484f3fe2e01696e836bd0b2276b8bdbc9dd709c2084d6563d5f5651679,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} 
-[AfterEach] [sig-apps] Deployment +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 02:58:29.502 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:58:30.114 +STEP: Deploying the webhook pod 07/27/23 02:58:30.206 +STEP: Wait for the deployment to be ready 07/27/23 02:58:30.267 +Jul 27 02:58:30.306: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 02:58:32.341 +STEP: Verifying the service has paired with the endpoint 07/27/23 02:58:32.37 +Jul 27 02:58:33.370: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 +STEP: Registering the webhook via the AdmissionRegistration API 07/27/23 02:58:33.382 +STEP: create a pod that should be denied by the webhook 07/27/23 02:58:33.428 +STEP: create a pod that causes the webhook to hang 07/27/23 02:58:33.475 +STEP: create a configmap that should be denied by the webhook 07/27/23 02:58:43.502 +STEP: create a configmap that should be admitted by the webhook 07/27/23 02:58:43.567 +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook 07/27/23 02:58:43.601 +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 07/27/23 02:58:43.633 +STEP: create a namespace that bypass the webhook 07/27/23 02:58:43.661 +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 07/27/23 02:58:43.696 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:22:55.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Deployment +Jul 27 02:58:43.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Deployment +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "deployment-4898" for this suite. 06/12/23 22:22:55.453 +STEP: Destroying namespace "webhook-2127" for this suite. 07/27/23 02:58:43.972 +STEP: Destroying namespace "webhook-2127-markers" for this suite. 
07/27/23 02:58:43.991 ------------------------------ -• [4.571 seconds] -[sig-apps] Deployment -test/e2e/apps/framework.go:23 - should validate Deployment Status endpoints [Conformance] - test/e2e/apps/deployment.go:479 +• [SLOW TEST] [14.798 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Deployment + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:22:50.932 - Jun 12 22:22:50.932: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename deployment 06/12/23 22:22:50.934 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:51.004 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:51.022 - [BeforeEach] [sig-apps] Deployment + STEP: Creating a kubernetes client 07/27/23 02:58:29.229 + Jul 27 02:58:29.229: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 02:58:29.23 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:29.357 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:29.401 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 - [It] should validate Deployment Status endpoints [Conformance] - test/e2e/apps/deployment.go:479 - STEP: creating a Deployment 06/12/23 22:22:51.049 - Jun 12 22:22:51.049: INFO: Creating simple deployment test-deployment-zsh5g - Jun 12 22:22:51.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), Reason:"NewReplicaSetCreated", Message:"Created new replica set \"test-deployment-zsh5g-54bc444df\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} - Jun 12 22:22:53.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-deployment-zsh5g-54bc444df\" is progressing."}}, CollisionCount:(*int32)(nil)} - STEP: Getting /status 06/12/23 22:22:55.17 - Jun 12 22:22:55.184: INFO: Deployment test-deployment-zsh5g has Conditions: [{Available True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.}] - STEP: updating Deployment Status 06/12/23 22:22:55.184 - Jun 12 22:22:55.219: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 53, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.June, 12, 22, 22, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.June, 12, 22, 22, 51, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-zsh5g-54bc444df\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} - STEP: watching for the Deployment status to be updated 06/12/23 22:22:55.219 - Jun 12 22:22:55.254: INFO: Observed &Deployment event: ADDED - Jun 12 22:22:55.254: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-zsh5g-54bc444df"} - Jun 12 22:22:55.255: INFO: Observed &Deployment event: MODIFIED - Jun 12 22:22:55.255: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-zsh5g-54bc444df"} - Jun 12 22:22:55.255: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} - Jun 12 22:22:55.256: INFO: Observed &Deployment event: MODIFIED - Jun 12 22:22:55.256: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} - Jun 12 22:22:55.257: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-zsh5g-54bc444df" is progressing.} - Jun 12 22:22:55.258: INFO: Observed &Deployment event: MODIFIED - Jun 12 22:22:55.258: 
INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} - Jun 12 22:22:55.258: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.} - Jun 12 22:22:55.259: INFO: Observed &Deployment event: MODIFIED - Jun 12 22:22:55.259: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} - Jun 12 22:22:55.259: INFO: Observed Deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.} - Jun 12 22:22:55.259: INFO: Found Deployment test-deployment-zsh5g in namespace deployment-4898 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} - Jun 12 22:22:55.259: INFO: Deployment test-deployment-zsh5g has an updated status - STEP: patching the Statefulset Status 06/12/23 22:22:55.259 - Jun 12 22:22:55.259: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} - Jun 12 22:22:55.294: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} - STEP: watching for the Deployment status to be patched 06/12/23 22:22:55.294 - Jun 12 22:22:55.341: INFO: Observed &Deployment event: ADDED - Jun 12 22:22:55.341: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-zsh5g-54bc444df"} - Jun 12 22:22:55.342: INFO: Observed &Deployment event: MODIFIED - Jun 12 22:22:55.342: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-zsh5g-54bc444df"} - Jun 12 22:22:55.342: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} - Jun 12 22:22:55.343: INFO: Observed &Deployment event: MODIFIED - Jun 12 22:22:55.343: INFO: Observed deployment 
test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} - Jun 12 22:22:55.344: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:51 +0000 UTC 2023-06-12 22:22:51 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-zsh5g-54bc444df" is progressing.} - Jun 12 22:22:55.345: INFO: Observed &Deployment event: MODIFIED - Jun 12 22:22:55.346: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} - Jun 12 22:22:55.347: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.} - Jun 12 22:22:55.347: INFO: Observed &Deployment event: MODIFIED - Jun 12 22:22:55.347: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} - Jun 12 22:22:55.348: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-06-12 22:22:53 +0000 UTC 2023-06-12 22:22:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.} - Jun 12 22:22:55.348: INFO: Observed deployment test-deployment-zsh5g in namespace deployment-4898 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} - Jun 12 22:22:55.348: INFO: Observed &Deployment event: MODIFIED - Jun 12 22:22:55.348: INFO: Found deployment test-deployment-zsh5g in namespace deployment-4898 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } - Jun 12 22:22:55.349: INFO: Deployment test-deployment-zsh5g has a patched status - [AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 - Jun 12 22:22:55.373: INFO: Deployment "test-deployment-zsh5g": - &Deployment{ObjectMeta:{test-deployment-zsh5g deployment-4898 fde7547a-59f2-46af-9e2c-5bd9e34bf3ac 143024 1 2023-06-12 22:22:51 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-06-12 22:22:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2023-06-12 22:22:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2023-06-12 22:22:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007982148 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-06-12 22:22:55 +0000 UTC,LastTransitionTime:2023-06-12 22:22:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-deployment-zsh5g-54bc444df" has successfully progressed.,LastUpdateTime:2023-06-12 22:22:55 +0000 UTC,LastTransitionTime:2023-06-12 22:22:55 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} - - Jun 12 22:22:55.407: INFO: New 
ReplicaSet "test-deployment-zsh5g-54bc444df" of Deployment "test-deployment-zsh5g": - &ReplicaSet{ObjectMeta:{test-deployment-zsh5g-54bc444df deployment-4898 793d00a5-3167-4e9b-958b-846f73c42e43 143008 1 2023-06-12 22:22:51 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-zsh5g fde7547a-59f2-46af-9e2c-5bd9e34bf3ac 0xc007982550 0xc007982551}] [] [{kube-controller-manager Update apps/v1 2023-06-12 22:22:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fde7547a-59f2-46af-9e2c-5bd9e34bf3ac\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-06-12 22:22:53 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 54bc444df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0079825f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} - Jun 12 22:22:55.414: INFO: Pod "test-deployment-zsh5g-54bc444df-52fjm" is available: - &Pod{ObjectMeta:{test-deployment-zsh5g-54bc444df-52fjm test-deployment-zsh5g-54bc444df- deployment-4898 687b49bf-3603-4d2a-84f8-099240243d15 143007 0 2023-06-12 22:22:51 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[cni.projectcalico.org/containerID:714e0e3d5be608ce59e4d8689ab47d4126f79ac0ec7c549fbda074c5070534b3 cni.projectcalico.org/podIP:172.30.224.27/32 cni.projectcalico.org/podIPs:172.30.224.27/32 k8s.v1.cni.cncf.io/network-status:[{ - "name": "k8s-pod-network", - "ips": [ - "172.30.224.27" - ], - "default": true, - "dns": {} - }] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet 
test-deployment-zsh5g-54bc444df 793d00a5-3167-4e9b-958b-846f73c42e43 0xc0079829b7 0xc0079829b8}] [] [{kube-controller-manager Update v1 2023-06-12 22:22:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"793d00a5-3167-4e9b-958b-846f73c42e43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-06-12 22:22:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {multus Update v1 2023-06-12 22:22:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2023-06-12 22:22:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.30.224.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9zqsg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zqsg,ReadOnly:true,MountP
ath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.138.75.70,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c66,c45,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:22:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:22:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:22:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-12 22:22:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.75.70,PodIP:172.30.224.27,StartTime:2023-06-12 22:22:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-12 22:22:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:cri-o://998254484f3fe2e01696e836bd0b2276b8bdbc9dd709c2084d6563d5f5651679,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.30.224.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} 
- [AfterEach] [sig-apps] Deployment + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 02:58:29.502 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 02:58:30.114 + STEP: Deploying the webhook pod 07/27/23 02:58:30.206 + STEP: Wait for the deployment to be ready 07/27/23 02:58:30.267 + Jul 27 02:58:30.306: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 02:58:32.341 + STEP: Verifying the service has paired with the endpoint 07/27/23 02:58:32.37 + Jul 27 02:58:33.370: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 + STEP: Registering the webhook via the AdmissionRegistration API 07/27/23 02:58:33.382 + STEP: create a pod that should be denied by the webhook 07/27/23 02:58:33.428 + STEP: create a pod that causes the webhook to hang 07/27/23 02:58:33.475 + STEP: create a configmap that should be denied by the webhook 07/27/23 02:58:43.502 + STEP: create a configmap that should be admitted by the webhook 07/27/23 02:58:43.567 + STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook 07/27/23 02:58:43.601 + STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 07/27/23 02:58:43.633 + STEP: create a namespace that bypass the webhook 07/27/23 02:58:43.661 + STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 07/27/23 02:58:43.696 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:22:55.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Deployment + Jul 27 02:58:43.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Deployment + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "deployment-4898" for this suite. 06/12/23 22:22:55.453 + STEP: Destroying namespace "webhook-2127" for this suite. 07/27/23 02:58:43.972 + STEP: Destroying namespace "webhook-2127-markers" for this suite. 
07/27/23 02:58:43.991 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSS ------------------------------ -[sig-network] DNS - should provide DNS for pods for Hostname [Conformance] - test/e2e/network/dns.go:248 -[BeforeEach] [sig-network] DNS +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:962 +[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:22:55.508 -Jun 12 22:22:55.508: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename dns 06/12/23 22:22:55.51 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:55.594 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:55.602 -[BeforeEach] [sig-network] DNS +STEP: Creating a kubernetes client 07/27/23 02:58:44.03 +Jul 27 02:58:44.030: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename kubectl 07/27/23 02:58:44.031 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:44.077 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:44.097 +[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 -[It] should provide DNS for pods for Hostname [Conformance] - test/e2e/network/dns.go:248 -STEP: Creating a test headless service 06/12/23 22:22:55.612 -STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6576.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6576.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done - 06/12/23 22:22:55.683 -STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6576.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6576.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done - 06/12/23 22:22:55.683 -STEP: creating a pod to probe DNS 06/12/23 22:22:55.683 -STEP: submitting the pod to kubernetes 06/12/23 22:22:55.683 -Jun 12 22:22:55.730: INFO: Waiting up to 15m0s for pod "dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2" in namespace "dns-6576" to be "running" -Jun 12 22:22:55.739: INFO: Pod "dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.246852ms -Jun 12 22:22:57.756: INFO: Pod "dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026083476s -Jun 12 22:22:59.748: INFO: Pod "dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.01807125s -Jun 12 22:22:59.748: INFO: Pod "dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2" satisfied condition "running" -STEP: retrieving the pod 06/12/23 22:22:59.748 -STEP: looking for the results for each expected name from probers 06/12/23 22:22:59.756 -Jun 12 22:22:59.859: INFO: DNS probes using dns-6576/dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2 succeeded - -STEP: deleting the pod 06/12/23 22:22:59.859 -STEP: deleting the test headless service 06/12/23 22:22:59.924 -[AfterEach] [sig-network] DNS +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:962 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 07/27/23 02:58:44.132 +Jul 27 02:58:44.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9457 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Jul 27 02:58:44.253: INFO: stderr: "" +Jul 27 02:58:44.253: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run 07/27/23 02:58:44.253 +Jul 27 02:58:44.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9457 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "registry.k8s.io/e2e-test-images/busybox:1.29-4"}]}} --dry-run=server' +Jul 27 02:58:44.620: INFO: stderr: "" +Jul 27 02:58:44.620: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 07/27/23 02:58:44.62 +Jul 27 02:58:44.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9457 delete pods e2e-test-httpd-pod' +Jul 27 02:58:46.913: INFO: stderr: "" +Jul 27 02:58:46.913: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 -Jun 12 22:22:59.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] DNS +Jul 27 02:58:46.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 -STEP: Destroying namespace "dns-6576" for this suite. 06/12/23 22:23:00.015 +STEP: Destroying namespace "kubectl-9457" for this suite. 
07/27/23 02:58:46.934 ------------------------------ -• [4.549 seconds] -[sig-network] DNS -test/e2e/network/common/framework.go:23 - should provide DNS for pods for Hostname [Conformance] - test/e2e/network/dns.go:248 +• [2.928 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl server-side dry-run + test/e2e/kubectl/kubectl.go:956 + should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:962 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] DNS + [BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:22:55.508 - Jun 12 22:22:55.508: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename dns 06/12/23 22:22:55.51 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:22:55.594 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:22:55.602 - [BeforeEach] [sig-network] DNS + STEP: Creating a kubernetes client 07/27/23 02:58:44.03 + Jul 27 02:58:44.030: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename kubectl 07/27/23 02:58:44.031 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:44.077 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:44.097 + [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 - [It] should provide DNS for pods for Hostname [Conformance] - test/e2e/network/dns.go:248 - STEP: Creating a test headless service 06/12/23 22:22:55.612 - STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6576.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6576.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done - 06/12/23 22:22:55.683 - STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6576.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6576.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done - 06/12/23 22:22:55.683 - STEP: creating a pod to probe DNS 06/12/23 22:22:55.683 - STEP: submitting the pod to kubernetes 06/12/23 22:22:55.683 - Jun 12 22:22:55.730: INFO: Waiting up to 15m0s for pod "dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2" in namespace "dns-6576" to be "running" - Jun 12 22:22:55.739: INFO: Pod "dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.246852ms - Jun 12 22:22:57.756: INFO: Pod "dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026083476s - Jun 12 22:22:59.748: INFO: Pod "dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.01807125s - Jun 12 22:22:59.748: INFO: Pod "dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2" satisfied condition "running" - STEP: retrieving the pod 06/12/23 22:22:59.748 - STEP: looking for the results for each expected name from probers 06/12/23 22:22:59.756 - Jun 12 22:22:59.859: INFO: DNS probes using dns-6576/dns-test-19d73096-a602-4fa4-b8ac-a33280955aa2 succeeded - - STEP: deleting the pod 06/12/23 22:22:59.859 - STEP: deleting the test headless service 06/12/23 22:22:59.924 - [AfterEach] [sig-network] DNS + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:962 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 07/27/23 02:58:44.132 + Jul 27 02:58:44.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9457 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' + Jul 27 02:58:44.253: INFO: stderr: "" + Jul 27 02:58:44.253: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: replace the image in the pod with server-side dry-run 07/27/23 02:58:44.253 + Jul 27 02:58:44.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9457 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "registry.k8s.io/e2e-test-images/busybox:1.29-4"}]}} --dry-run=server' + Jul 27 02:58:44.620: INFO: stderr: "" + Jul 27 02:58:44.620: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" + STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 07/27/23 02:58:44.62 + Jul 27 02:58:44.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=kubectl-9457 delete pods e2e-test-httpd-pod' + Jul 27 02:58:46.913: INFO: stderr: "" + Jul 27 02:58:46.913: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 - Jun 12 22:22:59.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] DNS + Jul 27 02:58:46.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 - STEP: Destroying namespace "dns-6576" for this suite. 06/12/23 22:23:00.015 + STEP: Destroying namespace "kubectl-9457" for this suite. 
07/27/23 02:58:46.934 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Variable Expansion - should fail substituting values in a volume subpath with backticks [Slow] [Conformance] - test/e2e/common/node/expansion.go:152 -[BeforeEach] [sig-node] Variable Expansion +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:23:00.058 -Jun 12 22:23:00.058: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename var-expansion 06/12/23 22:23:00.059 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:00.135 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:00.153 -[BeforeEach] [sig-node] Variable Expansion +STEP: Creating a kubernetes client 07/27/23 02:58:46.959 +Jul 27 02:58:46.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:58:46.96 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:47.029 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:47.04 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] - test/e2e/common/node/expansion.go:152 -Jun 12 22:23:00.185: INFO: Waiting up to 2m0s for pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd" in namespace "var-expansion-5425" to be "container 0 failed with reason CreateContainerConfigError" -Jun 12 22:23:00.202: INFO: Pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.958055ms -Jun 12 22:23:02.218: INFO: Pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032422653s -Jun 12 22:23:04.210: INFO: Pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.024492433s -Jun 12 22:23:04.210: INFO: Pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd" satisfied condition "container 0 failed with reason CreateContainerConfigError" -Jun 12 22:23:04.210: INFO: Deleting pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd" in namespace "var-expansion-5425" -Jun 12 22:23:04.221: INFO: Wait up to 5m0s for pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd" to be fully deleted -[AfterEach] [sig-node] Variable Expansion +[It] works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 +Jul 27 02:58:47.054: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 07/27/23 02:58:55.919 +Jul 27 02:58:55.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-1342 --namespace=crd-publish-openapi-1342 create -f -' +Jul 27 02:58:59.328: INFO: stderr: "" +Jul 27 02:58:59.328: INFO: stdout: "e2e-test-crd-publish-openapi-7581-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Jul 27 02:58:59.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-1342 --namespace=crd-publish-openapi-1342 delete e2e-test-crd-publish-openapi-7581-crds test-cr' +Jul 27 02:58:59.543: INFO: stderr: "" +Jul 27 02:58:59.543: INFO: stdout: "e2e-test-crd-publish-openapi-7581-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Jul 27 02:58:59.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-1342 --namespace=crd-publish-openapi-1342 apply -f -' +Jul 27 02:59:00.899: INFO: stderr: "" +Jul 27 02:59:00.900: INFO: stdout: "e2e-test-crd-publish-openapi-7581-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Jul 27 02:59:00.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-1342 --namespace=crd-publish-openapi-1342 delete e2e-test-crd-publish-openapi-7581-crds test-cr' +Jul 27 02:59:01.012: INFO: stderr: "" +Jul 27 02:59:01.012: INFO: stdout: "e2e-test-crd-publish-openapi-7581-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR 07/27/23 02:59:01.012 +Jul 27 02:59:01.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-1342 explain e2e-test-crd-publish-openapi-7581-crds' +Jul 27 02:59:01.375: INFO: stderr: "" +Jul 27 02:59:01.375: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7581-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:23:08.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Variable Expansion +Jul 27 02:59:08.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] 
Variable Expansion +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "var-expansion-5425" for this suite. 06/12/23 22:23:08.249 +STEP: Destroying namespace "crd-publish-openapi-1342" for this suite. 07/27/23 02:59:08.892 ------------------------------ -• [SLOW TEST] [8.231 seconds] -[sig-node] Variable Expansion -test/e2e/common/node/framework.go:23 - should fail substituting values in a volume subpath with backticks [Slow] [Conformance] - test/e2e/common/node/expansion.go:152 +• [SLOW TEST] [21.954 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Variable Expansion + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:23:00.058 - Jun 12 22:23:00.058: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename var-expansion 06/12/23 22:23:00.059 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:00.135 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:00.153 - [BeforeEach] [sig-node] Variable Expansion + STEP: Creating a kubernetes client 07/27/23 02:58:46.959 + Jul 27 02:58:46.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 02:58:46.96 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:58:47.029 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:58:47.04 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] - test/e2e/common/node/expansion.go:152 - Jun 12 22:23:00.185: INFO: Waiting up to 2m0s for pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd" in namespace "var-expansion-5425" to be "container 0 failed with reason CreateContainerConfigError" - Jun 12 22:23:00.202: INFO: Pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.958055ms - Jun 12 22:23:02.218: INFO: Pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032422653s - Jun 12 22:23:04.210: INFO: Pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.024492433s - Jun 12 22:23:04.210: INFO: Pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd" satisfied condition "container 0 failed with reason CreateContainerConfigError" - Jun 12 22:23:04.210: INFO: Deleting pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd" in namespace "var-expansion-5425" - Jun 12 22:23:04.221: INFO: Wait up to 5m0s for pod "var-expansion-302bdbe0-a05c-479b-ae22-38e6696363cd" to be fully deleted - [AfterEach] [sig-node] Variable Expansion + [It] works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 + Jul 27 02:58:47.054: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 07/27/23 02:58:55.919 + Jul 27 02:58:55.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-1342 --namespace=crd-publish-openapi-1342 create -f -' + Jul 27 02:58:59.328: INFO: stderr: "" + Jul 27 02:58:59.328: INFO: stdout: "e2e-test-crd-publish-openapi-7581-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" + Jul 27 02:58:59.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-1342 --namespace=crd-publish-openapi-1342 delete e2e-test-crd-publish-openapi-7581-crds test-cr' + Jul 27 02:58:59.543: INFO: stderr: "" + Jul 27 02:58:59.543: INFO: stdout: "e2e-test-crd-publish-openapi-7581-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" + Jul 27 02:58:59.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-1342 --namespace=crd-publish-openapi-1342 apply -f -' + Jul 27 02:59:00.899: INFO: stderr: "" + Jul 27 02:59:00.900: INFO: stdout: "e2e-test-crd-publish-openapi-7581-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" + Jul 27 02:59:00.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-1342 --namespace=crd-publish-openapi-1342 delete e2e-test-crd-publish-openapi-7581-crds test-cr' + Jul 27 02:59:01.012: INFO: stderr: "" + Jul 27 02:59:01.012: INFO: stdout: "e2e-test-crd-publish-openapi-7581-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR 07/27/23 02:59:01.012 + Jul 27 02:59:01.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=crd-publish-openapi-1342 explain e2e-test-crd-publish-openapi-7581-crds' + Jul 27 02:59:01.375: INFO: stderr: "" + Jul 27 02:59:01.375: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7581-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:23:08.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Variable Expansion + Jul 27 02:59:08.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - 
[DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "var-expansion-5425" for this suite. 06/12/23 22:23:08.249 + STEP: Destroying namespace "crd-publish-openapi-1342" for this suite. 07/27/23 02:59:08.892 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------- -[sig-storage] Secrets - should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:99 -[BeforeEach] [sig-storage] Secrets +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:226 +[BeforeEach] [sig-node] Pods set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:23:08.298 -Jun 12 22:23:08.298: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 22:23:08.302 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:08.383 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:08.403 -[BeforeEach] [sig-storage] Secrets +STEP: Creating a kubernetes client 07/27/23 02:59:08.913 +Jul 27 02:59:08.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 02:59:08.914 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:59:08.975 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:59:09.002 +[BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 -[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:99 -STEP: Creating secret with name secret-test-6d3c7dc7-d4c1-486b-8d22-417446b22c9a 06/12/23 22:23:08.534 -STEP: Creating a pod to test consume secrets 06/12/23 22:23:08.551 -Jun 12 22:23:08.575: INFO: Waiting up to 5m0s for pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21" in namespace "secrets-3811" to be "Succeeded or Failed" -Jun 12 22:23:08.610: INFO: Pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21": Phase="Pending", Reason="", readiness=false. Elapsed: 34.78563ms -Jun 12 22:23:10.618: INFO: Pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042417006s -Jun 12 22:23:12.617: INFO: Pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041814018s -Jun 12 22:23:14.617: INFO: Pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041448448s -STEP: Saw pod success 06/12/23 22:23:14.617 -Jun 12 22:23:14.617: INFO: Pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21" satisfied condition "Succeeded or Failed" -Jun 12 22:23:14.624: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21 container secret-volume-test: -STEP: delete the pod 06/12/23 22:23:14.643 -Jun 12 22:23:14.665: INFO: Waiting for pod pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21 to disappear -Jun 12 22:23:14.677: INFO: Pod pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21 no longer exists -[AfterEach] [sig-storage] Secrets +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:226 +STEP: creating the pod 07/27/23 02:59:09.014 +STEP: setting up watch 07/27/23 02:59:09.014 +STEP: submitting the pod to kubernetes 07/27/23 02:59:09.145 +STEP: verifying the pod is in kubernetes 07/27/23 02:59:09.196 +STEP: verifying pod creation was observed 07/27/23 02:59:09.237 +Jul 27 02:59:09.237: INFO: Waiting up to 5m0s for pod "pod-submit-remove-7f48e0d6-221e-47dc-bc1b-a5635402abf5" in namespace "pods-6136" to be "running" +Jul 27 02:59:09.248: INFO: Pod "pod-submit-remove-7f48e0d6-221e-47dc-bc1b-a5635402abf5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.994589ms +Jul 27 02:59:11.262: INFO: Pod "pod-submit-remove-7f48e0d6-221e-47dc-bc1b-a5635402abf5": Phase="Running", Reason="", readiness=true. Elapsed: 2.024560307s +Jul 27 02:59:11.262: INFO: Pod "pod-submit-remove-7f48e0d6-221e-47dc-bc1b-a5635402abf5" satisfied condition "running" +STEP: deleting the pod gracefully 07/27/23 02:59:11.273 +STEP: verifying pod deletion was observed 07/27/23 02:59:11.292 +[AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 -Jun 12 22:23:14.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Secrets +Jul 27 02:59:14.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-3811" for this suite. 06/12/23 22:23:14.69 -STEP: Destroying namespace "secret-namespace-39" for this suite. 06/12/23 22:23:14.713 +STEP: Destroying namespace "pods-6136" for this suite. 
07/27/23 02:59:14.06 ------------------------------ -• [SLOW TEST] [6.451 seconds] -[sig-storage] Secrets -test/e2e/common/storage/framework.go:23 - should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:99 +• [SLOW TEST] [5.168 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:226 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Secrets + [BeforeEach] [sig-node] Pods set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:23:08.298 - Jun 12 22:23:08.298: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 22:23:08.302 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:08.383 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:08.403 - [BeforeEach] [sig-storage] Secrets + STEP: Creating a kubernetes client 07/27/23 02:59:08.913 + Jul 27 02:59:08.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 02:59:08.914 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:59:08.975 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:59:09.002 + [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 - [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:99 - STEP: Creating secret with name secret-test-6d3c7dc7-d4c1-486b-8d22-417446b22c9a 06/12/23 22:23:08.534 - STEP: Creating a pod to test consume secrets 06/12/23 22:23:08.551 - Jun 12 22:23:08.575: INFO: Waiting up to 5m0s for pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21" in namespace "secrets-3811" to be "Succeeded or Failed" - Jun 12 22:23:08.610: INFO: Pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21": Phase="Pending", Reason="", readiness=false. Elapsed: 34.78563ms - Jun 12 22:23:10.618: INFO: Pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042417006s - Jun 12 22:23:12.617: INFO: Pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041814018s - Jun 12 22:23:14.617: INFO: Pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041448448s - STEP: Saw pod success 06/12/23 22:23:14.617 - Jun 12 22:23:14.617: INFO: Pod "pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21" satisfied condition "Succeeded or Failed" - Jun 12 22:23:14.624: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21 container secret-volume-test: - STEP: delete the pod 06/12/23 22:23:14.643 - Jun 12 22:23:14.665: INFO: Waiting for pod pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21 to disappear - Jun 12 22:23:14.677: INFO: Pod pod-secrets-244e352d-5b65-4313-a516-29d18a62ae21 no longer exists - [AfterEach] [sig-storage] Secrets + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:226 + STEP: creating the pod 07/27/23 02:59:09.014 + STEP: setting up watch 07/27/23 02:59:09.014 + STEP: submitting the pod to kubernetes 07/27/23 02:59:09.145 + STEP: verifying the pod is in kubernetes 07/27/23 02:59:09.196 + STEP: verifying pod creation was observed 07/27/23 02:59:09.237 + Jul 27 02:59:09.237: INFO: Waiting up to 5m0s for pod "pod-submit-remove-7f48e0d6-221e-47dc-bc1b-a5635402abf5" in namespace "pods-6136" to be "running" + Jul 27 02:59:09.248: INFO: Pod "pod-submit-remove-7f48e0d6-221e-47dc-bc1b-a5635402abf5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.994589ms + Jul 27 02:59:11.262: INFO: Pod "pod-submit-remove-7f48e0d6-221e-47dc-bc1b-a5635402abf5": Phase="Running", Reason="", readiness=true. Elapsed: 2.024560307s + Jul 27 02:59:11.262: INFO: Pod "pod-submit-remove-7f48e0d6-221e-47dc-bc1b-a5635402abf5" satisfied condition "running" + STEP: deleting the pod gracefully 07/27/23 02:59:11.273 + STEP: verifying pod deletion was observed 07/27/23 02:59:11.292 + [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 - Jun 12 22:23:14.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Secrets + Jul 27 02:59:14.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-3811" for this suite. 06/12/23 22:23:14.69 - STEP: Destroying namespace "secret-namespace-39" for this suite. 06/12/23 22:23:14.713 + STEP: Destroying namespace "pods-6136" for this suite. 
07/27/23 02:59:14.06 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSS ------------------------------ -[sig-cli] Kubectl client Kubectl describe - should check if kubectl describe prints relevant information for rc and pods [Conformance] - test/e2e/kubectl/kubectl.go:1276 -[BeforeEach] [sig-cli] Kubectl client +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 +[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:23:14.771 -Jun 12 22:23:14.771: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 22:23:14.774 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:14.829 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:14.837 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 02:59:14.081 +Jul 27 02:59:14.081: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename statefulset 07/27/23 02:59:14.082 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:59:14.133 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:59:14.144 +[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] - test/e2e/kubectl/kubectl.go:1276 -Jun 12 22:23:14.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 create -f -' -Jun 12 22:23:16.524: INFO: stderr: "" -Jun 12 22:23:16.524: INFO: stdout: "replicationcontroller/agnhost-primary created\n" -Jun 12 22:23:16.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 create -f -' -Jun 12 22:23:17.789: INFO: stderr: "" -Jun 12 22:23:17.789: INFO: stdout: "service/agnhost-primary created\n" -STEP: Waiting for Agnhost primary to start. 06/12/23 22:23:17.789 -Jun 12 22:23:18.808: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 22:23:18.809: INFO: Found 0 / 1 -Jun 12 22:23:19.809: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 22:23:19.809: INFO: Found 1 / 1 -Jun 12 22:23:19.809: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 -Jun 12 22:23:19.823: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 22:23:19.823: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
-Jun 12 22:23:19.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 describe pod agnhost-primary-dqhr7' -Jun 12 22:23:21.193: INFO: stderr: "" -Jun 12 22:23:21.193: INFO: stdout: "Name: agnhost-primary-dqhr7\nNamespace: kubectl-3951\nPriority: 0\nService Account: default\nNode: 10.138.75.70/10.138.75.70\nStart Time: Mon, 12 Jun 2023 22:23:16 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: fd4688e08f57da7b8ce9f0ede956fbcc15a2b83de251d815752f228dea06c252\n cni.projectcalico.org/podIP: 172.30.224.59/32\n cni.projectcalico.org/podIPs: 172.30.224.59/32\n k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.30.224.59\"\n ],\n \"default\": true,\n \"dns\": {}\n }]\n openshift.io/scc: anyuid\nStatus: Running\nIP: 172.30.224.59\nIPs:\n IP: 172.30.224.59\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: cri-o://3b543dd5c564c3e6ccceea984a568bc0b1fda9d11c55f7c8c8d5c22c78949068\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Image ID: registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 12 Jun 2023 22:23:18 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dg26f (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-dg26f:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-3951/agnhost-primary-dqhr7 to 10.138.75.70\n Normal AddedInterface 4s multus Add eth0 [172.30.224.59/32] from k8s-pod-network\n Normal Pulled 4s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.43\" already present on machine\n Normal Created 3s kubelet Created container agnhost-primary\n Normal Started 3s kubelet Started container agnhost-primary\n" -Jun 12 22:23:21.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 describe rc agnhost-primary' -Jun 12 22:23:23.148: INFO: stderr: "" -Jun 12 22:23:23.148: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3951\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-primary-dqhr7\n" -Jun 12 22:23:23.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 describe service 
agnhost-primary' -Jun 12 22:23:24.250: INFO: stderr: "" -Jun 12 22:23:24.250: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3951\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 172.21.129.205\nIPs: 172.21.129.205\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 172.30.224.59:6379\nSession Affinity: None\nEvents: \n" -Jun 12 22:23:24.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 describe node 10.138.75.112' -Jun 12 22:23:25.655: INFO: stderr: "" -Jun 12 22:23:25.655: INFO: stdout: "Name: 10.138.75.112\nRoles: master,worker\nLabels: arch=amd64\n beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=b3c.4x16.encrypted\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=au-syd\n failure-domain.beta.kubernetes.io/zone=syd01\n ibm-cloud.kubernetes.io/encrypted-docker-data=true\n ibm-cloud.kubernetes.io/external-ip=168.1.52.66\n ibm-cloud.kubernetes.io/iaas-provider=softlayer\n ibm-cloud.kubernetes.io/internal-ip=10.138.75.112\n ibm-cloud.kubernetes.io/machine-type=b3c.4x16.encrypted\n ibm-cloud.kubernetes.io/os=REDHAT_8_64\n ibm-cloud.kubernetes.io/region=au-syd\n ibm-cloud.kubernetes.io/sgx-enabled=false\n ibm-cloud.kubernetes.io/worker-id=kube-ci3l4bss0lnb681k1pc0-kubee2epvg6-default-00000248\n ibm-cloud.kubernetes.io/worker-pool-id=ci3l4bss0lnb681k1pc0-9c469cb\n ibm-cloud.kubernetes.io/worker-pool-name=default\n ibm-cloud.kubernetes.io/worker-version=4.13.1_1521_openshift\n ibm-cloud.kubernetes.io/zone=syd01\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=10.138.75.112\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\n node-role.kubernetes.io/worker=\n node.kubernetes.io/instance-type=b3c.4x16.encrypted\n node.openshift.io/os_id=rhel\n privateVLAN=2723066\n publicVLAN=2723064\n topology.kubernetes.io/region=au-syd\n topology.kubernetes.io/zone=syd01\nAnnotations: projectcalico.org/IPv4Address: 10.138.75.112/26\n projectcalico.org/IPv4IPIPTunnelAddr: 172.30.161.64\nCreationTimestamp: Mon, 12 Jun 2023 17:40:12 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: 10.138.75.112\n AcquireTime: \n RenewTime: Mon, 12 Jun 2023 22:23:18 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Mon, 12 Jun 2023 17:53:36 +0000 Mon, 12 Jun 2023 17:53:36 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Mon, 12 Jun 2023 22:19:06 +0000 Mon, 12 Jun 2023 17:40:12 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 12 Jun 2023 22:19:06 +0000 Mon, 12 Jun 2023 17:40:12 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 12 Jun 2023 22:19:06 +0000 Mon, 12 Jun 2023 17:40:12 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 12 Jun 2023 22:19:06 +0000 Mon, 12 Jun 2023 17:54:09 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.138.75.112\n ExternalIP: 168.1.52.66\n Hostname: 10.138.75.112\nCapacity:\n cpu: 4\n ephemeral-storage: 102609848Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 16382396Ki\n pods: 110\nAllocatable:\n cpu: 3910m\n ephemeral-storage: 93913280025\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 13594044Ki\n pods: 110\nSystem Info:\n Machine ID: 
d2a27b32551248b19992e067fbb9ead9\n System UUID: c0aa31ba-abd8-4eff-27d5-68f68e13b6f9\n Boot ID: aa4affa7-d699-43c9-b205-a4521ec473d3\n Kernel Version: 4.18.0-477.13.1.el8_8.x86_64\n OS Image: Red Hat Enterprise Linux 8.8 (Ootpa)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: cri-o://1.26.3-7.rhaos4.13.gitec064c9.el8\n Kubelet Version: v1.26.3+b404935\n Kube-Proxy Version: v1.26.3+b404935\nPodCIDR: 172.30.1.0/24\nPodCIDRs: 172.30.1.0/24\nProviderID: ibm://fee034388aa6435883a1f720010ab3a2///ci3l4bss0lnb681k1pc0/kube-ci3l4bss0lnb681k1pc0-kubee2epvg6-default-00000248\nNon-terminated Pods: (41 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n calico-system calico-node-b9sdb 250m (6%) 0 (0%) 80Mi (0%) 0 (0%) 4h30m\n calico-system calico-typha-74d94b74f5-dc6td 250m (6%) 0 (0%) 80Mi (0%) 0 (0%) 4h30m\n ibm-system ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 5m (0%) 0 (0%) 10Mi (0%) 0 (0%) 4h24m\n kube-system ibm-keepalived-watcher-5hc6v 5m (0%) 0 (0%) 10Mi (0%) 0 (0%) 4h43m\n kube-system ibm-master-proxy-static-10.138.75.112 26m (0%) 300m (7%) 32001024 (0%) 512M (3%) 4h42m\n kube-system ibmcloud-block-storage-driver-5zqmj 50m (1%) 500m (12%) 100Mi (0%) 300Mi (2%) 4h43m\n openshift-cluster-node-tuning-operator tuned-phslc 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h24m\n openshift-cluster-storage-operator csi-snapshot-controller-7f8879b9ff-p456r 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h27m\n openshift-cluster-storage-operator csi-snapshot-webhook-7bd9594b6d-bp5dr 10m (0%) 0 (0%) 20Mi (0%) 0 (0%) 4h27m\n openshift-console console-5bf97c7949-w5sn5 10m (0%) 0 (0%) 100Mi (0%) 0 (0%) 4h22m\n openshift-console downloads-8b57f44bb-55ss5 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h28m\n openshift-dns dns-default-hpnqj 60m (1%) 0 (0%) 110Mi (0%) 0 (0%) 4h24m\n openshift-dns node-resolver-5st6j 5m (0%) 0 (0%) 21Mi (0%) 0 (0%) 4h24m\n openshift-image-registry image-registry-6c79bcf5c4-p7ss4 100m (2%) 0 (0%) 256Mi (1%) 0 (0%) 4h24m\n openshift-image-registry node-ca-qm7sb 10m (0%) 0 (0%) 10Mi (0%) 0 (0%) 4h24m\n openshift-ingress-canary ingress-canary-5qpcw 10m (0%) 0 (0%) 20Mi (0%) 0 (0%) 4h24m\n openshift-ingress router-default-7d454f944c-62qgz 100m (2%) 0 (0%) 256Mi (1%) 0 (0%) 4h24m\n openshift-kube-proxy openshift-kube-proxy-b9xs9 110m (2%) 0 (0%) 220Mi (1%) 0 (0%) 4h35m\n openshift-kube-storage-version-migrator migrator-cfb6c8f7c-vx2tr 10m (0%) 0 (0%) 200Mi (1%) 0 (0%) 4h27m\n openshift-marketplace certified-operators-vjs57 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 15m\n openshift-marketplace community-operators-fm8cx 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h26m\n openshift-marketplace redhat-marketplace-gp8xf 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 15m\n openshift-marketplace redhat-operators-pr47d 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 3h17m\n openshift-monitoring alertmanager-main-1 9m (0%) 0 (0%) 120Mi (0%) 0 (0%) 4h22m\n openshift-monitoring kube-state-metrics-6ccfb58dc4-rgnnh 4m (0%) 0 (0%) 110Mi (0%) 0 (0%) 4h23m\n openshift-monitoring node-exporter-r799t 9m (0%) 0 (0%) 47Mi (0%) 0 (0%) 4h23m\n openshift-monitoring openshift-state-metrics-7d7f8b4cf8-cdbr8 3m (0%) 0 (0%) 72Mi (0%) 0 (0%) 15m\n openshift-monitoring prometheus-adapter-7c58c77c58-xfd55 1m (0%) 0 (0%) 40Mi (0%) 0 (0%) 4h23m\n openshift-monitoring prometheus-k8s-0 75m (1%) 0 (0%) 1104Mi (8%) 0 (0%) 4h21m\n openshift-monitoring prometheus-operator-5d978dbf9c-ngk46 6m (0%) 0 (0%) 165Mi (1%) 0 (0%) 15m\n openshift-monitoring 
prometheus-operator-admission-webhook-5d679565bb-66wnf 5m (0%) 0 (0%) 30Mi (0%) 0 (0%) 4h24m\n openshift-monitoring telemeter-client-55c7b57d84-vprqh 3m (0%) 0 (0%) 70Mi (0%) 0 (0%) 15m\n openshift-monitoring thanos-querier-6497df7b9-djrsc 15m (0%) 0 (0%) 92Mi (0%) 0 (0%) 4h23m\n openshift-multus multus-additional-cni-plugins-zpr6c 10m (0%) 0 (0%) 10Mi (0%) 0 (0%) 4h35m\n openshift-multus multus-admission-controller-5894dd7875-7bk6g 20m (0%) 0 (0%) 70Mi (0%) 0 (0%) 15m\n openshift-multus multus-q452d 10m (0%) 0 (0%) 65Mi (0%) 0 (0%) 4h35m\n openshift-multus network-metrics-daemon-vx56x 20m (0%) 0 (0%) 120Mi (0%) 0 (0%) 4h35m\n openshift-network-diagnostics network-check-target-lfvfw 10m (0%) 0 (0%) 15Mi (0%) 0 (0%) 4h35m\n openshift-network-operator network-operator-5498bf7dc6-xv8r2 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h36m\n openshift-operator-lifecycle-manager packageserver-7f8bd8c95b-fgfhz 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h24m\n sonobuoy sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 104m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1301m (33%) 800m (20%)\n memory 4202003Ki (30%) 826572800 (5%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" -Jun 12 22:23:25.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 describe namespace kubectl-3951' -Jun 12 22:23:25.911: INFO: stderr: "" -Jun 12 22:23:25.912: INFO: stdout: "Name: kubectl-3951\nLabels: e2e-framework=kubectl\n e2e-run=252e5f2f-6715-440e-b971-87933460a116\n kubernetes.io/metadata.name=kubectl-3951\n pod-security.kubernetes.io/audit=privileged\n pod-security.kubernetes.io/audit-version=v1.24\n pod-security.kubernetes.io/enforce=baseline\n pod-security.kubernetes.io/warn=privileged\n pod-security.kubernetes.io/warn-version=v1.24\nAnnotations: openshift.io/sa.scc.mcs: s0:c67,c4\n openshift.io/sa.scc.supplemental-groups: 1004430000/10000\n openshift.io/sa.scc.uid-range: 1004430000/10000\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" -[AfterEach] [sig-cli] Kubectl client +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-3855 07/27/23 02:59:14.158 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 +STEP: Creating stateful set ss in namespace statefulset-3855 07/27/23 02:59:14.175 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3855 07/27/23 02:59:14.2 +Jul 27 02:59:14.211: INFO: Found 0 stateful pods, waiting for 1 +Jul 27 02:59:24.243: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 07/27/23 02:59:24.243 +Jul 27 02:59:24.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jul 27 02:59:24.479: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jul 27 02:59:24.479: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jul 27 02:59:24.479: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jul 27 02:59:24.493: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Jul 27 02:59:34.509: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jul 27 02:59:34.509: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 02:59:34.586: INFO: POD NODE PHASE GRACE CONDITIONS +Jul 27 02:59:34.586: INFO: ss-0 10.245.128.19 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC }] +Jul 27 02:59:34.586: INFO: +Jul 27 02:59:34.586: INFO: StatefulSet ss has not reached scale 3, at 1 +Jul 27 02:59:35.600: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975241169s +Jul 27 02:59:36.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961979373s +Jul 27 02:59:37.649: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.939036013s +Jul 27 02:59:38.663: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.913111122s +Jul 27 02:59:39.677: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.898279104s +Jul 27 02:59:40.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.88507961s +Jul 27 02:59:41.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.871104223s +Jul 27 02:59:42.718: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.858091109s +Jul 27 02:59:43.732: INFO: Verifying statefulset ss doesn't scale past 3 for another 843.198511ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3855 07/27/23 02:59:44.732 +Jul 27 02:59:44.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jul 27 02:59:44.983: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jul 27 02:59:44.983: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jul 27 02:59:44.983: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jul 27 02:59:44.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jul 27 02:59:45.172: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jul 27 02:59:45.172: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jul 27 02:59:45.172: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jul 27 02:59:45.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' 
+Jul 27 02:59:45.429: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jul 27 02:59:45.429: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jul 27 02:59:45.429: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jul 27 02:59:45.442: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false +Jul 27 02:59:55.460: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 02:59:55.460: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Jul 27 02:59:55.460: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod 07/27/23 02:59:55.46 +Jul 27 02:59:55.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jul 27 02:59:55.671: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jul 27 02:59:55.671: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jul 27 02:59:55.671: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jul 27 02:59:55.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jul 27 02:59:55.901: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jul 27 02:59:55.901: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jul 27 02:59:55.901: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jul 27 02:59:55.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jul 27 02:59:56.135: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jul 27 02:59:56.135: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jul 27 02:59:56.135: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jul 27 02:59:56.135: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 02:59:56.147: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Jul 27 03:00:06.180: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jul 27 03:00:06.180: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jul 27 03:00:06.180: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jul 27 03:00:06.226: INFO: POD NODE PHASE GRACE CONDITIONS +Jul 27 03:00:06.226: INFO: ss-0 10.245.128.19 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC }] +Jul 27 03:00:06.226: INFO: ss-1 10.245.128.17 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC }] +Jul 27 03:00:06.226: INFO: ss-2 10.245.128.18 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC }] +Jul 27 03:00:06.226: INFO: +Jul 27 03:00:06.226: INFO: StatefulSet ss has not reached scale 0, at 3 +Jul 27 03:00:07.255: INFO: POD NODE PHASE GRACE CONDITIONS +Jul 27 03:00:07.255: INFO: ss-0 10.245.128.19 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC }] +Jul 27 03:00:07.255: INFO: ss-1 10.245.128.17 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC }] +Jul 27 03:00:07.255: INFO: ss-2 10.245.128.18 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC }] +Jul 27 03:00:07.255: INFO: +Jul 27 03:00:07.255: INFO: StatefulSet ss has not reached scale 0, at 3 +Jul 27 03:00:08.268: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.955241744s +Jul 27 03:00:09.281: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.943107976s +Jul 27 03:00:10.293: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.929944525s +Jul 27 03:00:11.305: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.917551288s +Jul 27 03:00:12.324: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.904863288s +Jul 27 03:00:13.335: INFO: Verifying statefulset ss doesn't scale past 0 for 
another 2.88608083s +Jul 27 03:00:14.348: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.874196672s +Jul 27 03:00:15.360: INFO: Verifying statefulset ss doesn't scale past 0 for another 862.089647ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3855 07/27/23 03:00:16.361 +Jul 27 03:00:16.373: INFO: Scaling statefulset ss to 0 +Jul 27 03:00:16.410: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jul 27 03:00:16.422: INFO: Deleting all statefulset in ns statefulset-3855 +Jul 27 03:00:16.433: INFO: Scaling statefulset ss to 0 +Jul 27 03:00:16.470: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 03:00:16.480: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 -Jun 12 22:23:25.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client +Jul 27 03:00:16.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client +[DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-3951" for this suite. 06/12/23 22:23:25.923 +STEP: Destroying namespace "statefulset-3855" for this suite. 07/27/23 03:00:16.541 ------------------------------ -• [SLOW TEST] [11.173 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl describe - test/e2e/kubectl/kubectl.go:1270 - should check if kubectl describe prints relevant information for rc and pods [Conformance] - test/e2e/kubectl/kubectl.go:1276 +• [SLOW TEST] [62.479 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client + [BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:23:14.771 - Jun 12 22:23:14.771: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 22:23:14.774 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:14.829 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:14.837 - [BeforeEach] [sig-cli] Kubectl client + STEP: Creating a kubernetes client 07/27/23 02:59:14.081 + Jul 27 02:59:14.081: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename statefulset 07/27/23 02:59:14.082 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 02:59:14.133 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 02:59:14.144 + [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] - test/e2e/kubectl/kubectl.go:1276 - Jun 
12 22:23:14.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 create -f -' - Jun 12 22:23:16.524: INFO: stderr: "" - Jun 12 22:23:16.524: INFO: stdout: "replicationcontroller/agnhost-primary created\n" - Jun 12 22:23:16.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 create -f -' - Jun 12 22:23:17.789: INFO: stderr: "" - Jun 12 22:23:17.789: INFO: stdout: "service/agnhost-primary created\n" - STEP: Waiting for Agnhost primary to start. 06/12/23 22:23:17.789 - Jun 12 22:23:18.808: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 22:23:18.809: INFO: Found 0 / 1 - Jun 12 22:23:19.809: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 22:23:19.809: INFO: Found 1 / 1 - Jun 12 22:23:19.809: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 - Jun 12 22:23:19.823: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 22:23:19.823: INFO: ForEach: Found 1 pods from the filter. Now looping through them. - Jun 12 22:23:19.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 describe pod agnhost-primary-dqhr7' - Jun 12 22:23:21.193: INFO: stderr: "" - Jun 12 22:23:21.193: INFO: stdout: "Name: agnhost-primary-dqhr7\nNamespace: kubectl-3951\nPriority: 0\nService Account: default\nNode: 10.138.75.70/10.138.75.70\nStart Time: Mon, 12 Jun 2023 22:23:16 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: fd4688e08f57da7b8ce9f0ede956fbcc15a2b83de251d815752f228dea06c252\n cni.projectcalico.org/podIP: 172.30.224.59/32\n cni.projectcalico.org/podIPs: 172.30.224.59/32\n k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.30.224.59\"\n ],\n \"default\": true,\n \"dns\": {}\n }]\n openshift.io/scc: anyuid\nStatus: Running\nIP: 172.30.224.59\nIPs:\n IP: 172.30.224.59\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: cri-o://3b543dd5c564c3e6ccceea984a568bc0b1fda9d11c55f7c8c8d5c22c78949068\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Image ID: registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 12 Jun 2023 22:23:18 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dg26f (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-dg26f:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-3951/agnhost-primary-dqhr7 to 10.138.75.70\n Normal AddedInterface 4s multus Add eth0 [172.30.224.59/32] from k8s-pod-network\n Normal Pulled 4s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.43\" already present on machine\n Normal Created 3s kubelet Created container 
agnhost-primary\n Normal Started 3s kubelet Started container agnhost-primary\n" - Jun 12 22:23:21.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 describe rc agnhost-primary' - Jun 12 22:23:23.148: INFO: stderr: "" - Jun 12 22:23:23.148: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3951\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-primary-dqhr7\n" - Jun 12 22:23:23.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 describe service agnhost-primary' - Jun 12 22:23:24.250: INFO: stderr: "" - Jun 12 22:23:24.250: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3951\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 172.21.129.205\nIPs: 172.21.129.205\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 172.30.224.59:6379\nSession Affinity: None\nEvents: \n" - Jun 12 22:23:24.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 describe node 10.138.75.112' - Jun 12 22:23:25.655: INFO: stderr: "" - Jun 12 22:23:25.655: INFO: stdout: "Name: 10.138.75.112\nRoles: master,worker\nLabels: arch=amd64\n beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=b3c.4x16.encrypted\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=au-syd\n failure-domain.beta.kubernetes.io/zone=syd01\n ibm-cloud.kubernetes.io/encrypted-docker-data=true\n ibm-cloud.kubernetes.io/external-ip=168.1.52.66\n ibm-cloud.kubernetes.io/iaas-provider=softlayer\n ibm-cloud.kubernetes.io/internal-ip=10.138.75.112\n ibm-cloud.kubernetes.io/machine-type=b3c.4x16.encrypted\n ibm-cloud.kubernetes.io/os=REDHAT_8_64\n ibm-cloud.kubernetes.io/region=au-syd\n ibm-cloud.kubernetes.io/sgx-enabled=false\n ibm-cloud.kubernetes.io/worker-id=kube-ci3l4bss0lnb681k1pc0-kubee2epvg6-default-00000248\n ibm-cloud.kubernetes.io/worker-pool-id=ci3l4bss0lnb681k1pc0-9c469cb\n ibm-cloud.kubernetes.io/worker-pool-name=default\n ibm-cloud.kubernetes.io/worker-version=4.13.1_1521_openshift\n ibm-cloud.kubernetes.io/zone=syd01\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=10.138.75.112\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\n node-role.kubernetes.io/worker=\n node.kubernetes.io/instance-type=b3c.4x16.encrypted\n node.openshift.io/os_id=rhel\n privateVLAN=2723066\n publicVLAN=2723064\n topology.kubernetes.io/region=au-syd\n topology.kubernetes.io/zone=syd01\nAnnotations: projectcalico.org/IPv4Address: 10.138.75.112/26\n projectcalico.org/IPv4IPIPTunnelAddr: 172.30.161.64\nCreationTimestamp: Mon, 12 Jun 2023 17:40:12 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: 10.138.75.112\n AcquireTime: \n RenewTime: Mon, 12 Jun 2023 22:23:18 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable 
False Mon, 12 Jun 2023 17:53:36 +0000 Mon, 12 Jun 2023 17:53:36 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Mon, 12 Jun 2023 22:19:06 +0000 Mon, 12 Jun 2023 17:40:12 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 12 Jun 2023 22:19:06 +0000 Mon, 12 Jun 2023 17:40:12 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 12 Jun 2023 22:19:06 +0000 Mon, 12 Jun 2023 17:40:12 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 12 Jun 2023 22:19:06 +0000 Mon, 12 Jun 2023 17:54:09 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.138.75.112\n ExternalIP: 168.1.52.66\n Hostname: 10.138.75.112\nCapacity:\n cpu: 4\n ephemeral-storage: 102609848Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 16382396Ki\n pods: 110\nAllocatable:\n cpu: 3910m\n ephemeral-storage: 93913280025\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 13594044Ki\n pods: 110\nSystem Info:\n Machine ID: d2a27b32551248b19992e067fbb9ead9\n System UUID: c0aa31ba-abd8-4eff-27d5-68f68e13b6f9\n Boot ID: aa4affa7-d699-43c9-b205-a4521ec473d3\n Kernel Version: 4.18.0-477.13.1.el8_8.x86_64\n OS Image: Red Hat Enterprise Linux 8.8 (Ootpa)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: cri-o://1.26.3-7.rhaos4.13.gitec064c9.el8\n Kubelet Version: v1.26.3+b404935\n Kube-Proxy Version: v1.26.3+b404935\nPodCIDR: 172.30.1.0/24\nPodCIDRs: 172.30.1.0/24\nProviderID: ibm://fee034388aa6435883a1f720010ab3a2///ci3l4bss0lnb681k1pc0/kube-ci3l4bss0lnb681k1pc0-kubee2epvg6-default-00000248\nNon-terminated Pods: (41 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n calico-system calico-node-b9sdb 250m (6%) 0 (0%) 80Mi (0%) 0 (0%) 4h30m\n calico-system calico-typha-74d94b74f5-dc6td 250m (6%) 0 (0%) 80Mi (0%) 0 (0%) 4h30m\n ibm-system ibm-cloud-provider-ip-168-1-198-197-75947fc545-gxzn7 5m (0%) 0 (0%) 10Mi (0%) 0 (0%) 4h24m\n kube-system ibm-keepalived-watcher-5hc6v 5m (0%) 0 (0%) 10Mi (0%) 0 (0%) 4h43m\n kube-system ibm-master-proxy-static-10.138.75.112 26m (0%) 300m (7%) 32001024 (0%) 512M (3%) 4h42m\n kube-system ibmcloud-block-storage-driver-5zqmj 50m (1%) 500m (12%) 100Mi (0%) 300Mi (2%) 4h43m\n openshift-cluster-node-tuning-operator tuned-phslc 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h24m\n openshift-cluster-storage-operator csi-snapshot-controller-7f8879b9ff-p456r 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h27m\n openshift-cluster-storage-operator csi-snapshot-webhook-7bd9594b6d-bp5dr 10m (0%) 0 (0%) 20Mi (0%) 0 (0%) 4h27m\n openshift-console console-5bf97c7949-w5sn5 10m (0%) 0 (0%) 100Mi (0%) 0 (0%) 4h22m\n openshift-console downloads-8b57f44bb-55ss5 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h28m\n openshift-dns dns-default-hpnqj 60m (1%) 0 (0%) 110Mi (0%) 0 (0%) 4h24m\n openshift-dns node-resolver-5st6j 5m (0%) 0 (0%) 21Mi (0%) 0 (0%) 4h24m\n openshift-image-registry image-registry-6c79bcf5c4-p7ss4 100m (2%) 0 (0%) 256Mi (1%) 0 (0%) 4h24m\n openshift-image-registry node-ca-qm7sb 10m (0%) 0 (0%) 10Mi (0%) 0 (0%) 4h24m\n openshift-ingress-canary ingress-canary-5qpcw 10m (0%) 0 (0%) 20Mi (0%) 0 (0%) 4h24m\n openshift-ingress router-default-7d454f944c-62qgz 100m (2%) 0 (0%) 256Mi (1%) 0 (0%) 4h24m\n openshift-kube-proxy openshift-kube-proxy-b9xs9 110m (2%) 0 (0%) 220Mi (1%) 0 (0%) 4h35m\n openshift-kube-storage-version-migrator migrator-cfb6c8f7c-vx2tr 10m (0%) 0 (0%) 
200Mi (1%) 0 (0%) 4h27m\n openshift-marketplace certified-operators-vjs57 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 15m\n openshift-marketplace community-operators-fm8cx 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h26m\n openshift-marketplace redhat-marketplace-gp8xf 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 15m\n openshift-marketplace redhat-operators-pr47d 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 3h17m\n openshift-monitoring alertmanager-main-1 9m (0%) 0 (0%) 120Mi (0%) 0 (0%) 4h22m\n openshift-monitoring kube-state-metrics-6ccfb58dc4-rgnnh 4m (0%) 0 (0%) 110Mi (0%) 0 (0%) 4h23m\n openshift-monitoring node-exporter-r799t 9m (0%) 0 (0%) 47Mi (0%) 0 (0%) 4h23m\n openshift-monitoring openshift-state-metrics-7d7f8b4cf8-cdbr8 3m (0%) 0 (0%) 72Mi (0%) 0 (0%) 15m\n openshift-monitoring prometheus-adapter-7c58c77c58-xfd55 1m (0%) 0 (0%) 40Mi (0%) 0 (0%) 4h23m\n openshift-monitoring prometheus-k8s-0 75m (1%) 0 (0%) 1104Mi (8%) 0 (0%) 4h21m\n openshift-monitoring prometheus-operator-5d978dbf9c-ngk46 6m (0%) 0 (0%) 165Mi (1%) 0 (0%) 15m\n openshift-monitoring prometheus-operator-admission-webhook-5d679565bb-66wnf 5m (0%) 0 (0%) 30Mi (0%) 0 (0%) 4h24m\n openshift-monitoring telemeter-client-55c7b57d84-vprqh 3m (0%) 0 (0%) 70Mi (0%) 0 (0%) 15m\n openshift-monitoring thanos-querier-6497df7b9-djrsc 15m (0%) 0 (0%) 92Mi (0%) 0 (0%) 4h23m\n openshift-multus multus-additional-cni-plugins-zpr6c 10m (0%) 0 (0%) 10Mi (0%) 0 (0%) 4h35m\n openshift-multus multus-admission-controller-5894dd7875-7bk6g 20m (0%) 0 (0%) 70Mi (0%) 0 (0%) 15m\n openshift-multus multus-q452d 10m (0%) 0 (0%) 65Mi (0%) 0 (0%) 4h35m\n openshift-multus network-metrics-daemon-vx56x 20m (0%) 0 (0%) 120Mi (0%) 0 (0%) 4h35m\n openshift-network-diagnostics network-check-target-lfvfw 10m (0%) 0 (0%) 15Mi (0%) 0 (0%) 4h35m\n openshift-network-operator network-operator-5498bf7dc6-xv8r2 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h36m\n openshift-operator-lifecycle-manager packageserver-7f8bd8c95b-fgfhz 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 4h24m\n sonobuoy sonobuoy-systemd-logs-daemon-set-129e475d543c42cc-xk7f7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 104m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1301m (33%) 800m (20%)\n memory 4202003Ki (30%) 826572800 (5%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" - Jun 12 22:23:25.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-3951 describe namespace kubectl-3951' - Jun 12 22:23:25.911: INFO: stderr: "" - Jun 12 22:23:25.912: INFO: stdout: "Name: kubectl-3951\nLabels: e2e-framework=kubectl\n e2e-run=252e5f2f-6715-440e-b971-87933460a116\n kubernetes.io/metadata.name=kubectl-3951\n pod-security.kubernetes.io/audit=privileged\n pod-security.kubernetes.io/audit-version=v1.24\n pod-security.kubernetes.io/enforce=baseline\n pod-security.kubernetes.io/warn=privileged\n pod-security.kubernetes.io/warn-version=v1.24\nAnnotations: openshift.io/sa.scc.mcs: s0:c67,c4\n openshift.io/sa.scc.supplemental-groups: 1004430000/10000\n openshift.io/sa.scc.uid-range: 1004430000/10000\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" - [AfterEach] [sig-cli] Kubectl client + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-3855 07/27/23 02:59:14.158 + [It] Burst scaling should run to 
completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 + STEP: Creating stateful set ss in namespace statefulset-3855 07/27/23 02:59:14.175 + STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3855 07/27/23 02:59:14.2 + Jul 27 02:59:14.211: INFO: Found 0 stateful pods, waiting for 1 + Jul 27 02:59:24.243: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 07/27/23 02:59:24.243 + Jul 27 02:59:24.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jul 27 02:59:24.479: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jul 27 02:59:24.479: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jul 27 02:59:24.479: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jul 27 02:59:24.493: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true + Jul 27 02:59:34.509: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Jul 27 02:59:34.509: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 02:59:34.586: INFO: POD NODE PHASE GRACE CONDITIONS + Jul 27 02:59:34.586: INFO: ss-0 10.245.128.19 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC }] + Jul 27 02:59:34.586: INFO: + Jul 27 02:59:34.586: INFO: StatefulSet ss has not reached scale 3, at 1 + Jul 27 02:59:35.600: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975241169s + Jul 27 02:59:36.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961979373s + Jul 27 02:59:37.649: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.939036013s + Jul 27 02:59:38.663: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.913111122s + Jul 27 02:59:39.677: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.898279104s + Jul 27 02:59:40.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.88507961s + Jul 27 02:59:41.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.871104223s + Jul 27 02:59:42.718: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.858091109s + Jul 27 02:59:43.732: INFO: Verifying statefulset ss doesn't scale past 3 for another 843.198511ms + STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3855 07/27/23 02:59:44.732 + Jul 27 02:59:44.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jul 27 02:59:44.983: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jul 27 02:59:44.983: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" + Jul 27 02:59:44.983: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jul 27 02:59:44.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jul 27 02:59:45.172: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" + Jul 27 02:59:45.172: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jul 27 02:59:45.172: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jul 27 02:59:45.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jul 27 02:59:45.429: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" + Jul 27 02:59:45.429: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jul 27 02:59:45.429: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jul 27 02:59:45.442: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false + Jul 27 02:59:55.460: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 02:59:55.460: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true + Jul 27 02:59:55.460: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Scale down will not halt with unhealthy stateful pod 07/27/23 02:59:55.46 + Jul 27 02:59:55.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jul 27 02:59:55.671: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jul 27 02:59:55.671: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jul 27 02:59:55.671: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jul 27 02:59:55.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jul 27 02:59:55.901: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jul 27 02:59:55.901: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jul 27 02:59:55.901: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jul 27 02:59:55.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1337358882 --namespace=statefulset-3855 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jul 27 02:59:56.135: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jul 27 02:59:56.135: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jul 27 02:59:56.135: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jul 27 02:59:56.135: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 02:59:56.147: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 + Jul 27 03:00:06.180: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Jul 27 03:00:06.180: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false + Jul 27 03:00:06.180: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false + Jul 27 03:00:06.226: INFO: POD NODE PHASE GRACE CONDITIONS + Jul 27 03:00:06.226: INFO: ss-0 10.245.128.19 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC }] + Jul 27 03:00:06.226: INFO: ss-1 10.245.128.17 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC }] + Jul 27 03:00:06.226: INFO: ss-2 10.245.128.18 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC }] + Jul 27 03:00:06.226: INFO: + Jul 27 03:00:06.226: INFO: StatefulSet ss has not reached scale 0, at 3 + Jul 27 03:00:07.255: INFO: POD NODE PHASE GRACE CONDITIONS + Jul 27 03:00:07.255: INFO: ss-0 10.245.128.19 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:14 +0000 UTC }] + Jul 27 03:00:07.255: INFO: ss-1 10.245.128.17 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC }] + Jul 27 03:00:07.255: INFO: ss-2 10.245.128.18 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 02:59:34 +0000 UTC }] + Jul 27 03:00:07.255: INFO: + Jul 27 03:00:07.255: INFO: StatefulSet ss has not reached scale 0, at 3 + Jul 27 03:00:08.268: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.955241744s + Jul 27 03:00:09.281: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.943107976s + Jul 27 03:00:10.293: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.929944525s + Jul 27 03:00:11.305: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.917551288s + Jul 27 03:00:12.324: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.904863288s + Jul 27 03:00:13.335: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.88608083s + Jul 27 03:00:14.348: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.874196672s + Jul 27 03:00:15.360: INFO: Verifying statefulset ss doesn't scale past 0 for another 862.089647ms + STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3855 07/27/23 03:00:16.361 + Jul 27 03:00:16.373: INFO: Scaling statefulset ss to 0 + Jul 27 03:00:16.410: INFO: Waiting for statefulset status.replicas updated to 0 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jul 27 03:00:16.422: INFO: Deleting all statefulset in ns statefulset-3855 + Jul 27 03:00:16.433: INFO: Scaling statefulset ss to 0 + Jul 27 03:00:16.470: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 03:00:16.480: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 - Jun 12 22:23:25.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client + Jul 27 03:00:16.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client + [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-3951" for this suite. 06/12/23 22:23:25.923 + STEP: Destroying namespace "statefulset-3855" for this suite. 
07/27/23 03:00:16.541 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSS ------------------------------ -[sig-api-machinery] ResourceQuota - should manage the lifecycle of a ResourceQuota [Conformance] - test/e2e/apimachinery/resource_quota.go:943 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-node] Lease + lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 +[BeforeEach] [sig-node] Lease set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:23:25.949 -Jun 12 22:23:25.949: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 22:23:25.952 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:26.014 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:26.023 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 03:00:16.561 +Jul 27 03:00:16.561: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename lease-test 07/27/23 03:00:16.562 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:16.603 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:16.615 +[BeforeEach] [sig-node] Lease test/e2e/framework/metrics/init/init.go:31 -[It] should manage the lifecycle of a ResourceQuota [Conformance] - test/e2e/apimachinery/resource_quota.go:943 -STEP: Creating a ResourceQuota 06/12/23 22:23:26.056 -STEP: Getting a ResourceQuota 06/12/23 22:23:26.073 -STEP: Listing all ResourceQuotas with LabelSelector 06/12/23 22:23:26.103 -STEP: Patching the ResourceQuota 06/12/23 22:23:26.115 -STEP: Deleting a Collection of ResourceQuotas 06/12/23 22:23:26.13 -STEP: Verifying the deleted ResourceQuota 06/12/23 22:23:26.161 -[AfterEach] [sig-api-machinery] ResourceQuota +[It] lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 +[AfterEach] [sig-node] Lease test/e2e/framework/node/init/init.go:32 -Jun 12 22:23:26.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 03:00:16.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Lease test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-node] Lease dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-node] Lease tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-4173" for this suite. 06/12/23 22:23:26.182 +STEP: Destroying namespace "lease-test-5723" for this suite. 
07/27/23 03:00:16.83 ------------------------------ -• [0.254 seconds] -[sig-api-machinery] ResourceQuota -test/e2e/apimachinery/framework.go:23 - should manage the lifecycle of a ResourceQuota [Conformance] - test/e2e/apimachinery/resource_quota.go:943 +• [0.289 seconds] +[sig-node] Lease +test/e2e/common/node/framework.go:23 + lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-node] Lease set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:23:25.949 - Jun 12 22:23:25.949: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 22:23:25.952 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:26.014 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:26.023 - [BeforeEach] [sig-api-machinery] ResourceQuota + STEP: Creating a kubernetes client 07/27/23 03:00:16.561 + Jul 27 03:00:16.561: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename lease-test 07/27/23 03:00:16.562 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:16.603 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:16.615 + [BeforeEach] [sig-node] Lease test/e2e/framework/metrics/init/init.go:31 - [It] should manage the lifecycle of a ResourceQuota [Conformance] - test/e2e/apimachinery/resource_quota.go:943 - STEP: Creating a ResourceQuota 06/12/23 22:23:26.056 - STEP: Getting a ResourceQuota 06/12/23 22:23:26.073 - STEP: Listing all ResourceQuotas with LabelSelector 06/12/23 22:23:26.103 - STEP: Patching the ResourceQuota 06/12/23 22:23:26.115 - STEP: Deleting a Collection of ResourceQuotas 06/12/23 22:23:26.13 - STEP: Verifying the deleted ResourceQuota 06/12/23 22:23:26.161 - [AfterEach] [sig-api-machinery] ResourceQuota + [It] lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 + [AfterEach] [sig-node] Lease test/e2e/framework/node/init/init.go:32 - Jun 12 22:23:26.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + Jul 27 03:00:16.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Lease test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-node] Lease dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-node] Lease tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-4173" for this suite. 06/12/23 22:23:26.182 + STEP: Destroying namespace "lease-test-5723" for this suite. 
07/27/23 03:00:16.83 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] - should list, patch and delete a collection of StatefulSets [Conformance] - test/e2e/apps/statefulset.go:908 + Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 [BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:23:26.214 -Jun 12 22:23:26.214: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename statefulset 06/12/23 22:23:26.215 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:26.274 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:26.299 +STEP: Creating a kubernetes client 07/27/23 03:00:16.852 +Jul 27 03:00:16.852: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename statefulset 07/27/23 03:00:16.853 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:16.898 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:16.909 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 -STEP: Creating service test in namespace statefulset-1529 06/12/23 22:23:26.312 -[It] should list, patch and delete a collection of StatefulSets [Conformance] - test/e2e/apps/statefulset.go:908 -Jun 12 22:23:26.402: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Pending - Ready=false -Jun 12 22:23:36.410: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true -STEP: patching the StatefulSet 06/12/23 22:23:36.427 -W0612 22:23:36.464538 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" -Jun 12 22:23:36.481: INFO: Found 1 stateful pods, waiting for 2 -Jun 12 22:23:46.496: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true -Jun 12 22:23:46.496: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true -STEP: Listing all StatefulSets 06/12/23 22:23:46.519 -STEP: Delete all of the StatefulSets 06/12/23 22:23:46.532 -STEP: Verify that StatefulSets have been deleted 06/12/23 22:23:46.581 +STEP: Creating service test in namespace statefulset-8458 07/27/23 03:00:16.921 +[It] Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 +STEP: Looking for a node to schedule stateful set and pod 07/27/23 03:00:16.936 +STEP: Creating pod with conflicting port in namespace statefulset-8458 07/27/23 03:00:16.961 +STEP: Waiting until pod test-pod will start running in namespace statefulset-8458 07/27/23 03:00:16.995 +Jul 27 03:00:16.995: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-8458" to be "running" +Jul 27 03:00:17.007: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 12.001267ms +Jul 27 03:00:19.026: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.030475606s +Jul 27 03:00:19.026: INFO: Pod "test-pod" satisfied condition "running" +STEP: Creating statefulset with conflicting port in namespace statefulset-8458 07/27/23 03:00:19.026 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8458 07/27/23 03:00:19.045 +Jul 27 03:00:19.079: INFO: Observed stateful pod in namespace: statefulset-8458, name: ss-0, uid: 666b7a6b-732c-45d2-8d03-1c1673c50a97, status phase: Pending. Waiting for statefulset controller to delete. +Jul 27 03:00:19.106: INFO: Observed stateful pod in namespace: statefulset-8458, name: ss-0, uid: 666b7a6b-732c-45d2-8d03-1c1673c50a97, status phase: Failed. Waiting for statefulset controller to delete. +Jul 27 03:00:19.122: INFO: Observed stateful pod in namespace: statefulset-8458, name: ss-0, uid: 666b7a6b-732c-45d2-8d03-1c1673c50a97, status phase: Failed. Waiting for statefulset controller to delete. +Jul 27 03:00:19.139: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8458 +STEP: Removing pod with conflicting port in namespace statefulset-8458 07/27/23 03:00:19.139 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8458 and will be in running state 07/27/23 03:00:19.181 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 -Jun 12 22:23:46.613: INFO: Deleting all statefulset in ns statefulset-1529 +Jul 27 03:00:23.234: INFO: Deleting all statefulset in ns statefulset-8458 +Jul 27 03:00:23.247: INFO: Scaling statefulset ss to 0 +Jul 27 03:00:33.309: INFO: Waiting for statefulset status.replicas updated to 0 +Jul 27 03:00:33.320: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 -Jun 12 22:23:46.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 03:00:33.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 -STEP: Destroying namespace "statefulset-1529" for this suite. 06/12/23 22:23:46.736 +STEP: Destroying namespace "statefulset-8458" for this suite. 
07/27/23 03:00:33.389 ------------------------------ -• [SLOW TEST] [20.576 seconds] +• [SLOW TEST] [16.561 seconds] [sig-apps] StatefulSet test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:103 - should list, patch and delete a collection of StatefulSets [Conformance] - test/e2e/apps/statefulset.go:908 + Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:23:26.214 - Jun 12 22:23:26.214: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename statefulset 06/12/23 22:23:26.215 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:26.274 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:26.299 + STEP: Creating a kubernetes client 07/27/23 03:00:16.852 + Jul 27 03:00:16.852: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename statefulset 07/27/23 03:00:16.853 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:16.898 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:16.909 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 - STEP: Creating service test in namespace statefulset-1529 06/12/23 22:23:26.312 - [It] should list, patch and delete a collection of StatefulSets [Conformance] - test/e2e/apps/statefulset.go:908 - Jun 12 22:23:26.402: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Pending - Ready=false - Jun 12 22:23:36.410: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true - STEP: patching the StatefulSet 06/12/23 22:23:36.427 - W0612 22:23:36.464538 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" - Jun 12 22:23:36.481: INFO: Found 1 stateful pods, waiting for 2 - Jun 12 22:23:46.496: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true - Jun 12 22:23:46.496: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true - STEP: Listing all StatefulSets 06/12/23 22:23:46.519 - STEP: Delete all of the StatefulSets 06/12/23 22:23:46.532 - STEP: Verify that StatefulSets have been deleted 06/12/23 22:23:46.581 + STEP: Creating service test in namespace statefulset-8458 07/27/23 03:00:16.921 + [It] Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 + STEP: Looking for a node to schedule stateful set and pod 07/27/23 03:00:16.936 + STEP: Creating pod with conflicting port in namespace statefulset-8458 07/27/23 03:00:16.961 + STEP: Waiting until pod test-pod will start running in namespace statefulset-8458 07/27/23 03:00:16.995 + Jul 27 03:00:16.995: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-8458" to be "running" + Jul 27 03:00:17.007: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 12.001267ms + Jul 27 03:00:19.026: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.030475606s + Jul 27 03:00:19.026: INFO: Pod "test-pod" satisfied condition "running" + STEP: Creating statefulset with conflicting port in namespace statefulset-8458 07/27/23 03:00:19.026 + STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8458 07/27/23 03:00:19.045 + Jul 27 03:00:19.079: INFO: Observed stateful pod in namespace: statefulset-8458, name: ss-0, uid: 666b7a6b-732c-45d2-8d03-1c1673c50a97, status phase: Pending. Waiting for statefulset controller to delete. + Jul 27 03:00:19.106: INFO: Observed stateful pod in namespace: statefulset-8458, name: ss-0, uid: 666b7a6b-732c-45d2-8d03-1c1673c50a97, status phase: Failed. Waiting for statefulset controller to delete. + Jul 27 03:00:19.122: INFO: Observed stateful pod in namespace: statefulset-8458, name: ss-0, uid: 666b7a6b-732c-45d2-8d03-1c1673c50a97, status phase: Failed. Waiting for statefulset controller to delete. + Jul 27 03:00:19.139: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8458 + STEP: Removing pod with conflicting port in namespace statefulset-8458 07/27/23 03:00:19.139 + STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8458 and will be in running state 07/27/23 03:00:19.181 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 - Jun 12 22:23:46.613: INFO: Deleting all statefulset in ns statefulset-1529 + Jul 27 03:00:23.234: INFO: Deleting all statefulset in ns statefulset-8458 + Jul 27 03:00:23.247: INFO: Scaling statefulset ss to 0 + Jul 27 03:00:33.309: INFO: Waiting for statefulset status.replicas updated to 0 + Jul 27 03:00:33.320: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 - Jun 12 22:23:46.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 03:00:33.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 - STEP: Destroying namespace "statefulset-1529" for this suite. 06/12/23 22:23:46.736 + STEP: Destroying namespace "statefulset-8458" for this suite. 
07/27/23 03:00:33.389 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Secrets - should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:125 -[BeforeEach] [sig-storage] Secrets +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 +[BeforeEach] [sig-node] Container Runtime set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:23:46.804 -Jun 12 22:23:46.804: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 22:23:46.807 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:46.868 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:46.922 -[BeforeEach] [sig-storage] Secrets +STEP: Creating a kubernetes client 07/27/23 03:00:33.413 +Jul 27 03:00:33.414: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-runtime 07/27/23 03:00:33.414 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:33.457 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:33.471 +[BeforeEach] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:125 -STEP: Creating secret with name secret-test-db17e811-6429-4dd9-9dba-045703ea9780 06/12/23 22:23:47.004 -STEP: Creating a pod to test consume secrets 06/12/23 22:23:47.081 -Jun 12 22:23:47.295: INFO: Waiting up to 5m0s for pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13" in namespace "secrets-6973" to be "Succeeded or Failed" -Jun 12 22:23:47.356: INFO: Pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13": Phase="Pending", Reason="", readiness=false. Elapsed: 59.975288ms -Jun 12 22:23:49.388: INFO: Pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092523224s -Jun 12 22:23:51.363: INFO: Pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067719508s -Jun 12 22:23:53.366: INFO: Pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.069985561s -STEP: Saw pod success 06/12/23 22:23:53.366 -Jun 12 22:23:53.366: INFO: Pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13" satisfied condition "Succeeded or Failed" -Jun 12 22:23:53.389: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13 container secret-volume-test: -STEP: delete the pod 06/12/23 22:23:53.421 -Jun 12 22:23:53.438: INFO: Waiting for pod pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13 to disappear -Jun 12 22:23:53.443: INFO: Pod pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13 no longer exists -[AfterEach] [sig-storage] Secrets +[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 +STEP: create the container 07/27/23 03:00:33.483 +STEP: wait for the container to reach Failed 07/27/23 03:00:33.517 +STEP: get the container status 07/27/23 03:00:37.614 +STEP: the container should be terminated 07/27/23 03:00:37.631 +STEP: the termination message should be set 07/27/23 03:00:37.631 +Jul 27 03:00:37.631: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container 07/27/23 03:00:37.631 +[AfterEach] [sig-node] Container Runtime test/e2e/framework/node/init/init.go:32 -Jun 12 22:23:53.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Secrets +Jul 27 03:00:37.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-node] Container Runtime dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-node] Container Runtime tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-6973" for this suite. 06/12/23 22:23:53.486 +STEP: Destroying namespace "container-runtime-8980" for this suite. 
07/27/23 03:00:37.77 ------------------------------ -• [SLOW TEST] [6.720 seconds] -[sig-storage] Secrets -test/e2e/common/storage/framework.go:23 - should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:125 +• [4.377 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Secrets + [BeforeEach] [sig-node] Container Runtime set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:23:46.804 - Jun 12 22:23:46.804: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 22:23:46.807 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:46.868 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:46.922 - [BeforeEach] [sig-storage] Secrets + STEP: Creating a kubernetes client 07/27/23 03:00:33.413 + Jul 27 03:00:33.414: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-runtime 07/27/23 03:00:33.414 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:33.457 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:33.471 + [BeforeEach] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:125 - STEP: Creating secret with name secret-test-db17e811-6429-4dd9-9dba-045703ea9780 06/12/23 22:23:47.004 - STEP: Creating a pod to test consume secrets 06/12/23 22:23:47.081 - Jun 12 22:23:47.295: INFO: Waiting up to 5m0s for pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13" in namespace "secrets-6973" to be "Succeeded or Failed" - Jun 12 22:23:47.356: INFO: Pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13": Phase="Pending", Reason="", readiness=false. Elapsed: 59.975288ms - Jun 12 22:23:49.388: INFO: Pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092523224s - Jun 12 22:23:51.363: INFO: Pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067719508s - Jun 12 22:23:53.366: INFO: Pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.069985561s - STEP: Saw pod success 06/12/23 22:23:53.366 - Jun 12 22:23:53.366: INFO: Pod "pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13" satisfied condition "Succeeded or Failed" - Jun 12 22:23:53.389: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13 container secret-volume-test: - STEP: delete the pod 06/12/23 22:23:53.421 - Jun 12 22:23:53.438: INFO: Waiting for pod pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13 to disappear - Jun 12 22:23:53.443: INFO: Pod pod-secrets-6ff4c84b-b7b5-45e0-acb0-f1cd6bd0fc13 no longer exists - [AfterEach] [sig-storage] Secrets + [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 + STEP: create the container 07/27/23 03:00:33.483 + STEP: wait for the container to reach Failed 07/27/23 03:00:33.517 + STEP: get the container status 07/27/23 03:00:37.614 + STEP: the container should be terminated 07/27/23 03:00:37.631 + STEP: the termination message should be set 07/27/23 03:00:37.631 + Jul 27 03:00:37.631: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- + STEP: delete the container 07/27/23 03:00:37.631 + [AfterEach] [sig-node] Container Runtime test/e2e/framework/node/init/init.go:32 - Jun 12 22:23:53.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Secrets + Jul 27 03:00:37.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-node] Container Runtime dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-node] Container Runtime tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-6973" for this suite. 06/12/23 22:23:53.486 + STEP: Destroying namespace "container-runtime-8980" for this suite. 
07/27/23 03:00:37.77 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSS +SSSSSSSSSSSSS ------------------------------ -[sig-cli] Kubectl client Kubectl patch - should add annotations for pods in rc [Conformance] - test/e2e/kubectl/kubectl.go:1652 -[BeforeEach] [sig-cli] Kubectl client +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 +[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:23:53.526 -Jun 12 22:23:53.526: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename kubectl 06/12/23 22:23:53.529 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:53.598 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:53.631 -[BeforeEach] [sig-cli] Kubectl client +STEP: Creating a kubernetes client 07/27/23 03:00:37.791 +Jul 27 03:00:37.791: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename daemonsets 07/27/23 03:00:37.792 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:37.834 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:37.846 +[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 -[It] should add annotations for pods in rc [Conformance] - test/e2e/kubectl/kubectl.go:1652 -STEP: creating Agnhost RC 06/12/23 22:23:53.64 -Jun 12 22:23:53.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-6519 create -f -' -Jun 12 22:24:00.529: INFO: stderr: "" -Jun 12 22:24:00.529: INFO: stdout: "replicationcontroller/agnhost-primary created\n" -STEP: Waiting for Agnhost primary to start. 06/12/23 22:24:00.529 -Jun 12 22:24:01.543: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 22:24:01.543: INFO: Found 0 / 1 -Jun 12 22:24:02.547: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 22:24:02.547: INFO: Found 1 / 1 -Jun 12 22:24:02.547: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 -STEP: patching all pods 06/12/23 22:24:02.547 -Jun 12 22:24:02.554: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 22:24:02.554: INFO: ForEach: Found 1 pods from the filter. Now looping through them. -Jun 12 22:24:02.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-6519 patch pod agnhost-primary-p6hp6 -p {"metadata":{"annotations":{"x":"y"}}}' -Jun 12 22:24:02.833: INFO: stderr: "" -Jun 12 22:24:02.833: INFO: stdout: "pod/agnhost-primary-p6hp6 patched\n" -STEP: checking annotations 06/12/23 22:24:02.833 -Jun 12 22:24:02.848: INFO: Selector matched 1 pods for map[app:agnhost] -Jun 12 22:24:02.848: INFO: ForEach: Found 1 pods from the filter. Now looping through them. -[AfterEach] [sig-cli] Kubectl client - test/e2e/framework/node/init/init.go:32 -Jun 12 22:24:02.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-cli] Kubectl client - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-cli] Kubectl client - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-cli] Kubectl client - tear down framework | framework.go:193 -STEP: Destroying namespace "kubectl-6519" for this suite. 
06/12/23 22:24:02.867 ------------------------------- -• [SLOW TEST] [9.362 seconds] -[sig-cli] Kubectl client -test/e2e/kubectl/framework.go:23 - Kubectl patch - test/e2e/kubectl/kubectl.go:1646 - should add annotations for pods in rc [Conformance] - test/e2e/kubectl/kubectl.go:1652 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 +STEP: Creating simple DaemonSet "daemon-set" 07/27/23 03:00:37.977 +STEP: Check that daemon pods launch on every node of the cluster. 07/27/23 03:00:37.995 +Jul 27 03:00:38.022: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 03:00:38.022: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 03:00:39.070: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 03:00:39.071: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 03:00:40.053: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Jul 27 03:00:40.053: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 +Jul 27 03:00:41.051: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jul 27 03:00:41.051: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Stop a daemon pod, check that the daemon pod is revived. 07/27/23 03:00:41.067 +Jul 27 03:00:41.135: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 03:00:41.135: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 03:00:42.186: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 03:00:42.208: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 03:00:43.165: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 03:00:43.165: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 03:00:44.165: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 03:00:44.165: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 03:00:45.166: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 03:00:45.166: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 03:00:46.167: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jul 27 03:00:46.167: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 07/27/23 03:00:46.178 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2796, will wait for the garbage collector to delete the pods 07/27/23 03:00:46.178 +Jul 27 03:00:46.261: INFO: Deleting DaemonSet.extensions daemon-set took: 21.523385ms +Jul 27 03:00:46.362: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.086641ms +Jul 27 03:00:48.974: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 03:00:48.974: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jul 27 03:00:48.987: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"126498"},"items":null} - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-cli] Kubectl client - set up framework | framework.go:178 - STEP: 
Creating a kubernetes client 06/12/23 22:23:53.526 - Jun 12 22:23:53.526: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename kubectl 06/12/23 22:23:53.529 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:23:53.598 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:23:53.631 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-cli] Kubectl client - test/e2e/kubectl/kubectl.go:274 - [It] should add annotations for pods in rc [Conformance] - test/e2e/kubectl/kubectl.go:1652 - STEP: creating Agnhost RC 06/12/23 22:23:53.64 - Jun 12 22:23:53.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-6519 create -f -' - Jun 12 22:24:00.529: INFO: stderr: "" - Jun 12 22:24:00.529: INFO: stdout: "replicationcontroller/agnhost-primary created\n" - STEP: Waiting for Agnhost primary to start. 06/12/23 22:24:00.529 - Jun 12 22:24:01.543: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 22:24:01.543: INFO: Found 0 / 1 - Jun 12 22:24:02.547: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 22:24:02.547: INFO: Found 1 / 1 - Jun 12 22:24:02.547: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 - STEP: patching all pods 06/12/23 22:24:02.547 - Jun 12 22:24:02.554: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 22:24:02.554: INFO: ForEach: Found 1 pods from the filter. Now looping through them. - Jun 12 22:24:02.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=kubectl-6519 patch pod agnhost-primary-p6hp6 -p {"metadata":{"annotations":{"x":"y"}}}' - Jun 12 22:24:02.833: INFO: stderr: "" - Jun 12 22:24:02.833: INFO: stdout: "pod/agnhost-primary-p6hp6 patched\n" - STEP: checking annotations 06/12/23 22:24:02.833 - Jun 12 22:24:02.848: INFO: Selector matched 1 pods for map[app:agnhost] - Jun 12 22:24:02.848: INFO: ForEach: Found 1 pods from the filter. Now looping through them. - [AfterEach] [sig-cli] Kubectl client - test/e2e/framework/node/init/init.go:32 - Jun 12 22:24:02.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-cli] Kubectl client - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-cli] Kubectl client - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-cli] Kubectl client - tear down framework | framework.go:193 - STEP: Destroying namespace "kubectl-6519" for this suite. 
06/12/23 22:24:02.867 - << End Captured GinkgoWriter Output ------------------------------- -SS ------------------------------- -[sig-auth] ServiceAccounts - should mount projected service account token [Conformance] - test/e2e/auth/service_accounts.go:275 -[BeforeEach] [sig-auth] ServiceAccounts - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:24:02.891 -Jun 12 22:24:02.891: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename svcaccounts 06/12/23 22:24:02.893 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:02.943 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:02.95 -[BeforeEach] [sig-auth] ServiceAccounts - test/e2e/framework/metrics/init/init.go:31 -[It] should mount projected service account token [Conformance] - test/e2e/auth/service_accounts.go:275 -STEP: Creating a pod to test service account token: 06/12/23 22:24:02.967 -Jun 12 22:24:02.988: INFO: Waiting up to 5m0s for pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945" in namespace "svcaccounts-8850" to be "Succeeded or Failed" -Jun 12 22:24:02.994: INFO: Pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945": Phase="Pending", Reason="", readiness=false. Elapsed: 5.896897ms -Jun 12 22:24:05.002: INFO: Pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014216019s -Jun 12 22:24:07.006: INFO: Pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017724189s -Jun 12 22:24:09.004: INFO: Pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015627546s -STEP: Saw pod success 06/12/23 22:24:09.004 -Jun 12 22:24:09.004: INFO: Pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945" satisfied condition "Succeeded or Failed" -Jun 12 22:24:09.010: INFO: Trying to get logs from node 10.138.75.70 pod test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945 container agnhost-container: -STEP: delete the pod 06/12/23 22:24:09.031 -Jun 12 22:24:09.049: INFO: Waiting for pod test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945 to disappear -Jun 12 22:24:09.055: INFO: Pod test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945 no longer exists -[AfterEach] [sig-auth] ServiceAccounts +Jul 27 03:00:48.997: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"126498"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 22:24:09.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +Jul 27 03:00:49.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-auth] ServiceAccounts +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "svcaccounts-8850" for this suite. 06/12/23 22:24:09.071 +STEP: Destroying namespace "daemonsets-2796" for this suite. 
07/27/23 03:00:49.064 ------------------------------ -• [SLOW TEST] [6.203 seconds] -[sig-auth] ServiceAccounts -test/e2e/auth/framework.go:23 - should mount projected service account token [Conformance] - test/e2e/auth/service_accounts.go:275 +• [SLOW TEST] [11.295 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-auth] ServiceAccounts + [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:24:02.891 - Jun 12 22:24:02.891: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename svcaccounts 06/12/23 22:24:02.893 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:02.943 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:02.95 - [BeforeEach] [sig-auth] ServiceAccounts + STEP: Creating a kubernetes client 07/27/23 03:00:37.791 + Jul 27 03:00:37.791: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename daemonsets 07/27/23 03:00:37.792 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:37.834 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:37.846 + [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] should mount projected service account token [Conformance] - test/e2e/auth/service_accounts.go:275 - STEP: Creating a pod to test service account token: 06/12/23 22:24:02.967 - Jun 12 22:24:02.988: INFO: Waiting up to 5m0s for pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945" in namespace "svcaccounts-8850" to be "Succeeded or Failed" - Jun 12 22:24:02.994: INFO: Pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945": Phase="Pending", Reason="", readiness=false. Elapsed: 5.896897ms - Jun 12 22:24:05.002: INFO: Pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014216019s - Jun 12 22:24:07.006: INFO: Pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017724189s - Jun 12 22:24:09.004: INFO: Pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015627546s - STEP: Saw pod success 06/12/23 22:24:09.004 - Jun 12 22:24:09.004: INFO: Pod "test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945" satisfied condition "Succeeded or Failed" - Jun 12 22:24:09.010: INFO: Trying to get logs from node 10.138.75.70 pod test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945 container agnhost-container: - STEP: delete the pod 06/12/23 22:24:09.031 - Jun 12 22:24:09.049: INFO: Waiting for pod test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945 to disappear - Jun 12 22:24:09.055: INFO: Pod test-pod-183f72c4-880d-4bdf-930d-9fc5d2474945 no longer exists - [AfterEach] [sig-auth] ServiceAccounts + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 + STEP: Creating simple DaemonSet "daemon-set" 07/27/23 03:00:37.977 + STEP: Check that daemon pods launch on every node of the cluster. 
07/27/23 03:00:37.995 + Jul 27 03:00:38.022: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 03:00:38.022: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 03:00:39.070: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 03:00:39.071: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 03:00:40.053: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Jul 27 03:00:40.053: INFO: Node 10.245.128.18 is running 0 daemon pod, expected 1 + Jul 27 03:00:41.051: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jul 27 03:00:41.051: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Stop a daemon pod, check that the daemon pod is revived. 07/27/23 03:00:41.067 + Jul 27 03:00:41.135: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 03:00:41.135: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 03:00:42.186: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 03:00:42.208: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 03:00:43.165: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 03:00:43.165: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 03:00:44.165: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 03:00:44.165: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 03:00:45.166: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 03:00:45.166: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 03:00:46.167: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jul 27 03:00:46.167: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 07/27/23 03:00:46.178 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2796, will wait for the garbage collector to delete the pods 07/27/23 03:00:46.178 + Jul 27 03:00:46.261: INFO: Deleting DaemonSet.extensions daemon-set took: 21.523385ms + Jul 27 03:00:46.362: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.086641ms + Jul 27 03:00:48.974: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 03:00:48.974: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jul 27 03:00:48.987: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"126498"},"items":null} + + Jul 27 03:00:48.997: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"126498"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 22:24:09.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + Jul 27 03:00:49.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-auth] ServiceAccounts + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 - [DeferCleanup 
(Each)] [sig-auth] ServiceAccounts + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "svcaccounts-8850" for this suite. 06/12/23 22:24:09.071 + STEP: Destroying namespace "daemonsets-2796" for this suite. 07/27/23 03:00:49.064 << End Captured GinkgoWriter Output ------------------------------ -S +SSSSSSS ------------------------------ -[sig-network] Services - should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] - test/e2e/network/service.go:2191 -[BeforeEach] [sig-network] Services +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 +[BeforeEach] [sig-node] Security Context set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:24:09.094 -Jun 12 22:24:09.094: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 22:24:09.097 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:09.144 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:09.169 -[BeforeEach] [sig-network] Services +STEP: Creating a kubernetes client 07/27/23 03:00:49.086 +Jul 27 03:00:49.086: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename security-context 07/27/23 03:00:49.087 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:49.139 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:49.153 +[BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] - test/e2e/network/service.go:2191 -STEP: creating service in namespace services-3931 06/12/23 22:24:09.184 -STEP: creating service affinity-clusterip in namespace services-3931 06/12/23 22:24:09.185 -STEP: creating replication controller affinity-clusterip in namespace services-3931 06/12/23 22:24:09.25 -I0612 22:24:09.289550 23 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-3931, replica count: 3 -I0612 22:24:12.350150 23 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -I0612 22:24:15.365017 23 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady -Jun 12 22:24:15.405: INFO: Creating new exec pod -Jun 12 22:24:15.424: INFO: Waiting up to 5m0s for pod "execpod-affinityv7cjh" in namespace "services-3931" to be "running" -Jun 12 22:24:15.433: INFO: Pod "execpod-affinityv7cjh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.877417ms -Jun 12 22:24:17.441: INFO: Pod "execpod-affinityv7cjh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016619821s -Jun 12 22:24:19.446: INFO: Pod "execpod-affinityv7cjh": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.021704618s -Jun 12 22:24:19.446: INFO: Pod "execpod-affinityv7cjh" satisfied condition "running" -Jun 12 22:24:20.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-3931 exec execpod-affinityv7cjh -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip 80' -Jun 12 22:24:20.886: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" -Jun 12 22:24:20.886: INFO: stdout: "" -Jun 12 22:24:20.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-3931 exec execpod-affinityv7cjh -- /bin/sh -x -c nc -v -z -w 2 172.21.10.199 80' -Jun 12 22:24:21.454: INFO: stderr: "+ nc -v -z -w 2 172.21.10.199 80\nConnection to 172.21.10.199 80 port [tcp/http] succeeded!\n" -Jun 12 22:24:21.454: INFO: stdout: "" -Jun 12 22:24:21.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-3931 exec execpod-affinityv7cjh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.10.199:80/ ; done' -Jun 12 22:24:22.107: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n" -Jun 12 22:24:22.107: INFO: stdout: "\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm" -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: 
INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm -Jun 12 22:24:22.108: INFO: Cleaning up the exec pod -STEP: deleting ReplicationController affinity-clusterip in namespace services-3931, will wait for the garbage collector to delete the pods 06/12/23 22:24:22.125 -Jun 12 22:24:22.208: INFO: Deleting ReplicationController affinity-clusterip took: 19.098076ms -Jun 12 22:24:22.308: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.2103ms -[AfterEach] [sig-network] Services +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 07/27/23 03:00:49.164 +Jul 27 03:00:49.201: INFO: Waiting up to 5m0s for pod "security-context-ba53be28-410f-427d-9e27-2ca9634c3e41" in namespace "security-context-7240" to be "Succeeded or Failed" +Jul 27 03:00:49.257: INFO: Pod "security-context-ba53be28-410f-427d-9e27-2ca9634c3e41": Phase="Pending", Reason="", readiness=false. Elapsed: 55.922879ms +Jul 27 03:00:51.270: INFO: Pod "security-context-ba53be28-410f-427d-9e27-2ca9634c3e41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069327447s +Jul 27 03:00:53.271: INFO: Pod "security-context-ba53be28-410f-427d-9e27-2ca9634c3e41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069471613s +STEP: Saw pod success 07/27/23 03:00:53.271 +Jul 27 03:00:53.271: INFO: Pod "security-context-ba53be28-410f-427d-9e27-2ca9634c3e41" satisfied condition "Succeeded or Failed" +Jul 27 03:00:53.281: INFO: Trying to get logs from node 10.245.128.19 pod security-context-ba53be28-410f-427d-9e27-2ca9634c3e41 container test-container: +STEP: delete the pod 07/27/23 03:00:53.327 +Jul 27 03:00:53.359: INFO: Waiting for pod security-context-ba53be28-410f-427d-9e27-2ca9634c3e41 to disappear +Jul 27 03:00:53.369: INFO: Pod security-context-ba53be28-410f-427d-9e27-2ca9634c3e41 no longer exists +[AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 -Jun 12 22:24:27.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services +Jul 27 03:00:53.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services +[DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 -STEP: Destroying namespace "services-3931" for this suite. 06/12/23 22:24:27.072 +STEP: Destroying namespace "security-context-7240" for this suite. 
07/27/23 03:00:53.387 ------------------------------ -• [SLOW TEST] [18.005 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] - test/e2e/network/service.go:2191 +• [4.319 seconds] +[sig-node] Security Context +test/e2e/node/framework.go:23 + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services + [BeforeEach] [sig-node] Security Context set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:24:09.094 - Jun 12 22:24:09.094: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 22:24:09.097 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:09.144 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:09.169 - [BeforeEach] [sig-network] Services + STEP: Creating a kubernetes client 07/27/23 03:00:49.086 + Jul 27 03:00:49.086: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename security-context 07/27/23 03:00:49.087 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:49.139 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:49.153 + [BeforeEach] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] - test/e2e/network/service.go:2191 - STEP: creating service in namespace services-3931 06/12/23 22:24:09.184 - STEP: creating service affinity-clusterip in namespace services-3931 06/12/23 22:24:09.185 - STEP: creating replication controller affinity-clusterip in namespace services-3931 06/12/23 22:24:09.25 - I0612 22:24:09.289550 23 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-3931, replica count: 3 - I0612 22:24:12.350150 23 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - I0612 22:24:15.365017 23 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady - Jun 12 22:24:15.405: INFO: Creating new exec pod - Jun 12 22:24:15.424: INFO: Waiting up to 5m0s for pod "execpod-affinityv7cjh" in namespace "services-3931" to be "running" - Jun 12 22:24:15.433: INFO: Pod "execpod-affinityv7cjh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.877417ms - Jun 12 22:24:17.441: INFO: Pod "execpod-affinityv7cjh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016619821s - Jun 12 22:24:19.446: INFO: Pod "execpod-affinityv7cjh": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.021704618s - Jun 12 22:24:19.446: INFO: Pod "execpod-affinityv7cjh" satisfied condition "running" - Jun 12 22:24:20.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-3931 exec execpod-affinityv7cjh -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip 80' - Jun 12 22:24:20.886: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" - Jun 12 22:24:20.886: INFO: stdout: "" - Jun 12 22:24:20.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-3931 exec execpod-affinityv7cjh -- /bin/sh -x -c nc -v -z -w 2 172.21.10.199 80' - Jun 12 22:24:21.454: INFO: stderr: "+ nc -v -z -w 2 172.21.10.199 80\nConnection to 172.21.10.199 80 port [tcp/http] succeeded!\n" - Jun 12 22:24:21.454: INFO: stdout: "" - Jun 12 22:24:21.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=services-3931 exec execpod-affinityv7cjh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.21.10.199:80/ ; done' - Jun 12 22:24:22.107: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.21.10.199:80/\n" - Jun 12 22:24:22.107: INFO: stdout: "\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm\naffinity-clusterip-x8gnm" - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - 
Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Received response from host: affinity-clusterip-x8gnm - Jun 12 22:24:22.108: INFO: Cleaning up the exec pod - STEP: deleting ReplicationController affinity-clusterip in namespace services-3931, will wait for the garbage collector to delete the pods 06/12/23 22:24:22.125 - Jun 12 22:24:22.208: INFO: Deleting ReplicationController affinity-clusterip took: 19.098076ms - Jun 12 22:24:22.308: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.2103ms - [AfterEach] [sig-network] Services + [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 + STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 07/27/23 03:00:49.164 + Jul 27 03:00:49.201: INFO: Waiting up to 5m0s for pod "security-context-ba53be28-410f-427d-9e27-2ca9634c3e41" in namespace "security-context-7240" to be "Succeeded or Failed" + Jul 27 03:00:49.257: INFO: Pod "security-context-ba53be28-410f-427d-9e27-2ca9634c3e41": Phase="Pending", Reason="", readiness=false. Elapsed: 55.922879ms + Jul 27 03:00:51.270: INFO: Pod "security-context-ba53be28-410f-427d-9e27-2ca9634c3e41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069327447s + Jul 27 03:00:53.271: INFO: Pod "security-context-ba53be28-410f-427d-9e27-2ca9634c3e41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069471613s + STEP: Saw pod success 07/27/23 03:00:53.271 + Jul 27 03:00:53.271: INFO: Pod "security-context-ba53be28-410f-427d-9e27-2ca9634c3e41" satisfied condition "Succeeded or Failed" + Jul 27 03:00:53.281: INFO: Trying to get logs from node 10.245.128.19 pod security-context-ba53be28-410f-427d-9e27-2ca9634c3e41 container test-container: + STEP: delete the pod 07/27/23 03:00:53.327 + Jul 27 03:00:53.359: INFO: Waiting for pod security-context-ba53be28-410f-427d-9e27-2ca9634c3e41 to disappear + Jul 27 03:00:53.369: INFO: Pod security-context-ba53be28-410f-427d-9e27-2ca9634c3e41 no longer exists + [AfterEach] [sig-node] Security Context test/e2e/framework/node/init/init.go:32 - Jun 12 22:24:27.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services + Jul 27 03:00:53.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-node] Security Context dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services + [DeferCleanup (Each)] [sig-node] Security Context tear down framework | framework.go:193 - STEP: Destroying namespace "services-3931" for this suite. 06/12/23 22:24:27.072 + STEP: Destroying namespace "security-context-7240" for this suite. 
07/27/23 03:00:53.387 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSS ------------------------------ -[sig-storage] Projected configMap - should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:109 -[BeforeEach] [sig-storage] Projected configMap +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:24:27.112 -Jun 12 22:24:27.112: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:24:27.117 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:27.169 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:27.178 -[BeforeEach] [sig-storage] Projected configMap +STEP: Creating a kubernetes client 07/27/23 03:00:53.405 +Jul 27 03:00:53.405: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 03:00:53.406 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:53.469 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:53.478 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:109 -STEP: Creating configMap with name projected-configmap-test-volume-map-ce69e073-dc16-4dd4-a850-bca0a1a60939 06/12/23 22:24:27.188 -STEP: Creating a pod to test consume configMaps 06/12/23 22:24:27.201 -Jun 12 22:24:27.229: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893" in namespace "projected-5395" to be "Succeeded or Failed" -Jun 12 22:24:27.240: INFO: Pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893": Phase="Pending", Reason="", readiness=false. Elapsed: 10.911895ms -Jun 12 22:24:29.252: INFO: Pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023671944s -Jun 12 22:24:31.262: INFO: Pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033451296s -Jun 12 22:24:33.275: INFO: Pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.045768475s -STEP: Saw pod success 06/12/23 22:24:33.275 -Jun 12 22:24:33.275: INFO: Pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893" satisfied condition "Succeeded or Failed" -Jun 12 22:24:33.310: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893 container agnhost-container: -STEP: delete the pod 06/12/23 22:24:33.335 -Jun 12 22:24:33.390: INFO: Waiting for pod pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893 to disappear -Jun 12 22:24:33.399: INFO: Pod pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893 no longer exists -[AfterEach] [sig-storage] Projected configMap +[It] removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 +STEP: set up a multi version CRD 07/27/23 03:00:53.49 +Jul 27 03:00:53.490: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: mark a version not serverd 07/27/23 03:01:05.465 +STEP: check the unserved version gets removed 07/27/23 03:01:05.506 +STEP: check the other version is not changed 07/27/23 03:01:13.641 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:24:33.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected configMap +Jul 27 03:01:24.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "projected-5395" for this suite. 06/12/23 22:24:33.413 +STEP: Destroying namespace "crd-publish-openapi-7197" for this suite. 
07/27/23 03:01:24.463 ------------------------------ -• [SLOW TEST] [6.370 seconds] -[sig-storage] Projected configMap -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:109 +• [SLOW TEST] [31.079 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected configMap + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:24:27.112 - Jun 12 22:24:27.112: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:24:27.117 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:27.169 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:27.178 - [BeforeEach] [sig-storage] Projected configMap + STEP: Creating a kubernetes client 07/27/23 03:00:53.405 + Jul 27 03:00:53.405: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 03:00:53.406 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:00:53.469 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:00:53.478 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:109 - STEP: Creating configMap with name projected-configmap-test-volume-map-ce69e073-dc16-4dd4-a850-bca0a1a60939 06/12/23 22:24:27.188 - STEP: Creating a pod to test consume configMaps 06/12/23 22:24:27.201 - Jun 12 22:24:27.229: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893" in namespace "projected-5395" to be "Succeeded or Failed" - Jun 12 22:24:27.240: INFO: Pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893": Phase="Pending", Reason="", readiness=false. Elapsed: 10.911895ms - Jun 12 22:24:29.252: INFO: Pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023671944s - Jun 12 22:24:31.262: INFO: Pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033451296s - Jun 12 22:24:33.275: INFO: Pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.045768475s - STEP: Saw pod success 06/12/23 22:24:33.275 - Jun 12 22:24:33.275: INFO: Pod "pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893" satisfied condition "Succeeded or Failed" - Jun 12 22:24:33.310: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893 container agnhost-container: - STEP: delete the pod 06/12/23 22:24:33.335 - Jun 12 22:24:33.390: INFO: Waiting for pod pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893 to disappear - Jun 12 22:24:33.399: INFO: Pod pod-projected-configmaps-bbc911ac-70fd-468d-9819-c657e81b5893 no longer exists - [AfterEach] [sig-storage] Projected configMap + [It] removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 + STEP: set up a multi version CRD 07/27/23 03:00:53.49 + Jul 27 03:00:53.490: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: mark a version not serverd 07/27/23 03:01:05.465 + STEP: check the unserved version gets removed 07/27/23 03:01:05.506 + STEP: check the other version is not changed 07/27/23 03:01:13.641 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:24:33.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected configMap + Jul 27 03:01:24.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "projected-5395" for this suite. 06/12/23 22:24:33.413 + STEP: Destroying namespace "crd-publish-openapi-7197" for this suite. 
07/27/23 03:01:24.463 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Secrets - should be consumable from pods in env vars [NodeConformance] [Conformance] - test/e2e/common/node/secrets.go:46 -[BeforeEach] [sig-node] Secrets +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:44 +[BeforeEach] [sig-storage] Projected combined set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:24:33.533 -Jun 12 22:24:33.533: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 22:24:33.627 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:33.691 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:33.768 -[BeforeEach] [sig-node] Secrets +STEP: Creating a kubernetes client 07/27/23 03:01:24.485 +Jul 27 03:01:24.485: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 03:01:24.486 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:01:24.534 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:01:24.545 +[BeforeEach] [sig-storage] Projected combined test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in env vars [NodeConformance] [Conformance] - test/e2e/common/node/secrets.go:46 -STEP: Creating secret with name secret-test-7418b292-7bb1-4a3f-8d1e-a4340e94a18c 06/12/23 22:24:33.822 -STEP: Creating a pod to test consume secrets 06/12/23 22:24:33.87 -Jun 12 22:24:33.890: INFO: Waiting up to 5m0s for pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07" in namespace "secrets-6502" to be "Succeeded or Failed" -Jun 12 22:24:33.900: INFO: Pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06331ms -Jun 12 22:24:35.912: INFO: Pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02263239s -Jun 12 22:24:37.913: INFO: Pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02289266s -Jun 12 22:24:39.908: INFO: Pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018456735s -STEP: Saw pod success 06/12/23 22:24:39.908 -Jun 12 22:24:39.909: INFO: Pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07" satisfied condition "Succeeded or Failed" -Jun 12 22:24:39.915: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07 container secret-env-test: -STEP: delete the pod 06/12/23 22:24:39.933 -Jun 12 22:24:39.962: INFO: Waiting for pod pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07 to disappear -Jun 12 22:24:39.969: INFO: Pod pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07 no longer exists -[AfterEach] [sig-node] Secrets +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:44 +STEP: Creating configMap with name configmap-projected-all-test-volume-91a3678c-5723-477a-8890-da384cbab75d 07/27/23 03:01:24.553 +STEP: Creating secret with name secret-projected-all-test-volume-6c95d514-e93e-47e6-aa9c-8ae7c3cdb0af 07/27/23 03:01:24.571 +STEP: Creating a pod to test Check all projections for projected volume plugin 07/27/23 03:01:24.589 +Jul 27 03:01:24.618: INFO: Waiting up to 5m0s for pod "projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5" in namespace "projected-914" to be "Succeeded or Failed" +Jul 27 03:01:24.626: INFO: Pod "projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.367002ms +Jul 27 03:01:26.635: INFO: Pod "projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017211828s +Jul 27 03:01:28.637: INFO: Pod "projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019301853s +STEP: Saw pod success 07/27/23 03:01:28.637 +Jul 27 03:01:28.638: INFO: Pod "projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5" satisfied condition "Succeeded or Failed" +Jul 27 03:01:28.645: INFO: Trying to get logs from node 10.245.128.19 pod projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5 container projected-all-volume-test: +STEP: delete the pod 07/27/23 03:01:28.683 +Jul 27 03:01:28.700: INFO: Waiting for pod projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5 to disappear +Jul 27 03:01:28.708: INFO: Pod projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5 no longer exists +[AfterEach] [sig-storage] Projected combined test/e2e/framework/node/init/init.go:32 -Jun 12 22:24:39.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Secrets +Jul 27 03:01:28.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected combined test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Secrets +[DeferCleanup (Each)] [sig-storage] Projected combined dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Secrets +[DeferCleanup (Each)] [sig-storage] Projected combined tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-6502" for this suite. 06/12/23 22:24:39.982 +STEP: Destroying namespace "projected-914" for this suite. 
07/27/23 03:01:28.72 ------------------------------ -• [SLOW TEST] [6.479 seconds] -[sig-node] Secrets -test/e2e/common/node/framework.go:23 - should be consumable from pods in env vars [NodeConformance] [Conformance] - test/e2e/common/node/secrets.go:46 +• [4.258 seconds] +[sig-storage] Projected combined +test/e2e/common/storage/framework.go:23 + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:44 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Secrets + [BeforeEach] [sig-storage] Projected combined set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:24:33.533 - Jun 12 22:24:33.533: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 22:24:33.627 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:33.691 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:33.768 - [BeforeEach] [sig-node] Secrets + STEP: Creating a kubernetes client 07/27/23 03:01:24.485 + Jul 27 03:01:24.485: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 03:01:24.486 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:01:24.534 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:01:24.545 + [BeforeEach] [sig-storage] Projected combined test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in env vars [NodeConformance] [Conformance] - test/e2e/common/node/secrets.go:46 - STEP: Creating secret with name secret-test-7418b292-7bb1-4a3f-8d1e-a4340e94a18c 06/12/23 22:24:33.822 - STEP: Creating a pod to test consume secrets 06/12/23 22:24:33.87 - Jun 12 22:24:33.890: INFO: Waiting up to 5m0s for pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07" in namespace "secrets-6502" to be "Succeeded or Failed" - Jun 12 22:24:33.900: INFO: Pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06331ms - Jun 12 22:24:35.912: INFO: Pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02263239s - Jun 12 22:24:37.913: INFO: Pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02289266s - Jun 12 22:24:39.908: INFO: Pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018456735s - STEP: Saw pod success 06/12/23 22:24:39.908 - Jun 12 22:24:39.909: INFO: Pod "pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07" satisfied condition "Succeeded or Failed" - Jun 12 22:24:39.915: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07 container secret-env-test: - STEP: delete the pod 06/12/23 22:24:39.933 - Jun 12 22:24:39.962: INFO: Waiting for pod pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07 to disappear - Jun 12 22:24:39.969: INFO: Pod pod-secrets-256d5e4c-49b7-458b-b618-99a9c65c2d07 no longer exists - [AfterEach] [sig-node] Secrets + [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:44 + STEP: Creating configMap with name configmap-projected-all-test-volume-91a3678c-5723-477a-8890-da384cbab75d 07/27/23 03:01:24.553 + STEP: Creating secret with name secret-projected-all-test-volume-6c95d514-e93e-47e6-aa9c-8ae7c3cdb0af 07/27/23 03:01:24.571 + STEP: Creating a pod to test Check all projections for projected volume plugin 07/27/23 03:01:24.589 + Jul 27 03:01:24.618: INFO: Waiting up to 5m0s for pod "projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5" in namespace "projected-914" to be "Succeeded or Failed" + Jul 27 03:01:24.626: INFO: Pod "projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.367002ms + Jul 27 03:01:26.635: INFO: Pod "projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017211828s + Jul 27 03:01:28.637: INFO: Pod "projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019301853s + STEP: Saw pod success 07/27/23 03:01:28.637 + Jul 27 03:01:28.638: INFO: Pod "projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5" satisfied condition "Succeeded or Failed" + Jul 27 03:01:28.645: INFO: Trying to get logs from node 10.245.128.19 pod projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5 container projected-all-volume-test: + STEP: delete the pod 07/27/23 03:01:28.683 + Jul 27 03:01:28.700: INFO: Waiting for pod projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5 to disappear + Jul 27 03:01:28.708: INFO: Pod projected-volume-a98fe5ec-1468-42db-9af3-7a28684a4af5 no longer exists + [AfterEach] [sig-storage] Projected combined test/e2e/framework/node/init/init.go:32 - Jun 12 22:24:39.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Secrets + Jul 27 03:01:28.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected combined test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Secrets + [DeferCleanup (Each)] [sig-storage] Projected combined dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Secrets + [DeferCleanup (Each)] [sig-storage] Projected combined tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-6502" for this suite. 06/12/23 22:24:39.982 + STEP: Destroying namespace "projected-914" for this suite. 
07/27/23 03:01:28.72 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSS +SSSSSSSSS ------------------------------ -[sig-storage] Projected configMap - should be consumable from pods in volume as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:74 -[BeforeEach] [sig-storage] Projected configMap +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 +[BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:24:40.017 -Jun 12 22:24:40.017: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:24:40.019 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:40.076 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:40.091 -[BeforeEach] [sig-storage] Projected configMap +STEP: Creating a kubernetes client 07/27/23 03:01:28.744 +Jul 27 03:01:28.744: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename subpath 07/27/23 03:01:28.745 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:01:28.786 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:01:28.795 +[BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:74 -STEP: Creating configMap with name projected-configmap-test-volume-672bc3fc-89eb-4ae0-8674-5909fd011cf5 06/12/23 22:24:40.103 -STEP: Creating a pod to test consume configMaps 06/12/23 22:24:40.126 -Jun 12 22:24:40.149: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262" in namespace "projected-306" to be "Succeeded or Failed" -Jun 12 22:24:40.159: INFO: Pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262": Phase="Pending", Reason="", readiness=false. Elapsed: 10.281331ms -Jun 12 22:24:42.167: INFO: Pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017920421s -Jun 12 22:24:44.176: INFO: Pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027182601s -Jun 12 22:24:46.168: INFO: Pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019344979s -STEP: Saw pod success 06/12/23 22:24:46.169 -Jun 12 22:24:46.169: INFO: Pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262" satisfied condition "Succeeded or Failed" -Jun 12 22:24:46.175: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262 container agnhost-container: -STEP: delete the pod 06/12/23 22:24:46.192 -Jun 12 22:24:46.214: INFO: Waiting for pod pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262 to disappear -Jun 12 22:24:46.222: INFO: Pod pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262 no longer exists -[AfterEach] [sig-storage] Projected configMap +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 07/27/23 03:01:28.805 +[It] should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 +STEP: Creating pod pod-subpath-test-secret-7grh 07/27/23 03:01:28.877 +STEP: Creating a pod to test atomic-volume-subpath 07/27/23 03:01:28.877 +Jul 27 03:01:28.916: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7grh" in namespace "subpath-9911" to be "Succeeded or Failed" +Jul 27 03:01:28.925: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Pending", Reason="", readiness=false. Elapsed: 9.412602ms +Jul 27 03:01:30.934: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 2.01867163s +Jul 27 03:01:32.937: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 4.021746209s +Jul 27 03:01:34.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 6.019134527s +Jul 27 03:01:36.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 8.019241981s +Jul 27 03:01:38.934: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 10.01872182s +Jul 27 03:01:40.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 12.018829971s +Jul 27 03:01:42.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 14.019618083s +Jul 27 03:01:44.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 16.019692663s +Jul 27 03:01:46.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 18.019217323s +Jul 27 03:01:48.981: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 20.06520904s +Jul 27 03:01:50.937: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=false. Elapsed: 22.020859349s +Jul 27 03:01:52.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.019172852s +STEP: Saw pod success 07/27/23 03:01:52.935 +Jul 27 03:01:52.935: INFO: Pod "pod-subpath-test-secret-7grh" satisfied condition "Succeeded or Failed" +Jul 27 03:01:52.945: INFO: Trying to get logs from node 10.245.128.19 pod pod-subpath-test-secret-7grh container test-container-subpath-secret-7grh: +STEP: delete the pod 07/27/23 03:01:52.968 +Jul 27 03:01:52.989: INFO: Waiting for pod pod-subpath-test-secret-7grh to disappear +Jul 27 03:01:52.997: INFO: Pod pod-subpath-test-secret-7grh no longer exists +STEP: Deleting pod pod-subpath-test-secret-7grh 07/27/23 03:01:52.997 +Jul 27 03:01:52.997: INFO: Deleting pod "pod-subpath-test-secret-7grh" in namespace "subpath-9911" +[AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 -Jun 12 22:24:46.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected configMap +Jul 27 03:01:53.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected configMap +[DeferCleanup (Each)] [sig-storage] Subpath tear down framework | framework.go:193 -STEP: Destroying namespace "projected-306" for this suite. 06/12/23 22:24:46.235 +STEP: Destroying namespace "subpath-9911" for this suite. 07/27/23 03:01:53.021 ------------------------------ -• [SLOW TEST] [6.238 seconds] -[sig-storage] Projected configMap -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:74 +• [SLOW TEST] [24.301 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected configMap + [BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:24:40.017 - Jun 12 22:24:40.017: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:24:40.019 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:40.076 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:40.091 - [BeforeEach] [sig-storage] Projected configMap + STEP: Creating a kubernetes client 07/27/23 03:01:28.744 + Jul 27 03:01:28.744: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename subpath 07/27/23 03:01:28.745 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:01:28.786 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:01:28.795 + [BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] - test/e2e/common/storage/projected_configmap.go:74 - STEP: Creating configMap with name projected-configmap-test-volume-672bc3fc-89eb-4ae0-8674-5909fd011cf5 06/12/23 22:24:40.103 - STEP: Creating a pod to test consume configMaps 06/12/23 22:24:40.126 - Jun 12 22:24:40.149: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262" in namespace "projected-306" to be "Succeeded or Failed" - Jun 12 22:24:40.159: INFO: Pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262": Phase="Pending", Reason="", readiness=false. Elapsed: 10.281331ms - Jun 12 22:24:42.167: INFO: Pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017920421s - Jun 12 22:24:44.176: INFO: Pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027182601s - Jun 12 22:24:46.168: INFO: Pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019344979s - STEP: Saw pod success 06/12/23 22:24:46.169 - Jun 12 22:24:46.169: INFO: Pod "pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262" satisfied condition "Succeeded or Failed" - Jun 12 22:24:46.175: INFO: Trying to get logs from node 10.138.75.70 pod pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262 container agnhost-container: - STEP: delete the pod 06/12/23 22:24:46.192 - Jun 12 22:24:46.214: INFO: Waiting for pod pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262 to disappear - Jun 12 22:24:46.222: INFO: Pod pod-projected-configmaps-e465887d-f539-47b3-9f1f-657790275262 no longer exists - [AfterEach] [sig-storage] Projected configMap + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 07/27/23 03:01:28.805 + [It] should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 + STEP: Creating pod pod-subpath-test-secret-7grh 07/27/23 03:01:28.877 + STEP: Creating a pod to test atomic-volume-subpath 07/27/23 03:01:28.877 + Jul 27 03:01:28.916: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7grh" in namespace "subpath-9911" to be "Succeeded or Failed" + Jul 27 03:01:28.925: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Pending", Reason="", readiness=false. Elapsed: 9.412602ms + Jul 27 03:01:30.934: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 2.01867163s + Jul 27 03:01:32.937: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 4.021746209s + Jul 27 03:01:34.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 6.019134527s + Jul 27 03:01:36.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 8.019241981s + Jul 27 03:01:38.934: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 10.01872182s + Jul 27 03:01:40.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 12.018829971s + Jul 27 03:01:42.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 14.019618083s + Jul 27 03:01:44.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 16.019692663s + Jul 27 03:01:46.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 18.019217323s + Jul 27 03:01:48.981: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=true. Elapsed: 20.06520904s + Jul 27 03:01:50.937: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.020859349s + Jul 27 03:01:52.935: INFO: Pod "pod-subpath-test-secret-7grh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.019172852s + STEP: Saw pod success 07/27/23 03:01:52.935 + Jul 27 03:01:52.935: INFO: Pod "pod-subpath-test-secret-7grh" satisfied condition "Succeeded or Failed" + Jul 27 03:01:52.945: INFO: Trying to get logs from node 10.245.128.19 pod pod-subpath-test-secret-7grh container test-container-subpath-secret-7grh: + STEP: delete the pod 07/27/23 03:01:52.968 + Jul 27 03:01:52.989: INFO: Waiting for pod pod-subpath-test-secret-7grh to disappear + Jul 27 03:01:52.997: INFO: Pod pod-subpath-test-secret-7grh no longer exists + STEP: Deleting pod pod-subpath-test-secret-7grh 07/27/23 03:01:52.997 + Jul 27 03:01:52.997: INFO: Deleting pod "pod-subpath-test-secret-7grh" in namespace "subpath-9911" + [AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 - Jun 12 22:24:46.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected configMap + Jul 27 03:01:53.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected configMap + [DeferCleanup (Each)] [sig-storage] Subpath tear down framework | framework.go:193 - STEP: Destroying namespace "projected-306" for this suite. 06/12/23 22:24:46.235 + STEP: Destroying namespace "subpath-9911" for this suite. 07/27/23 03:01:53.021 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSS ------------------------------ -[sig-node] Variable Expansion - should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] - test/e2e/common/node/expansion.go:225 -[BeforeEach] [sig-node] Variable Expansion +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 +[BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:24:46.257 -Jun 12 22:24:46.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename var-expansion 06/12/23 22:24:46.258 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:46.31 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:46.337 -[BeforeEach] [sig-node] Variable Expansion +STEP: Creating a kubernetes client 07/27/23 03:01:53.045 +Jul 27 03:01:53.045: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 03:01:53.046 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:01:53.087 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:01:53.095 +[BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 -[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] - test/e2e/common/node/expansion.go:225 -STEP: creating the pod with failed condition 06/12/23 22:24:46.364 -Jun 12 22:24:46.384: INFO: Waiting up to 2m0s for pod 
"var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" in namespace "var-expansion-9920" to be "running" -Jun 12 22:24:46.391: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24955ms -Jun 12 22:24:48.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014661339s -Jun 12 22:24:50.402: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01757506s -Jun 12 22:24:52.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015871186s -Jun 12 22:24:54.428: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043492068s -Jun 12 22:24:56.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014765449s -Jun 12 22:24:58.401: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.016712864s -Jun 12 22:25:00.406: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.021726204s -Jun 12 22:25:02.443: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.058164097s -Jun 12 22:25:04.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.013928592s -Jun 12 22:25:06.417: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.032281956s -Jun 12 22:25:08.421: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.036485879s -Jun 12 22:25:10.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.013861948s -Jun 12 22:25:12.402: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.017468873s -Jun 12 22:25:14.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.014273642s -Jun 12 22:25:16.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.014977582s -Jun 12 22:25:18.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.014019484s -Jun 12 22:25:20.401: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.016230051s -Jun 12 22:25:22.406: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.021647905s -Jun 12 22:25:24.401: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 38.016438076s -Jun 12 22:25:26.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 40.013612378s -Jun 12 22:25:28.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.015335551s -Jun 12 22:25:30.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 44.01460648s -Jun 12 22:25:32.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 46.015956357s -Jun 12 22:25:34.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 48.01418358s -Jun 12 22:25:36.401: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 50.017048109s -Jun 12 22:25:38.427: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 52.042837491s -Jun 12 22:25:40.420: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 54.035738404s -Jun 12 22:25:42.401: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 56.016451314s -Jun 12 22:25:44.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 58.014167987s -Jun 12 22:25:46.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.014307961s -Jun 12 22:25:48.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.014365533s -Jun 12 22:25:50.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.014407554s -Jun 12 22:25:52.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.014694532s -Jun 12 22:25:54.422: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.037085049s -Jun 12 22:25:56.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.014989558s -Jun 12 22:25:58.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.014692923s -Jun 12 22:26:00.426: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.041375725s -Jun 12 22:26:02.450: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.065386275s -Jun 12 22:26:04.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.015892174s -Jun 12 22:26:06.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.014894782s -Jun 12 22:26:08.409: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.024726787s -Jun 12 22:26:10.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.014279758s -Jun 12 22:26:12.408: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m26.023340687s -Jun 12 22:26:14.437: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.052112534s -Jun 12 22:26:16.418: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.033763639s -Jun 12 22:26:18.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.013609753s -Jun 12 22:26:20.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.015406209s -Jun 12 22:26:22.411: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.026992612s -Jun 12 22:26:24.410: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.025153758s -Jun 12 22:26:26.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.015617418s -Jun 12 22:26:28.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.014896809s -Jun 12 22:26:30.429: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.045013777s -Jun 12 22:26:32.406: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.021706723s -Jun 12 22:26:34.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.013465217s -Jun 12 22:26:36.409: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.024239606s -Jun 12 22:26:38.423: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.038572433s -Jun 12 22:26:40.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.015338716s -Jun 12 22:26:42.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.014401888s -Jun 12 22:26:44.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.01310549s -Jun 12 22:26:46.424: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.039736588s -Jun 12 22:26:46.459: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.07450522s -STEP: updating the pod 06/12/23 22:26:46.459 -Jun 12 22:26:47.032: INFO: Successfully updated pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" -STEP: waiting for pod running 06/12/23 22:26:47.085 -Jun 12 22:26:47.086: INFO: Waiting up to 2m0s for pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" in namespace "var-expansion-9920" to be "running" -Jun 12 22:26:47.097: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.115369ms -Jun 12 22:26:49.205: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.118904998s -Jun 12 22:26:49.205: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" satisfied condition "running" -STEP: deleting the pod gracefully 06/12/23 22:26:49.205 -Jun 12 22:26:49.205: INFO: Deleting pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" in namespace "var-expansion-9920" -Jun 12 22:26:49.219: INFO: Wait up to 5m0s for pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" to be fully deleted -[AfterEach] [sig-node] Variable Expansion +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 +STEP: Creating secret with name secret-test-6cab76ea-6a0c-4575-96f7-6d88a2ea4d00 07/27/23 03:01:53.105 +STEP: Creating a pod to test consume secrets 07/27/23 03:01:53.117 +Jul 27 03:01:53.149: INFO: Waiting up to 5m0s for pod "pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30" in namespace "secrets-7753" to be "Succeeded or Failed" +Jul 27 03:01:53.159: INFO: Pod "pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30": Phase="Pending", Reason="", readiness=false. Elapsed: 10.525665ms +Jul 27 03:01:55.169: INFO: Pod "pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020649135s +Jul 27 03:01:57.168: INFO: Pod "pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019362417s +STEP: Saw pod success 07/27/23 03:01:57.168 +Jul 27 03:01:57.168: INFO: Pod "pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30" satisfied condition "Succeeded or Failed" +Jul 27 03:01:57.208: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30 container secret-volume-test: +STEP: delete the pod 07/27/23 03:01:57.227 +Jul 27 03:01:57.301: INFO: Waiting for pod pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30 to disappear +Jul 27 03:01:57.308: INFO: Pod pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30 no longer exists +[AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 22:27:23.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Variable Expansion +Jul 27 03:01:57.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Variable Expansion - tear down framework | framework.go:193 -STEP: Destroying namespace "var-expansion-9920" for this suite. 06/12/23 22:27:23.281 ------------------------------- -• [SLOW TEST] [157.048 seconds] -[sig-node] Variable Expansion -test/e2e/common/node/framework.go:23 - should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] - test/e2e/common/node/expansion.go:225 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-7753" for this suite. 
07/27/23 03:01:57.319 +------------------------------ +• [4.296 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Variable Expansion + [BeforeEach] [sig-storage] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:24:46.257 - Jun 12 22:24:46.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename var-expansion 06/12/23 22:24:46.258 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:24:46.31 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:24:46.337 - [BeforeEach] [sig-node] Variable Expansion + STEP: Creating a kubernetes client 07/27/23 03:01:53.045 + Jul 27 03:01:53.045: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 03:01:53.046 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:01:53.087 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:01:53.095 + [BeforeEach] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:31 - [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] - test/e2e/common/node/expansion.go:225 - STEP: creating the pod with failed condition 06/12/23 22:24:46.364 - Jun 12 22:24:46.384: INFO: Waiting up to 2m0s for pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" in namespace "var-expansion-9920" to be "running" - Jun 12 22:24:46.391: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24955ms - Jun 12 22:24:48.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014661339s - Jun 12 22:24:50.402: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01757506s - Jun 12 22:24:52.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015871186s - Jun 12 22:24:54.428: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043492068s - Jun 12 22:24:56.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014765449s - Jun 12 22:24:58.401: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.016712864s - Jun 12 22:25:00.406: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.021726204s - Jun 12 22:25:02.443: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.058164097s - Jun 12 22:25:04.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.013928592s - Jun 12 22:25:06.417: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.032281956s - Jun 12 22:25:08.421: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.036485879s - Jun 12 22:25:10.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.013861948s - Jun 12 22:25:12.402: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.017468873s - Jun 12 22:25:14.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.014273642s - Jun 12 22:25:16.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.014977582s - Jun 12 22:25:18.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.014019484s - Jun 12 22:25:20.401: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.016230051s - Jun 12 22:25:22.406: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.021647905s - Jun 12 22:25:24.401: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 38.016438076s - Jun 12 22:25:26.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 40.013612378s - Jun 12 22:25:28.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.015335551s - Jun 12 22:25:30.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 44.01460648s - Jun 12 22:25:32.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 46.015956357s - Jun 12 22:25:34.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 48.01418358s - Jun 12 22:25:36.401: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 50.017048109s - Jun 12 22:25:38.427: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 52.042837491s - Jun 12 22:25:40.420: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 54.035738404s - Jun 12 22:25:42.401: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 56.016451314s - Jun 12 22:25:44.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 58.014167987s - Jun 12 22:25:46.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.014307961s - Jun 12 22:25:48.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.014365533s - Jun 12 22:25:50.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m4.014407554s - Jun 12 22:25:52.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.014694532s - Jun 12 22:25:54.422: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.037085049s - Jun 12 22:25:56.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.014989558s - Jun 12 22:25:58.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.014692923s - Jun 12 22:26:00.426: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.041375725s - Jun 12 22:26:02.450: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.065386275s - Jun 12 22:26:04.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.015892174s - Jun 12 22:26:06.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.014894782s - Jun 12 22:26:08.409: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.024726787s - Jun 12 22:26:10.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.014279758s - Jun 12 22:26:12.408: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.023340687s - Jun 12 22:26:14.437: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.052112534s - Jun 12 22:26:16.418: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.033763639s - Jun 12 22:26:18.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.013609753s - Jun 12 22:26:20.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.015406209s - Jun 12 22:26:22.411: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.026992612s - Jun 12 22:26:24.410: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.025153758s - Jun 12 22:26:26.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.015617418s - Jun 12 22:26:28.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.014896809s - Jun 12 22:26:30.429: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.045013777s - Jun 12 22:26:32.406: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.021706723s - Jun 12 22:26:34.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m48.013465217s - Jun 12 22:26:36.409: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.024239606s - Jun 12 22:26:38.423: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.038572433s - Jun 12 22:26:40.400: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.015338716s - Jun 12 22:26:42.399: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.014401888s - Jun 12 22:26:44.398: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.01310549s - Jun 12 22:26:46.424: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.039736588s - Jun 12 22:26:46.459: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.07450522s - STEP: updating the pod 06/12/23 22:26:46.459 - Jun 12 22:26:47.032: INFO: Successfully updated pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" - STEP: waiting for pod running 06/12/23 22:26:47.085 - Jun 12 22:26:47.086: INFO: Waiting up to 2m0s for pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" in namespace "var-expansion-9920" to be "running" - Jun 12 22:26:47.097: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.115369ms - Jun 12 22:26:49.205: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc": Phase="Running", Reason="", readiness=true. Elapsed: 2.118904998s - Jun 12 22:26:49.205: INFO: Pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" satisfied condition "running" - STEP: deleting the pod gracefully 06/12/23 22:26:49.205 - Jun 12 22:26:49.205: INFO: Deleting pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" in namespace "var-expansion-9920" - Jun 12 22:26:49.219: INFO: Wait up to 5m0s for pod "var-expansion-d5e84490-e7b7-4a45-a449-e1a86ad587bc" to be fully deleted - [AfterEach] [sig-node] Variable Expansion + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 + STEP: Creating secret with name secret-test-6cab76ea-6a0c-4575-96f7-6d88a2ea4d00 07/27/23 03:01:53.105 + STEP: Creating a pod to test consume secrets 07/27/23 03:01:53.117 + Jul 27 03:01:53.149: INFO: Waiting up to 5m0s for pod "pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30" in namespace "secrets-7753" to be "Succeeded or Failed" + Jul 27 03:01:53.159: INFO: Pod "pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30": Phase="Pending", Reason="", readiness=false. Elapsed: 10.525665ms + Jul 27 03:01:55.169: INFO: Pod "pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020649135s + Jul 27 03:01:57.168: INFO: Pod "pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019362417s + STEP: Saw pod success 07/27/23 03:01:57.168 + Jul 27 03:01:57.168: INFO: Pod "pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30" satisfied condition "Succeeded or Failed" + Jul 27 03:01:57.208: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30 container secret-volume-test: + STEP: delete the pod 07/27/23 03:01:57.227 + Jul 27 03:01:57.301: INFO: Waiting for pod pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30 to disappear + Jul 27 03:01:57.308: INFO: Pod pod-secrets-44b57c3b-05bf-4090-bd3c-6b4f67c2df30 no longer exists + [AfterEach] [sig-storage] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 22:27:23.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Variable Expansion + Jul 27 03:01:57.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-storage] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-storage] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "var-expansion-9920" for this suite. 06/12/23 22:27:23.281 + STEP: Destroying namespace "secrets-7753" for this suite. 07/27/23 03:01:57.319 << End Captured GinkgoWriter Output ------------------------------ -SSSS +SSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Projected downwardAPI - should provide container's cpu request [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:221 -[BeforeEach] [sig-storage] Projected downwardAPI +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 +[BeforeEach] [sig-node] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:27:23.308 -Jun 12 22:27:23.309: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:27:23.314 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:27:23.367 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:27:23.38 -[BeforeEach] [sig-storage] Projected downwardAPI +STEP: Creating a kubernetes client 07/27/23 03:01:57.342 +Jul 27 03:01:57.342: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 03:01:57.343 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:01:57.389 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:01:57.398 +[BeforeEach] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 -[It] should provide container's cpu request [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:221 -STEP: Creating a pod to test downward API volume plugin 06/12/23 22:27:23.4 -Jun 12 22:27:23.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e" in namespace "projected-8849" to be "Succeeded or Failed" -Jun 12 22:27:23.431: INFO: Pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.028593ms -Jun 12 22:27:25.454: INFO: Pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030688571s -Jun 12 22:27:27.443: INFO: Pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020033166s -Jun 12 22:27:29.447: INFO: Pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023708966s -STEP: Saw pod success 06/12/23 22:27:29.447 -Jun 12 22:27:29.447: INFO: Pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e" satisfied condition "Succeeded or Failed" -Jun 12 22:27:29.455: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e container client-container: -STEP: delete the pod 06/12/23 22:27:29.506 -Jun 12 22:27:29.531: INFO: Waiting for pod downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e to disappear -Jun 12 22:27:29.557: INFO: Pod downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e no longer exists -[AfterEach] [sig-storage] Projected downwardAPI +[It] should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 +STEP: creating a ConfigMap 07/27/23 03:01:57.407 +STEP: fetching the ConfigMap 07/27/23 03:01:57.444 +STEP: patching the ConfigMap 07/27/23 03:01:57.475 +STEP: listing all ConfigMaps in all namespaces with a label selector 07/27/23 03:01:57.495 +STEP: deleting the ConfigMap by collection with a label selector 07/27/23 03:01:57.691 +STEP: listing all ConfigMaps in test namespace 07/27/23 03:01:57.739 +[AfterEach] [sig-node] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 22:27:29.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +Jul 27 03:01:57.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-node] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-node] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "projected-8849" for this suite. 06/12/23 22:27:29.574 +STEP: Destroying namespace "configmap-4676" for this suite. 
07/27/23 03:01:57.762 ------------------------------ -• [SLOW TEST] [6.293 seconds] -[sig-storage] Projected downwardAPI -test/e2e/common/storage/framework.go:23 - should provide container's cpu request [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:221 +• [0.448 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-node] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:27:23.308 - Jun 12 22:27:23.309: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:27:23.314 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:27:23.367 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:27:23.38 - [BeforeEach] [sig-storage] Projected downwardAPI + STEP: Creating a kubernetes client 07/27/23 03:01:57.342 + Jul 27 03:01:57.342: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 03:01:57.343 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:01:57.389 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:01:57.398 + [BeforeEach] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 - [It] should provide container's cpu request [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:221 - STEP: Creating a pod to test downward API volume plugin 06/12/23 22:27:23.4 - Jun 12 22:27:23.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e" in namespace "projected-8849" to be "Succeeded or Failed" - Jun 12 22:27:23.431: INFO: Pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.028593ms - Jun 12 22:27:25.454: INFO: Pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030688571s - Jun 12 22:27:27.443: INFO: Pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020033166s - Jun 12 22:27:29.447: INFO: Pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023708966s - STEP: Saw pod success 06/12/23 22:27:29.447 - Jun 12 22:27:29.447: INFO: Pod "downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e" satisfied condition "Succeeded or Failed" - Jun 12 22:27:29.455: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e container client-container: - STEP: delete the pod 06/12/23 22:27:29.506 - Jun 12 22:27:29.531: INFO: Waiting for pod downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e to disappear - Jun 12 22:27:29.557: INFO: Pod downwardapi-volume-3ab3b583-61ba-4472-982a-098433292a4e no longer exists - [AfterEach] [sig-storage] Projected downwardAPI + [It] should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 + STEP: creating a ConfigMap 07/27/23 03:01:57.407 + STEP: fetching the ConfigMap 07/27/23 03:01:57.444 + STEP: patching the ConfigMap 07/27/23 03:01:57.475 + STEP: listing all ConfigMaps in all namespaces with a label selector 07/27/23 03:01:57.495 + STEP: deleting the ConfigMap by collection with a label selector 07/27/23 03:01:57.691 + STEP: listing all ConfigMaps in test namespace 07/27/23 03:01:57.739 + [AfterEach] [sig-node] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 22:27:29.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + Jul 27 03:01:57.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-node] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-node] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "projected-8849" for this suite. 06/12/23 22:27:29.574 + STEP: Destroying namespace "configmap-4676" for this suite. 
07/27/23 03:01:57.762 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] - removes definition from spec when one version gets changed to not be served [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:442 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[sig-node] RuntimeClass + should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 +[BeforeEach] [sig-node] RuntimeClass set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:27:29.615 -Jun 12 22:27:29.615: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 22:27:29.617 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:27:29.67 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:27:29.682 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 03:01:57.793 +Jul 27 03:01:57.793: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename runtimeclass 07/27/23 03:01:57.794 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:01:57.841 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:01:57.851 +[BeforeEach] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:31 -[It] removes definition from spec when one version gets changed to not be served [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:442 -STEP: set up a multi version CRD 06/12/23 22:27:29.701 -Jun 12 22:27:29.703: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: mark a version not serverd 06/12/23 22:27:44.102 -STEP: check the unserved version gets removed 06/12/23 22:27:44.151 -STEP: check the other version is not changed 06/12/23 22:27:52.723 -[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 +Jul 27 03:01:57.903: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-6572 to be scheduled +Jul 27 03:01:57.912: INFO: 1 pods are not scheduled: [runtimeclass-6572/test-runtimeclass-runtimeclass-6572-preconfigured-handler-rzws8(c3b137ec-ae40-4c6b-8288-baf52ee0f686)] +[AfterEach] [sig-node] RuntimeClass test/e2e/framework/node/init/init.go:32 -Jun 12 22:28:08.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +Jul 27 03:01:59.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] RuntimeClass dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] RuntimeClass tear down framework | framework.go:193 -STEP: Destroying namespace 
"crd-publish-openapi-557" for this suite. 06/12/23 22:28:08.98 +STEP: Destroying namespace "runtimeclass-6572" for this suite. 07/27/23 03:01:59.955 ------------------------------ -• [SLOW TEST] [39.406 seconds] -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - removes definition from spec when one version gets changed to not be served [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:442 +• [2.185 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] RuntimeClass set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:27:29.615 - Jun 12 22:27:29.615: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 22:27:29.617 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:27:29.67 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:27:29.682 - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 03:01:57.793 + Jul 27 03:01:57.793: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename runtimeclass 07/27/23 03:01:57.794 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:01:57.841 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:01:57.851 + [BeforeEach] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:31 - [It] removes definition from spec when one version gets changed to not be served [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:442 - STEP: set up a multi version CRD 06/12/23 22:27:29.701 - Jun 12 22:27:29.703: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: mark a version not serverd 06/12/23 22:27:44.102 - STEP: check the unserved version gets removed 06/12/23 22:27:44.151 - STEP: check the other version is not changed 06/12/23 22:27:52.723 - [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 + Jul 27 03:01:57.903: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-6572 to be scheduled + Jul 27 03:01:57.912: INFO: 1 pods are not scheduled: [runtimeclass-6572/test-runtimeclass-runtimeclass-6572-preconfigured-handler-rzws8(c3b137ec-ae40-4c6b-8288-baf52ee0f686)] + [AfterEach] [sig-node] RuntimeClass test/e2e/framework/node/init/init.go:32 - Jun 12 22:28:08.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + Jul 27 03:01:59.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] RuntimeClass dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] RuntimeClass tear down framework | framework.go:193 - STEP: Destroying namespace "crd-publish-openapi-557" for this suite. 06/12/23 22:28:08.98 + STEP: Destroying namespace "runtimeclass-6572" for this suite. 07/27/23 03:01:59.955 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] Garbage collector - should delete pods created by rc when not orphaning [Conformance] - test/e2e/apimachinery/garbage_collector.go:312 -[BeforeEach] [sig-api-machinery] Garbage collector +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:28:09.027 -Jun 12 22:28:09.027: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename gc 06/12/23 22:28:09.029 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:28:09.086 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:28:09.121 -[BeforeEach] [sig-api-machinery] Garbage collector +STEP: Creating a kubernetes client 07/27/23 03:01:59.979 +Jul 27 03:01:59.979: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename emptydir-wrapper 07/27/23 03:01:59.98 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:00.025 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:00.033 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:31 -[It] should delete pods created by rc when not orphaning [Conformance] - test/e2e/apimachinery/garbage_collector.go:312 -STEP: create the rc 06/12/23 22:28:09.167 -STEP: delete the rc 06/12/23 22:28:14.227 -STEP: wait for all pods to be garbage collected 06/12/23 22:28:14.249 -STEP: Gathering metrics 06/12/23 22:28:19.268 -W0612 22:28:19.284080 23 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
-Jun 12 22:28:19.284: INFO: For apiserver_request_total: -For apiserver_request_latency_seconds: -For apiserver_init_events_total: -For garbage_collector_attempt_to_delete_queue_latency: -For garbage_collector_attempt_to_delete_work_duration: -For garbage_collector_attempt_to_orphan_queue_latency: -For garbage_collector_attempt_to_orphan_work_duration: -For garbage_collector_dirty_processing_latency_microseconds: -For garbage_collector_event_processing_latency_microseconds: -For garbage_collector_graph_changes_queue_latency: -For garbage_collector_graph_changes_work_duration: -For garbage_collector_orphan_processing_latency_microseconds: -For namespace_queue_latency: -For namespace_queue_latency_sum: -For namespace_queue_latency_count: -For namespace_retries: -For namespace_work_duration: -For namespace_work_duration_sum: -For namespace_work_duration_count: -For function_duration_seconds: -For errors_total: -For evicted_pods_total: - -[AfterEach] [sig-api-machinery] Garbage collector +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 +STEP: Creating 50 configmaps 07/27/23 03:02:00.046 +STEP: Creating RC which spawns configmap-volume pods 07/27/23 03:02:01.035 +Jul 27 03:02:01.079: INFO: Pod name wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e: Found 0 pods out of 5 +Jul 27 03:02:06.102: INFO: Pod name wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e: Found 5 pods out of 5 +STEP: Ensuring each pod is running 07/27/23 03:02:06.102 +Jul 27 03:02:06.102: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-bvrsf" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:06.114: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-bvrsf": Phase="Running", Reason="", readiness=true. Elapsed: 11.678939ms +Jul 27 03:02:06.114: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-bvrsf" satisfied condition "running" +Jul 27 03:02:06.114: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-cbnxt" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:06.122: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-cbnxt": Phase="Running", Reason="", readiness=true. Elapsed: 8.240712ms +Jul 27 03:02:06.122: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-cbnxt" satisfied condition "running" +Jul 27 03:02:06.122: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-dskfk" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:06.131: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-dskfk": Phase="Running", Reason="", readiness=true. Elapsed: 8.925268ms +Jul 27 03:02:06.131: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-dskfk" satisfied condition "running" +Jul 27 03:02:06.131: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-fc8ms" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:06.139: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-fc8ms": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.02853ms +Jul 27 03:02:06.139: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-fc8ms" satisfied condition "running" +Jul 27 03:02:06.139: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-t9rnm" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:06.148: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-t9rnm": Phase="Running", Reason="", readiness=true. Elapsed: 9.16535ms +Jul 27 03:02:06.148: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-t9rnm" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e in namespace emptydir-wrapper-9180, will wait for the garbage collector to delete the pods 07/27/23 03:02:06.148 +Jul 27 03:02:06.254: INFO: Deleting ReplicationController wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e took: 39.820595ms +Jul 27 03:02:06.355: INFO: Terminating ReplicationController wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e pods took: 101.186082ms +STEP: Creating RC which spawns configmap-volume pods 07/27/23 03:02:08.57 +Jul 27 03:02:08.605: INFO: Pod name wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6: Found 0 pods out of 5 +Jul 27 03:02:13.624: INFO: Pod name wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6: Found 5 pods out of 5 +STEP: Ensuring each pod is running 07/27/23 03:02:13.624 +Jul 27 03:02:13.624: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-54gr7" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:13.633: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-54gr7": Phase="Running", Reason="", readiness=true. Elapsed: 8.168342ms +Jul 27 03:02:13.633: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-54gr7" satisfied condition "running" +Jul 27 03:02:13.633: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-6rmb6" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:13.641: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-6rmb6": Phase="Running", Reason="", readiness=true. Elapsed: 8.152927ms +Jul 27 03:02:13.641: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-6rmb6" satisfied condition "running" +Jul 27 03:02:13.641: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-phg86" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:13.650: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-phg86": Phase="Running", Reason="", readiness=true. Elapsed: 8.913851ms +Jul 27 03:02:13.650: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-phg86" satisfied condition "running" +Jul 27 03:02:13.650: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-rx7hl" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:13.658: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-rx7hl": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.488354ms +Jul 27 03:02:13.658: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-rx7hl" satisfied condition "running" +Jul 27 03:02:13.658: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-z8x9f" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:13.667: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-z8x9f": Phase="Running", Reason="", readiness=true. Elapsed: 8.812477ms +Jul 27 03:02:13.667: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-z8x9f" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6 in namespace emptydir-wrapper-9180, will wait for the garbage collector to delete the pods 07/27/23 03:02:13.667 +Jul 27 03:02:13.760: INFO: Deleting ReplicationController wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6 took: 26.914351ms +Jul 27 03:02:13.860: INFO: Terminating ReplicationController wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6 pods took: 100.128659ms +STEP: Creating RC which spawns configmap-volume pods 07/27/23 03:02:16.573 +Jul 27 03:02:16.623: INFO: Pod name wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410: Found 0 pods out of 5 +Jul 27 03:02:21.656: INFO: Pod name wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410: Found 5 pods out of 5 +STEP: Ensuring each pod is running 07/27/23 03:02:21.656 +Jul 27 03:02:21.656: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-2j6k5" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:21.665: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-2j6k5": Phase="Running", Reason="", readiness=true. Elapsed: 8.615833ms +Jul 27 03:02:21.665: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-2j6k5" satisfied condition "running" +Jul 27 03:02:21.665: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-45ssb" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:21.674: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-45ssb": Phase="Running", Reason="", readiness=true. Elapsed: 9.178603ms +Jul 27 03:02:21.674: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-45ssb" satisfied condition "running" +Jul 27 03:02:21.674: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-4vvdv" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:21.682: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-4vvdv": Phase="Running", Reason="", readiness=true. Elapsed: 8.152637ms +Jul 27 03:02:21.682: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-4vvdv" satisfied condition "running" +Jul 27 03:02:21.682: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-9jgtr" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:21.690: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-9jgtr": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.13124ms +Jul 27 03:02:21.690: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-9jgtr" satisfied condition "running" +Jul 27 03:02:21.690: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-mtp69" in namespace "emptydir-wrapper-9180" to be "running" +Jul 27 03:02:21.699: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-mtp69": Phase="Running", Reason="", readiness=true. Elapsed: 8.885296ms +Jul 27 03:02:21.699: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-mtp69" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410 in namespace emptydir-wrapper-9180, will wait for the garbage collector to delete the pods 07/27/23 03:02:21.699 +Jul 27 03:02:21.794: INFO: Deleting ReplicationController wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410 took: 27.311806ms +Jul 27 03:02:21.895: INFO: Terminating ReplicationController wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410 pods took: 100.898945ms +STEP: Cleaning up the configMaps 07/27/23 03:02:24.696 +[AfterEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/node/init/init.go:32 -Jun 12 22:28:19.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +Jul 27 03:02:25.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] Garbage collector +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes tear down framework | framework.go:193 -STEP: Destroying namespace "gc-978" for this suite. 06/12/23 22:28:19.304 +STEP: Destroying namespace "emptydir-wrapper-9180" for this suite. 
07/27/23 03:02:25.854 ------------------------------ -• [SLOW TEST] [10.299 seconds] -[sig-api-machinery] Garbage collector -test/e2e/apimachinery/framework.go:23 - should delete pods created by rc when not orphaning [Conformance] - test/e2e/apimachinery/garbage_collector.go:312 +• [SLOW TEST] [25.897 seconds] +[sig-storage] EmptyDir wrapper volumes +test/e2e/storage/utils/framework.go:23 + should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] Garbage collector + [BeforeEach] [sig-storage] EmptyDir wrapper volumes set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:28:09.027 - Jun 12 22:28:09.027: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename gc 06/12/23 22:28:09.029 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:28:09.086 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:28:09.121 - [BeforeEach] [sig-api-machinery] Garbage collector + STEP: Creating a kubernetes client 07/27/23 03:01:59.979 + Jul 27 03:01:59.979: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename emptydir-wrapper 07/27/23 03:01:59.98 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:00.025 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:00.033 + [BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:31 - [It] should delete pods created by rc when not orphaning [Conformance] - test/e2e/apimachinery/garbage_collector.go:312 - STEP: create the rc 06/12/23 22:28:09.167 - STEP: delete the rc 06/12/23 22:28:14.227 - STEP: wait for all pods to be garbage collected 06/12/23 22:28:14.249 - STEP: Gathering metrics 06/12/23 22:28:19.268 - W0612 22:28:19.284080 23 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
- Jun 12 22:28:19.284: INFO: For apiserver_request_total: - For apiserver_request_latency_seconds: - For apiserver_init_events_total: - For garbage_collector_attempt_to_delete_queue_latency: - For garbage_collector_attempt_to_delete_work_duration: - For garbage_collector_attempt_to_orphan_queue_latency: - For garbage_collector_attempt_to_orphan_work_duration: - For garbage_collector_dirty_processing_latency_microseconds: - For garbage_collector_event_processing_latency_microseconds: - For garbage_collector_graph_changes_queue_latency: - For garbage_collector_graph_changes_work_duration: - For garbage_collector_orphan_processing_latency_microseconds: - For namespace_queue_latency: - For namespace_queue_latency_sum: - For namespace_queue_latency_count: - For namespace_retries: - For namespace_work_duration: - For namespace_work_duration_sum: - For namespace_work_duration_count: - For function_duration_seconds: - For errors_total: - For evicted_pods_total: - - [AfterEach] [sig-api-machinery] Garbage collector + [It] should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 + STEP: Creating 50 configmaps 07/27/23 03:02:00.046 + STEP: Creating RC which spawns configmap-volume pods 07/27/23 03:02:01.035 + Jul 27 03:02:01.079: INFO: Pod name wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e: Found 0 pods out of 5 + Jul 27 03:02:06.102: INFO: Pod name wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e: Found 5 pods out of 5 + STEP: Ensuring each pod is running 07/27/23 03:02:06.102 + Jul 27 03:02:06.102: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-bvrsf" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:06.114: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-bvrsf": Phase="Running", Reason="", readiness=true. Elapsed: 11.678939ms + Jul 27 03:02:06.114: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-bvrsf" satisfied condition "running" + Jul 27 03:02:06.114: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-cbnxt" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:06.122: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-cbnxt": Phase="Running", Reason="", readiness=true. Elapsed: 8.240712ms + Jul 27 03:02:06.122: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-cbnxt" satisfied condition "running" + Jul 27 03:02:06.122: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-dskfk" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:06.131: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-dskfk": Phase="Running", Reason="", readiness=true. Elapsed: 8.925268ms + Jul 27 03:02:06.131: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-dskfk" satisfied condition "running" + Jul 27 03:02:06.131: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-fc8ms" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:06.139: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-fc8ms": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.02853ms + Jul 27 03:02:06.139: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-fc8ms" satisfied condition "running" + Jul 27 03:02:06.139: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-t9rnm" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:06.148: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-t9rnm": Phase="Running", Reason="", readiness=true. Elapsed: 9.16535ms + Jul 27 03:02:06.148: INFO: Pod "wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e-t9rnm" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e in namespace emptydir-wrapper-9180, will wait for the garbage collector to delete the pods 07/27/23 03:02:06.148 + Jul 27 03:02:06.254: INFO: Deleting ReplicationController wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e took: 39.820595ms + Jul 27 03:02:06.355: INFO: Terminating ReplicationController wrapped-volume-race-d32472f9-e735-40df-b56d-719fc9d4794e pods took: 101.186082ms + STEP: Creating RC which spawns configmap-volume pods 07/27/23 03:02:08.57 + Jul 27 03:02:08.605: INFO: Pod name wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6: Found 0 pods out of 5 + Jul 27 03:02:13.624: INFO: Pod name wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6: Found 5 pods out of 5 + STEP: Ensuring each pod is running 07/27/23 03:02:13.624 + Jul 27 03:02:13.624: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-54gr7" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:13.633: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-54gr7": Phase="Running", Reason="", readiness=true. Elapsed: 8.168342ms + Jul 27 03:02:13.633: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-54gr7" satisfied condition "running" + Jul 27 03:02:13.633: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-6rmb6" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:13.641: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-6rmb6": Phase="Running", Reason="", readiness=true. Elapsed: 8.152927ms + Jul 27 03:02:13.641: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-6rmb6" satisfied condition "running" + Jul 27 03:02:13.641: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-phg86" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:13.650: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-phg86": Phase="Running", Reason="", readiness=true. Elapsed: 8.913851ms + Jul 27 03:02:13.650: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-phg86" satisfied condition "running" + Jul 27 03:02:13.650: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-rx7hl" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:13.658: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-rx7hl": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.488354ms + Jul 27 03:02:13.658: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-rx7hl" satisfied condition "running" + Jul 27 03:02:13.658: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-z8x9f" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:13.667: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-z8x9f": Phase="Running", Reason="", readiness=true. Elapsed: 8.812477ms + Jul 27 03:02:13.667: INFO: Pod "wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6-z8x9f" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6 in namespace emptydir-wrapper-9180, will wait for the garbage collector to delete the pods 07/27/23 03:02:13.667 + Jul 27 03:02:13.760: INFO: Deleting ReplicationController wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6 took: 26.914351ms + Jul 27 03:02:13.860: INFO: Terminating ReplicationController wrapped-volume-race-6805578f-eb3c-4275-871a-f60a636333a6 pods took: 100.128659ms + STEP: Creating RC which spawns configmap-volume pods 07/27/23 03:02:16.573 + Jul 27 03:02:16.623: INFO: Pod name wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410: Found 0 pods out of 5 + Jul 27 03:02:21.656: INFO: Pod name wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410: Found 5 pods out of 5 + STEP: Ensuring each pod is running 07/27/23 03:02:21.656 + Jul 27 03:02:21.656: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-2j6k5" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:21.665: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-2j6k5": Phase="Running", Reason="", readiness=true. Elapsed: 8.615833ms + Jul 27 03:02:21.665: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-2j6k5" satisfied condition "running" + Jul 27 03:02:21.665: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-45ssb" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:21.674: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-45ssb": Phase="Running", Reason="", readiness=true. Elapsed: 9.178603ms + Jul 27 03:02:21.674: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-45ssb" satisfied condition "running" + Jul 27 03:02:21.674: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-4vvdv" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:21.682: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-4vvdv": Phase="Running", Reason="", readiness=true. Elapsed: 8.152637ms + Jul 27 03:02:21.682: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-4vvdv" satisfied condition "running" + Jul 27 03:02:21.682: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-9jgtr" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:21.690: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-9jgtr": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.13124ms + Jul 27 03:02:21.690: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-9jgtr" satisfied condition "running" + Jul 27 03:02:21.690: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-mtp69" in namespace "emptydir-wrapper-9180" to be "running" + Jul 27 03:02:21.699: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-mtp69": Phase="Running", Reason="", readiness=true. Elapsed: 8.885296ms + Jul 27 03:02:21.699: INFO: Pod "wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410-mtp69" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410 in namespace emptydir-wrapper-9180, will wait for the garbage collector to delete the pods 07/27/23 03:02:21.699 + Jul 27 03:02:21.794: INFO: Deleting ReplicationController wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410 took: 27.311806ms + Jul 27 03:02:21.895: INFO: Terminating ReplicationController wrapped-volume-race-eab6830b-7b63-4c60-b1ee-6c9090cc7410 pods took: 100.898945ms + STEP: Cleaning up the configMaps 07/27/23 03:02:24.696 + [AfterEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/node/init/init.go:32 - Jun 12 22:28:19.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + Jul 27 03:02:25.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes tear down framework | framework.go:193 - STEP: Destroying namespace "gc-978" for this suite. 06/12/23 22:28:19.304 + STEP: Destroying namespace "emptydir-wrapper-9180" for this suite. 
07/27/23 03:02:25.854 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSS ------------------------------ -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] - works for CRD with validation schema [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:69 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 +[BeforeEach] [sig-node] Pods set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:28:19.351 -Jun 12 22:28:19.351: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 22:28:19.352 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:28:19.464 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:28:19.475 -[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +STEP: Creating a kubernetes client 07/27/23 03:02:25.876 +Jul 27 03:02:25.876: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename pods 07/27/23 03:02:25.877 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:25.949 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:25.958 +[BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 -[It] works for CRD with validation schema [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:69 -Jun 12 22:28:19.498: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: kubectl validation (kubectl create and apply) allows request with known and required properties 06/12/23 22:28:29.155 -Jun 12 22:28:29.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 create -f -' -Jun 12 22:28:34.246: INFO: stderr: "" -Jun 12 22:28:34.246: INFO: stdout: "e2e-test-crd-publish-openapi-9251-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" -Jun 12 22:28:34.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 delete e2e-test-crd-publish-openapi-9251-crds test-foo' -Jun 12 22:28:34.640: INFO: stderr: "" -Jun 12 22:28:34.640: INFO: stdout: "e2e-test-crd-publish-openapi-9251-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" -Jun 12 22:28:34.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 apply -f -' -Jun 12 22:28:38.649: INFO: stderr: "" -Jun 12 22:28:38.649: INFO: stdout: "e2e-test-crd-publish-openapi-9251-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" -Jun 12 22:28:38.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 delete e2e-test-crd-publish-openapi-9251-crds test-foo' -Jun 12 22:28:38.904: INFO: stderr: "" -Jun 12 22:28:38.904: INFO: stdout: "e2e-test-crd-publish-openapi-9251-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" -STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values 
06/12/23 22:28:38.904 -Jun 12 22:28:38.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 create -f -' -Jun 12 22:28:40.286: INFO: rc: 1 -STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema 06/12/23 22:28:40.286 -Jun 12 22:28:40.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 create -f -' -Jun 12 22:28:41.365: INFO: rc: 1 -Jun 12 22:28:41.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 apply -f -' -Jun 12 22:28:42.841: INFO: rc: 1 -STEP: kubectl validation (kubectl create and apply) rejects request without required properties 06/12/23 22:28:42.841 -Jun 12 22:28:42.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 create -f -' -Jun 12 22:28:45.275: INFO: rc: 1 -Jun 12 22:28:45.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 apply -f -' -Jun 12 22:28:49.032: INFO: rc: 1 -STEP: kubectl explain works to explain CR properties 06/12/23 22:28:49.032 -Jun 12 22:28:49.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 explain e2e-test-crd-publish-openapi-9251-crds' -Jun 12 22:28:50.295: INFO: stderr: "" -Jun 12 22:28:50.295: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9251-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" -STEP: kubectl explain works to explain CR properties recursively 06/12/23 22:28:50.296 -Jun 12 22:28:50.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 explain e2e-test-crd-publish-openapi-9251-crds.metadata' -Jun 12 22:28:51.346: INFO: stderr: "" -Jun 12 22:28:51.346: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9251-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" -Jun 12 22:28:51.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 explain e2e-test-crd-publish-openapi-9251-crds.spec' -Jun 12 22:28:52.528: INFO: stderr: "" -Jun 12 22:28:52.528: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9251-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" -Jun 12 22:28:52.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 explain e2e-test-crd-publish-openapi-9251-crds.spec.bars' -Jun 12 22:28:54.570: INFO: stderr: "" -Jun 12 22:28:54.570: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9251-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" -STEP: kubectl explain works to return error when explain is called on property that doesn't exist 06/12/23 22:28:54.57 -Jun 12 22:28:54.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 explain e2e-test-crd-publish-openapi-9251-crds.spec.bars2' -Jun 12 22:28:56.870: INFO: rc: 1 -[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 +STEP: creating a Pod with a static label 07/27/23 03:02:26.008 +STEP: watching for Pod to be ready 07/27/23 03:02:26.041 +Jul 27 03:02:26.046: INFO: observed Pod pod-test in namespace pods-6667 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Jul 27 03:02:26.054: INFO: observed Pod pod-test in namespace pods-6667 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC }] +Jul 27 03:02:26.081: INFO: observed Pod pod-test in namespace pods-6667 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC }] +Jul 27 03:02:26.741: INFO: observed Pod pod-test in namespace pods-6667 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC }] +Jul 27 03:02:26.801: INFO: observed Pod pod-test in namespace pods-6667 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC }] +Jul 27 03:02:27.461: INFO: Found Pod pod-test in namespace pods-6667 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data 07/27/23 03:02:27.471 +STEP: getting the Pod and ensuring that it's patched 07/27/23 03:02:27.509 +STEP: replacing the Pod's status Ready condition to False 07/27/23 03:02:27.517 +STEP: check the Pod again to ensure its Ready conditions are False 07/27/23 03:02:27.545 +STEP: deleting the Pod via a Collection with a LabelSelector 07/27/23 03:02:27.545 +STEP: watching for the Pod to be deleted 07/27/23 03:02:27.567 +Jul 27 03:02:27.574: INFO: observed event type MODIFIED +Jul 27 03:02:29.471: INFO: observed event type MODIFIED +Jul 27 03:02:29.722: INFO: observed event type MODIFIED +Jul 27 03:02:30.473: INFO: observed event type MODIFIED +Jul 27 03:02:30.484: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 -Jun 12 22:29:07.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +Jul 27 03:02:30.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +[DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 -STEP: Destroying namespace "crd-publish-openapi-7120" for this suite. 06/12/23 22:29:07.154 +STEP: Destroying namespace "pods-6667" for this suite. 
07/27/23 03:02:30.519 ------------------------------ -• [SLOW TEST] [47.820 seconds] -[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] -test/e2e/apimachinery/framework.go:23 - works for CRD with validation schema [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:69 +• [4.665 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [BeforeEach] [sig-node] Pods set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:28:19.351 - Jun 12 22:28:19.351: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename crd-publish-openapi 06/12/23 22:28:19.352 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:28:19.464 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:28:19.475 - [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] - test/e2e/framework/metrics/init/init.go:31 - [It] works for CRD with validation schema [Conformance] - test/e2e/apimachinery/crd_publish_openapi.go:69 - Jun 12 22:28:19.498: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: kubectl validation (kubectl create and apply) allows request with known and required properties 06/12/23 22:28:29.155 - Jun 12 22:28:29.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 create -f -' - Jun 12 22:28:34.246: INFO: stderr: "" - Jun 12 22:28:34.246: INFO: stdout: "e2e-test-crd-publish-openapi-9251-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" - Jun 12 22:28:34.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 delete e2e-test-crd-publish-openapi-9251-crds test-foo' - Jun 12 22:28:34.640: INFO: stderr: "" - Jun 12 22:28:34.640: INFO: stdout: "e2e-test-crd-publish-openapi-9251-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" - Jun 12 22:28:34.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 apply -f -' - Jun 12 22:28:38.649: INFO: stderr: "" - Jun 12 22:28:38.649: INFO: stdout: "e2e-test-crd-publish-openapi-9251-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" - Jun 12 22:28:38.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 delete e2e-test-crd-publish-openapi-9251-crds test-foo' - Jun 12 22:28:38.904: INFO: stderr: "" - Jun 12 22:28:38.904: INFO: stdout: "e2e-test-crd-publish-openapi-9251-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" - STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values 06/12/23 22:28:38.904 - Jun 12 22:28:38.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 create -f -' - Jun 12 22:28:40.286: INFO: rc: 1 - STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the 
schema 06/12/23 22:28:40.286 - Jun 12 22:28:40.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 create -f -' - Jun 12 22:28:41.365: INFO: rc: 1 - Jun 12 22:28:41.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 apply -f -' - Jun 12 22:28:42.841: INFO: rc: 1 - STEP: kubectl validation (kubectl create and apply) rejects request without required properties 06/12/23 22:28:42.841 - Jun 12 22:28:42.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 create -f -' - Jun 12 22:28:45.275: INFO: rc: 1 - Jun 12 22:28:45.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 --namespace=crd-publish-openapi-7120 apply -f -' - Jun 12 22:28:49.032: INFO: rc: 1 - STEP: kubectl explain works to explain CR properties 06/12/23 22:28:49.032 - Jun 12 22:28:49.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 explain e2e-test-crd-publish-openapi-9251-crds' - Jun 12 22:28:50.295: INFO: stderr: "" - Jun 12 22:28:50.295: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9251-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" - STEP: kubectl explain works to explain CR properties recursively 06/12/23 22:28:50.296 - Jun 12 22:28:50.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 explain e2e-test-crd-publish-openapi-9251-crds.metadata' - Jun 12 22:28:51.346: INFO: stderr: "" - Jun 12 22:28:51.346: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9251-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. 
It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" - Jun 12 22:28:51.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 explain e2e-test-crd-publish-openapi-9251-crds.spec' - Jun 12 22:28:52.528: INFO: stderr: "" - Jun 12 22:28:52.528: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9251-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" - Jun 12 22:28:52.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 explain e2e-test-crd-publish-openapi-9251-crds.spec.bars' - Jun 12 22:28:54.570: INFO: stderr: "" - Jun 12 22:28:54.570: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9251-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" - STEP: kubectl explain works to return error when explain is called on property that doesn't exist 06/12/23 22:28:54.57 - Jun 12 22:28:54.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1249129573 --namespace=crd-publish-openapi-7120 explain e2e-test-crd-publish-openapi-9251-crds.spec.bars2' - Jun 12 22:28:56.870: INFO: rc: 1 - [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + STEP: Creating a kubernetes client 07/27/23 03:02:25.876 + Jul 27 03:02:25.876: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename pods 07/27/23 03:02:25.877 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:25.949 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:25.958 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 + STEP: creating a Pod with a static label 07/27/23 03:02:26.008 + STEP: watching for Pod to be ready 07/27/23 03:02:26.041 + Jul 27 03:02:26.046: INFO: observed Pod pod-test in namespace pods-6667 in phase Pending with labels: map[test-pod-static:true] & conditions [] + Jul 27 03:02:26.054: INFO: observed Pod pod-test in namespace pods-6667 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC }] + Jul 27 03:02:26.081: INFO: observed Pod pod-test in namespace pods-6667 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC }] + Jul 27 03:02:26.741: INFO: observed Pod pod-test in namespace pods-6667 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 
03:02:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC }] + Jul 27 03:02:26.801: INFO: observed Pod pod-test in namespace pods-6667 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC }] + Jul 27 03:02:27.461: INFO: Found Pod pod-test in namespace pods-6667 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-07-27 03:02:26 +0000 UTC }] + STEP: patching the Pod with a new Label and updated data 07/27/23 03:02:27.471 + STEP: getting the Pod and ensuring that it's patched 07/27/23 03:02:27.509 + STEP: replacing the Pod's status Ready condition to False 07/27/23 03:02:27.517 + STEP: check the Pod again to ensure its Ready conditions are False 07/27/23 03:02:27.545 + STEP: deleting the Pod via a Collection with a LabelSelector 07/27/23 03:02:27.545 + STEP: watching for the Pod to be deleted 07/27/23 03:02:27.567 + Jul 27 03:02:27.574: INFO: observed event type MODIFIED + Jul 27 03:02:29.471: INFO: observed event type MODIFIED + Jul 27 03:02:29.722: INFO: observed event type MODIFIED + Jul 27 03:02:30.473: INFO: observed event type MODIFIED + Jul 27 03:02:30.484: INFO: observed event type MODIFIED + [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 - Jun 12 22:29:07.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + Jul 27 03:02:30.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 - STEP: Destroying namespace "crd-publish-openapi-7120" for this suite. 06/12/23 22:29:07.154 + STEP: Destroying namespace "pods-6667" for this suite. 
07/27/23 03:02:30.519 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSS ------------------------------ -[sig-apps] Job - should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] - test/e2e/apps/job.go:426 -[BeforeEach] [sig-apps] Job +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:89 +[BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:29:07.202 -Jun 12 22:29:07.202: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename job 06/12/23 22:29:07.209 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:29:07.294 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:29:07.33 -[BeforeEach] [sig-apps] Job +STEP: Creating a kubernetes client 07/27/23 03:02:30.541 +Jul 27 03:02:30.541: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 03:02:30.542 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:30.589 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:30.6 +[BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 -[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] - test/e2e/apps/job.go:426 -STEP: Creating a job 06/12/23 22:29:07.349 -STEP: Ensuring job reaches completions 06/12/23 22:29:07.383 -[AfterEach] [sig-apps] Job +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:89 +STEP: Creating configMap with name projected-configmap-test-volume-map-9e174fdd-44ad-424f-8b9a-966d85069fd1 07/27/23 03:02:30.615 +STEP: Creating a pod to test consume configMaps 07/27/23 03:02:30.638 +Jul 27 03:02:30.672: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7" in namespace "projected-764" to be "Succeeded or Failed" +Jul 27 03:02:30.680: INFO: Pod "pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.413848ms +Jul 27 03:02:32.716: INFO: Pod "pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04469909s +Jul 27 03:02:34.693: INFO: Pod "pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021465917s +STEP: Saw pod success 07/27/23 03:02:34.693 +Jul 27 03:02:34.693: INFO: Pod "pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7" satisfied condition "Succeeded or Failed" +Jul 27 03:02:34.704: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7 container agnhost-container: +STEP: delete the pod 07/27/23 03:02:34.724 +Jul 27 03:02:34.744: INFO: Waiting for pod pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7 to disappear +Jul 27 03:02:34.752: INFO: Pod pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7 no longer exists +[AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 -Jun 12 22:29:25.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] Job +Jul 27 03:02:34.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] Job +[DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] Job +[DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 -STEP: Destroying namespace "job-5778" for this suite. 06/12/23 22:29:25.41 +STEP: Destroying namespace "projected-764" for this suite. 07/27/23 03:02:34.773 ------------------------------ -• [SLOW TEST] [18.229 seconds] -[sig-apps] Job -test/e2e/apps/framework.go:23 - should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] - test/e2e/apps/job.go:426 +• [4.254 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:89 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] Job + [BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:29:07.202 - Jun 12 22:29:07.202: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename job 06/12/23 22:29:07.209 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:29:07.294 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:29:07.33 - [BeforeEach] [sig-apps] Job + STEP: Creating a kubernetes client 07/27/23 03:02:30.541 + Jul 27 03:02:30.541: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 03:02:30.542 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:30.589 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:30.6 + [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 - [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] - test/e2e/apps/job.go:426 - STEP: Creating a job 06/12/23 22:29:07.349 - STEP: Ensuring job reaches completions 06/12/23 22:29:07.383 - [AfterEach] [sig-apps] Job + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:89 + STEP: Creating configMap with name projected-configmap-test-volume-map-9e174fdd-44ad-424f-8b9a-966d85069fd1 07/27/23 03:02:30.615 + STEP: 
Creating a pod to test consume configMaps 07/27/23 03:02:30.638 + Jul 27 03:02:30.672: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7" in namespace "projected-764" to be "Succeeded or Failed" + Jul 27 03:02:30.680: INFO: Pod "pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.413848ms + Jul 27 03:02:32.716: INFO: Pod "pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04469909s + Jul 27 03:02:34.693: INFO: Pod "pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021465917s + STEP: Saw pod success 07/27/23 03:02:34.693 + Jul 27 03:02:34.693: INFO: Pod "pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7" satisfied condition "Succeeded or Failed" + Jul 27 03:02:34.704: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7 container agnhost-container: + STEP: delete the pod 07/27/23 03:02:34.724 + Jul 27 03:02:34.744: INFO: Waiting for pod pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7 to disappear + Jul 27 03:02:34.752: INFO: Pod pod-projected-configmaps-bdfcadea-4881-40c5-9a4d-89a3ce23b6c7 no longer exists + [AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 - Jun 12 22:29:25.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] Job + Jul 27 03:02:34.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] Job + [DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] Job + [DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 - STEP: Destroying namespace "job-5778" for this suite. 06/12/23 22:29:25.41 + STEP: Destroying namespace "projected-764" for this suite. 
07/27/23 03:02:34.773 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Projected downwardAPI - should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:261 -[BeforeEach] [sig-storage] Projected downwardAPI +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 +[BeforeEach] [sig-network] EndpointSlice set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:29:25.431 -Jun 12 22:29:25.432: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:29:25.433 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:29:25.477 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:29:25.509 -[BeforeEach] [sig-storage] Projected downwardAPI +STEP: Creating a kubernetes client 07/27/23 03:02:34.797 +Jul 27 03:02:34.797: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename endpointslice 07/27/23 03:02:34.797 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:34.865 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:34.892 +[BeforeEach] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 -[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:261 -STEP: Creating a pod to test downward API volume plugin 06/12/23 22:29:25.531 -Jun 12 22:29:25.587: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c" in namespace "projected-6223" to be "Succeeded or Failed" -Jun 12 22:29:25.602: INFO: Pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.047955ms -Jun 12 22:29:27.616: INFO: Pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02852643s -Jun 12 22:29:29.633: INFO: Pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045232628s -Jun 12 22:29:31.613: INFO: Pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025031772s -STEP: Saw pod success 06/12/23 22:29:31.613 -Jun 12 22:29:31.613: INFO: Pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c" satisfied condition "Succeeded or Failed" -Jun 12 22:29:31.624: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c container client-container: -STEP: delete the pod 06/12/23 22:29:31.695 -Jun 12 22:29:31.728: INFO: Waiting for pod downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c to disappear -Jun 12 22:29:31.737: INFO: Pod downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c no longer exists -[AfterEach] [sig-storage] Projected downwardAPI +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 +Jul 27 03:02:34.958: INFO: Endpoints addresses: [172.20.0.1] , ports: [2040] +Jul 27 03:02:34.958: INFO: EndpointSlices addresses: [172.20.0.1] , ports: [2040] +[AfterEach] [sig-network] EndpointSlice test/e2e/framework/node/init/init.go:32 -Jun 12 22:29:31.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +Jul 27 03:02:34.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-network] EndpointSlice dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected downwardAPI +[DeferCleanup (Each)] [sig-network] EndpointSlice tear down framework | framework.go:193 -STEP: Destroying namespace "projected-6223" for this suite. 06/12/23 22:29:31.761 +STEP: Destroying namespace "endpointslice-8210" for this suite. 
07/27/23 03:02:34.973 ------------------------------ -• [SLOW TEST] [6.350 seconds] -[sig-storage] Projected downwardAPI -test/e2e/common/storage/framework.go:23 - should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:261 +• [0.199 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-network] EndpointSlice set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:29:25.431 - Jun 12 22:29:25.432: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:29:25.433 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:29:25.477 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:29:25.509 - [BeforeEach] [sig-storage] Projected downwardAPI + STEP: Creating a kubernetes client 07/27/23 03:02:34.797 + Jul 27 03:02:34.797: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename endpointslice 07/27/23 03:02:34.797 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:34.865 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:34.892 + [BeforeEach] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-storage] Projected downwardAPI - test/e2e/common/storage/projected_downwardapi.go:44 - [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] - test/e2e/common/storage/projected_downwardapi.go:261 - STEP: Creating a pod to test downward API volume plugin 06/12/23 22:29:25.531 - Jun 12 22:29:25.587: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c" in namespace "projected-6223" to be "Succeeded or Failed" - Jun 12 22:29:25.602: INFO: Pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.047955ms - Jun 12 22:29:27.616: INFO: Pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02852643s - Jun 12 22:29:29.633: INFO: Pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045232628s - Jun 12 22:29:31.613: INFO: Pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025031772s - STEP: Saw pod success 06/12/23 22:29:31.613 - Jun 12 22:29:31.613: INFO: Pod "downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c" satisfied condition "Succeeded or Failed" - Jun 12 22:29:31.624: INFO: Trying to get logs from node 10.138.75.70 pod downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c container client-container: - STEP: delete the pod 06/12/23 22:29:31.695 - Jun 12 22:29:31.728: INFO: Waiting for pod downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c to disappear - Jun 12 22:29:31.737: INFO: Pod downwardapi-volume-4b2b8aea-46e9-4d85-9028-d4da02527b6c no longer exists - [AfterEach] [sig-storage] Projected downwardAPI + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 + Jul 27 03:02:34.958: INFO: Endpoints addresses: [172.20.0.1] , ports: [2040] + Jul 27 03:02:34.958: INFO: EndpointSlices addresses: [172.20.0.1] , ports: [2040] + [AfterEach] [sig-network] EndpointSlice test/e2e/framework/node/init/init.go:32 - Jun 12 22:29:31.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + Jul 27 03:02:34.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-network] EndpointSlice dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + [DeferCleanup (Each)] [sig-network] EndpointSlice tear down framework | framework.go:193 - STEP: Destroying namespace "projected-6223" for this suite. 06/12/23 22:29:31.761 + STEP: Destroying namespace "endpointslice-8210" for this suite. 
07/27/23 03:02:34.973 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] CronJob - should replace jobs when ReplaceConcurrent [Conformance] - test/e2e/apps/cronjob.go:160 -[BeforeEach] [sig-apps] CronJob +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:29:31.801 -Jun 12 22:29:31.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename cronjob 06/12/23 22:29:31.807 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:29:31.855 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:29:31.888 -[BeforeEach] [sig-apps] CronJob +STEP: Creating a kubernetes client 07/27/23 03:02:34.996 +Jul 27 03:02:34.996: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename custom-resource-definition 07/27/23 03:02:34.999 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:35.047 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:35.062 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should replace jobs when ReplaceConcurrent [Conformance] - test/e2e/apps/cronjob.go:160 -STEP: Creating a ReplaceConcurrent cronjob 06/12/23 22:29:31.899 -STEP: Ensuring a job is scheduled 06/12/23 22:29:31.917 -STEP: Ensuring exactly one is scheduled 06/12/23 22:30:01.93 -STEP: Ensuring exactly one running job exists by listing jobs explicitly 06/12/23 22:30:01.94 -STEP: Ensuring the job is replaced with a new one 06/12/23 22:30:01.953 -STEP: Removing cronjob 06/12/23 22:31:01.966 -[AfterEach] [sig-apps] CronJob +[It] listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 +Jul 27 03:02:35.076: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:31:01.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] CronJob +Jul 27 03:02:43.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] CronJob +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] CronJob +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "cronjob-4423" for this suite. 06/12/23 22:31:02.003 +STEP: Destroying namespace "custom-resource-definition-9143" for this suite. 
07/27/23 03:02:43.287 ------------------------------ -• [SLOW TEST] [90.219 seconds] -[sig-apps] CronJob -test/e2e/apps/framework.go:23 - should replace jobs when ReplaceConcurrent [Conformance] - test/e2e/apps/cronjob.go:160 +• [SLOW TEST] [8.326 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] CronJob + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:29:31.801 - Jun 12 22:29:31.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename cronjob 06/12/23 22:29:31.807 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:29:31.855 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:29:31.888 - [BeforeEach] [sig-apps] CronJob + STEP: Creating a kubernetes client 07/27/23 03:02:34.996 + Jul 27 03:02:34.996: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename custom-resource-definition 07/27/23 03:02:34.999 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:35.047 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:35.062 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should replace jobs when ReplaceConcurrent [Conformance] - test/e2e/apps/cronjob.go:160 - STEP: Creating a ReplaceConcurrent cronjob 06/12/23 22:29:31.899 - STEP: Ensuring a job is scheduled 06/12/23 22:29:31.917 - STEP: Ensuring exactly one is scheduled 06/12/23 22:30:01.93 - STEP: Ensuring exactly one running job exists by listing jobs explicitly 06/12/23 22:30:01.94 - STEP: Ensuring the job is replaced with a new one 06/12/23 22:30:01.953 - STEP: Removing cronjob 06/12/23 22:31:01.966 - [AfterEach] [sig-apps] CronJob + [It] listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 + Jul 27 03:02:35.076: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:31:01.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] CronJob + Jul 27 03:02:43.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] CronJob + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] CronJob + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "cronjob-4423" for this suite. 06/12/23 22:31:02.003 + STEP: Destroying namespace "custom-resource-definition-9143" for this suite. 
07/27/23 03:02:43.287 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSSSSSSS ------------------------------ -[sig-node] Containers - should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:73 -[BeforeEach] [sig-node] Containers +[sig-apps] ReplicationController + should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 +[BeforeEach] [sig-apps] ReplicationController set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:31:02.026 -Jun 12 22:31:02.026: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename containers 06/12/23 22:31:02.029 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:31:02.108 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:31:02.128 -[BeforeEach] [sig-node] Containers +STEP: Creating a kubernetes client 07/27/23 03:02:43.323 +Jul 27 03:02:43.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename replication-controller 07/27/23 03:02:43.324 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:43.367 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:43.375 +[BeforeEach] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:31 -[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:73 -STEP: Creating a pod to test override command 06/12/23 22:31:02.161 -Jun 12 22:31:02.187: INFO: Waiting up to 5m0s for pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9" in namespace "containers-9204" to be "Succeeded or Failed" -Jun 12 22:31:02.203: INFO: Pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.685532ms -Jun 12 22:31:04.216: INFO: Pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9": Phase="Running", Reason="", readiness=true. Elapsed: 2.028561043s -Jun 12 22:31:06.244: INFO: Pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9": Phase="Running", Reason="", readiness=false. Elapsed: 4.056786833s -Jun 12 22:31:08.241: INFO: Pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.053635781s -STEP: Saw pod success 06/12/23 22:31:08.241 -Jun 12 22:31:08.241: INFO: Pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9" satisfied condition "Succeeded or Failed" -Jun 12 22:31:08.270: INFO: Trying to get logs from node 10.138.75.70 pod client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9 container agnhost-container: -STEP: delete the pod 06/12/23 22:31:08.359 -Jun 12 22:31:08.387: INFO: Waiting for pod client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9 to disappear -Jun 12 22:31:08.395: INFO: Pod client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9 no longer exists -[AfterEach] [sig-node] Containers +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 +STEP: Creating ReplicationController "e2e-rc-prcp9" 07/27/23 03:02:43.385 +W0727 03:02:43.406094 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 03:02:43.406: INFO: Get Replication Controller "e2e-rc-prcp9" to confirm replicas +Jul 27 03:02:44.420: INFO: Get Replication Controller "e2e-rc-prcp9" to confirm replicas +Jul 27 03:02:44.433: INFO: Found 1 replicas for "e2e-rc-prcp9" replication controller +STEP: Getting scale subresource for ReplicationController "e2e-rc-prcp9" 07/27/23 03:02:44.433 +STEP: Updating a scale subresource 07/27/23 03:02:44.444 +STEP: Verifying replicas where modified for replication controller "e2e-rc-prcp9" 07/27/23 03:02:44.462 +Jul 27 03:02:44.462: INFO: Get Replication Controller "e2e-rc-prcp9" to confirm replicas +Jul 27 03:02:45.483: INFO: Get Replication Controller "e2e-rc-prcp9" to confirm replicas +Jul 27 03:02:45.541: INFO: Found 2 replicas for "e2e-rc-prcp9" replication controller +[AfterEach] [sig-apps] ReplicationController test/e2e/framework/node/init/init.go:32 -Jun 12 22:31:08.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Containers +Jul 27 03:02:45.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Containers +[DeferCleanup (Each)] [sig-apps] ReplicationController dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Containers +[DeferCleanup (Each)] [sig-apps] ReplicationController tear down framework | framework.go:193 -STEP: Destroying namespace "containers-9204" for this suite. 06/12/23 22:31:08.417 +STEP: Destroying namespace "replication-controller-9440" for this suite. 
07/27/23 03:02:45.555 ------------------------------ -• [SLOW TEST] [6.420 seconds] -[sig-node] Containers -test/e2e/common/node/framework.go:23 - should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:73 +• [2.256 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Containers + [BeforeEach] [sig-apps] ReplicationController set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:31:02.026 - Jun 12 22:31:02.026: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename containers 06/12/23 22:31:02.029 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:31:02.108 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:31:02.128 - [BeforeEach] [sig-node] Containers + STEP: Creating a kubernetes client 07/27/23 03:02:43.323 + Jul 27 03:02:43.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename replication-controller 07/27/23 03:02:43.324 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:43.367 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:43.375 + [BeforeEach] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:31 - [It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] - test/e2e/common/node/containers.go:73 - STEP: Creating a pod to test override command 06/12/23 22:31:02.161 - Jun 12 22:31:02.187: INFO: Waiting up to 5m0s for pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9" in namespace "containers-9204" to be "Succeeded or Failed" - Jun 12 22:31:02.203: INFO: Pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.685532ms - Jun 12 22:31:04.216: INFO: Pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9": Phase="Running", Reason="", readiness=true. Elapsed: 2.028561043s - Jun 12 22:31:06.244: INFO: Pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9": Phase="Running", Reason="", readiness=false. Elapsed: 4.056786833s - Jun 12 22:31:08.241: INFO: Pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.053635781s - STEP: Saw pod success 06/12/23 22:31:08.241 - Jun 12 22:31:08.241: INFO: Pod "client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9" satisfied condition "Succeeded or Failed" - Jun 12 22:31:08.270: INFO: Trying to get logs from node 10.138.75.70 pod client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9 container agnhost-container: - STEP: delete the pod 06/12/23 22:31:08.359 - Jun 12 22:31:08.387: INFO: Waiting for pod client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9 to disappear - Jun 12 22:31:08.395: INFO: Pod client-containers-354c73cf-bb49-4973-bf93-a4cfdd4be2c9 no longer exists - [AfterEach] [sig-node] Containers + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 + STEP: Creating ReplicationController "e2e-rc-prcp9" 07/27/23 03:02:43.385 + W0727 03:02:43.406094 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 03:02:43.406: INFO: Get Replication Controller "e2e-rc-prcp9" to confirm replicas + Jul 27 03:02:44.420: INFO: Get Replication Controller "e2e-rc-prcp9" to confirm replicas + Jul 27 03:02:44.433: INFO: Found 1 replicas for "e2e-rc-prcp9" replication controller + STEP: Getting scale subresource for ReplicationController "e2e-rc-prcp9" 07/27/23 03:02:44.433 + STEP: Updating a scale subresource 07/27/23 03:02:44.444 + STEP: Verifying replicas where modified for replication controller "e2e-rc-prcp9" 07/27/23 03:02:44.462 + Jul 27 03:02:44.462: INFO: Get Replication Controller "e2e-rc-prcp9" to confirm replicas + Jul 27 03:02:45.483: INFO: Get Replication Controller "e2e-rc-prcp9" to confirm replicas + Jul 27 03:02:45.541: INFO: Found 2 replicas for "e2e-rc-prcp9" replication controller + [AfterEach] [sig-apps] ReplicationController test/e2e/framework/node/init/init.go:32 - Jun 12 22:31:08.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Containers + Jul 27 03:02:45.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Containers + [DeferCleanup (Each)] [sig-apps] ReplicationController dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Containers + [DeferCleanup (Each)] [sig-apps] ReplicationController tear down framework | framework.go:193 - STEP: Destroying namespace "containers-9204" for this suite. 06/12/23 22:31:08.417 + STEP: Destroying namespace "replication-controller-9440" for this suite. 
07/27/23 03:02:45.555 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] CSIInlineVolumes - should support CSIVolumeSource in Pod API [Conformance] - test/e2e/storage/csi_inline.go:131 -[BeforeEach] [sig-storage] CSIInlineVolumes +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 +[BeforeEach] version v1 set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:31:08.446 -Jun 12 22:31:08.446: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename csiinlinevolumes 06/12/23 22:31:08.454 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:31:08.532 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:31:08.57 -[BeforeEach] [sig-storage] CSIInlineVolumes +STEP: Creating a kubernetes client 07/27/23 03:02:45.58 +Jul 27 03:02:45.580: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename proxy 07/27/23 03:02:45.581 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:45.669 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:45.677 +[BeforeEach] version v1 test/e2e/framework/metrics/init/init.go:31 -[It] should support CSIVolumeSource in Pod API [Conformance] - test/e2e/storage/csi_inline.go:131 -STEP: creating 06/12/23 22:31:08.591 -W0612 22:31:08.673801 23 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-csi-inline-volumes" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-csi-inline-volumes" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-csi-inline-volumes" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-csi-inline-volumes" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") -W0612 22:31:08.673972 23 warnings.go:70] pod-csi-inline-volumes uses an inline volume provided by CSIDriver e2e.example.com and namespace csiinlinevolumes-9835 has a pod security warn level that is lower than privileged -STEP: getting 06/12/23 22:31:08.713 -STEP: listing in namespace 06/12/23 22:31:08.722 -STEP: patching 06/12/23 22:31:08.732 -STEP: deleting 06/12/23 22:31:08.757 -[AfterEach] [sig-storage] CSIInlineVolumes +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 +Jul 27 03:02:45.686: INFO: Creating pod... +Jul 27 03:02:45.728: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-6116" to be "running" +Jul 27 03:02:45.737: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 9.263705ms +Jul 27 03:02:47.747: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.019278745s +Jul 27 03:02:47.747: INFO: Pod "agnhost" satisfied condition "running" +Jul 27 03:02:47.747: INFO: Creating service... 
+Jul 27 03:02:47.790: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/DELETE +Jul 27 03:02:47.828: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Jul 27 03:02:47.828: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/GET +Jul 27 03:02:47.861: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Jul 27 03:02:47.861: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/HEAD +Jul 27 03:02:47.886: INFO: http.Client request:HEAD | StatusCode:200 +Jul 27 03:02:47.886: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/OPTIONS +Jul 27 03:02:47.910: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Jul 27 03:02:47.910: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/PATCH +Jul 27 03:02:47.936: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Jul 27 03:02:47.936: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/POST +Jul 27 03:02:47.965: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Jul 27 03:02:47.965: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/PUT +Jul 27 03:02:47.995: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Jul 27 03:02:47.995: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/DELETE +Jul 27 03:02:48.024: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Jul 27 03:02:48.024: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/GET +Jul 27 03:02:48.060: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Jul 27 03:02:48.060: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/HEAD +Jul 27 03:02:48.099: INFO: http.Client request:HEAD | StatusCode:200 +Jul 27 03:02:48.099: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/OPTIONS +Jul 27 03:02:48.134: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Jul 27 03:02:48.134: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/PATCH +Jul 27 03:02:48.193: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Jul 27 03:02:48.193: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/POST +Jul 27 03:02:48.229: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Jul 27 03:02:48.229: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/PUT +Jul 27 03:02:48.250: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 test/e2e/framework/node/init/init.go:32 -Jun 12 22:31:08.786: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes +Jul 27 03:02:48.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] version v1 test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes +[DeferCleanup (Each)] version v1 dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes +[DeferCleanup (Each)] version v1 tear down framework | framework.go:193 -STEP: Destroying namespace "csiinlinevolumes-9835" for this suite. 06/12/23 22:31:08.798 +STEP: Destroying namespace "proxy-6116" for this suite. 07/27/23 03:02:48.262 ------------------------------ -• [0.366 seconds] -[sig-storage] CSIInlineVolumes -test/e2e/storage/utils/framework.go:23 - should support CSIVolumeSource in Pod API [Conformance] - test/e2e/storage/csi_inline.go:131 +• [2.707 seconds] +[sig-network] Proxy +test/e2e/network/common/framework.go:23 + version v1 + test/e2e/network/proxy.go:74 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] CSIInlineVolumes + [BeforeEach] version v1 set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:31:08.446 - Jun 12 22:31:08.446: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename csiinlinevolumes 06/12/23 22:31:08.454 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:31:08.532 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:31:08.57 - [BeforeEach] [sig-storage] CSIInlineVolumes + STEP: Creating a kubernetes client 07/27/23 03:02:45.58 + Jul 27 03:02:45.580: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename proxy 07/27/23 03:02:45.581 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:45.669 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:45.677 + [BeforeEach] version v1 test/e2e/framework/metrics/init/init.go:31 - [It] should support CSIVolumeSource in Pod API [Conformance] - test/e2e/storage/csi_inline.go:131 - STEP: creating 06/12/23 22:31:08.591 - W0612 22:31:08.673801 23 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-csi-inline-volumes" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-csi-inline-volumes" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-csi-inline-volumes" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-csi-inline-volumes" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") - W0612 22:31:08.673972 23 warnings.go:70] pod-csi-inline-volumes uses an inline volume provided by CSIDriver e2e.example.com and namespace csiinlinevolumes-9835 has a pod security warn level that is lower than privileged - STEP: getting 06/12/23 22:31:08.713 - STEP: listing in namespace 06/12/23 22:31:08.722 - STEP: patching 06/12/23 22:31:08.732 - STEP: deleting 06/12/23 22:31:08.757 - [AfterEach] [sig-storage] CSIInlineVolumes + [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 + Jul 27 03:02:45.686: INFO: Creating pod... 
+ Jul 27 03:02:45.728: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-6116" to be "running" + Jul 27 03:02:45.737: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 9.263705ms + Jul 27 03:02:47.747: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.019278745s + Jul 27 03:02:47.747: INFO: Pod "agnhost" satisfied condition "running" + Jul 27 03:02:47.747: INFO: Creating service... + Jul 27 03:02:47.790: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/DELETE + Jul 27 03:02:47.828: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Jul 27 03:02:47.828: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/GET + Jul 27 03:02:47.861: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET + Jul 27 03:02:47.861: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/HEAD + Jul 27 03:02:47.886: INFO: http.Client request:HEAD | StatusCode:200 + Jul 27 03:02:47.886: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/OPTIONS + Jul 27 03:02:47.910: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Jul 27 03:02:47.910: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/PATCH + Jul 27 03:02:47.936: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Jul 27 03:02:47.936: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/POST + Jul 27 03:02:47.965: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Jul 27 03:02:47.965: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/pods/agnhost/proxy/some/path/with/PUT + Jul 27 03:02:47.995: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + Jul 27 03:02:47.995: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/DELETE + Jul 27 03:02:48.024: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Jul 27 03:02:48.024: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/GET + Jul 27 03:02:48.060: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET + Jul 27 03:02:48.060: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/HEAD + Jul 27 03:02:48.099: INFO: http.Client request:HEAD | StatusCode:200 + Jul 27 03:02:48.099: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/OPTIONS + Jul 27 03:02:48.134: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Jul 27 03:02:48.134: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/PATCH + Jul 27 03:02:48.193: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Jul 27 03:02:48.193: INFO: Starting http.Client for 
https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/POST + Jul 27 03:02:48.229: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Jul 27 03:02:48.229: INFO: Starting http.Client for https://172.21.0.1:443/api/v1/namespaces/proxy-6116/services/test-service/proxy/some/path/with/PUT + Jul 27 03:02:48.250: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + [AfterEach] version v1 test/e2e/framework/node/init/init.go:32 - Jun 12 22:31:08.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + Jul 27 03:02:48.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] version v1 test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + [DeferCleanup (Each)] version v1 dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + [DeferCleanup (Each)] version v1 tear down framework | framework.go:193 - STEP: Destroying namespace "csiinlinevolumes-9835" for this suite. 06/12/23 22:31:08.798 + STEP: Destroying namespace "proxy-6116" for this suite. 07/27/23 03:02:48.262 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] RuntimeClass - should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:104 -[BeforeEach] [sig-node] RuntimeClass +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:31:08.814 -Jun 12 22:31:08.814: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename runtimeclass 06/12/23 22:31:08.816 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:31:08.89 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:31:08.904 -[BeforeEach] [sig-node] RuntimeClass +STEP: Creating a kubernetes client 07/27/23 03:02:48.29 +Jul 27 03:02:48.290: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 03:02:48.291 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:48.349 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:48.363 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:104 -Jun 12 22:31:08.980: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-5046 to be scheduled -[AfterEach] [sig-node] RuntimeClass +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 03:02:48.463 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 03:02:48.647 +STEP: Deploying the webhook pod 07/27/23 03:02:48.688 +STEP: Wait for the deployment to be ready 07/27/23 03:02:48.728 +Jul 27 03:02:48.779: INFO: 
deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 03:02:50.805 +STEP: Verifying the service has paired with the endpoint 07/27/23 03:02:50.842 +Jul 27 03:02:51.843: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 +Jul 27 03:02:51.854: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-635-crds.webhook.example.com via the AdmissionRegistration API 07/27/23 03:02:52.387 +STEP: Creating a custom resource that should be mutated by the webhook 07/27/23 03:02:52.431 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:31:09.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] RuntimeClass +Jul 27 03:02:55.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] RuntimeClass +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] RuntimeClass +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "runtimeclass-5046" for this suite. 06/12/23 22:31:09.037 +STEP: Destroying namespace "webhook-8416" for this suite. 07/27/23 03:02:55.267 +STEP: Destroying namespace "webhook-8416-markers" for this suite. 
07/27/23 03:02:55.291 ------------------------------ -• [0.242 seconds] -[sig-node] RuntimeClass -test/e2e/common/node/framework.go:23 - should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:104 +• [SLOW TEST] [7.055 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] RuntimeClass + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:31:08.814 - Jun 12 22:31:08.814: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename runtimeclass 06/12/23 22:31:08.816 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:31:08.89 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:31:08.904 - [BeforeEach] [sig-node] RuntimeClass + STEP: Creating a kubernetes client 07/27/23 03:02:48.29 + Jul 27 03:02:48.290: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 03:02:48.291 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:48.349 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:48.363 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] - test/e2e/common/node/runtimeclass.go:104 - Jun 12 22:31:08.980: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-5046 to be scheduled - [AfterEach] [sig-node] RuntimeClass + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 03:02:48.463 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 03:02:48.647 + STEP: Deploying the webhook pod 07/27/23 03:02:48.688 + STEP: Wait for the deployment to be ready 07/27/23 03:02:48.728 + Jul 27 03:02:48.779: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 03:02:50.805 + STEP: Verifying the service has paired with the endpoint 07/27/23 03:02:50.842 + Jul 27 03:02:51.843: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 + Jul 27 03:02:51.854: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-635-crds.webhook.example.com via the AdmissionRegistration API 07/27/23 03:02:52.387 + STEP: Creating a custom resource that should be mutated by the webhook 07/27/23 03:02:52.431 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:31:09.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] RuntimeClass + Jul 27 03:02:55.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + 
test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] RuntimeClass + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] RuntimeClass + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "runtimeclass-5046" for this suite. 06/12/23 22:31:09.037 + STEP: Destroying namespace "webhook-8416" for this suite. 07/27/23 03:02:55.267 + STEP: Destroying namespace "webhook-8416-markers" for this suite. 07/27/23 03:02:55.291 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSS +SSS ------------------------------ -[sig-node] Variable Expansion - should succeed in writing subpaths in container [Slow] [Conformance] - test/e2e/common/node/expansion.go:297 -[BeforeEach] [sig-node] Variable Expansion +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 +[BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:31:09.057 -Jun 12 22:31:09.057: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename var-expansion 06/12/23 22:31:09.06 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:31:09.189 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:31:09.208 -[BeforeEach] [sig-node] Variable Expansion +STEP: Creating a kubernetes client 07/27/23 03:02:55.345 +Jul 27 03:02:55.345: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 03:02:55.346 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:55.387 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:55.396 +[BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 -[It] should succeed in writing subpaths in container [Slow] [Conformance] - test/e2e/common/node/expansion.go:297 -STEP: creating the pod 06/12/23 22:31:09.22 -STEP: waiting for pod running 06/12/23 22:31:09.255 -Jun 12 22:31:09.256: INFO: Waiting up to 2m0s for pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" in namespace "var-expansion-8089" to be "running" -Jun 12 22:31:09.266: INFO: Pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733": Phase="Pending", Reason="", readiness=false. Elapsed: 10.305162ms -Jun 12 22:31:11.278: INFO: Pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.021741376s -Jun 12 22:31:11.278: INFO: Pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" satisfied condition "running" -STEP: creating a file in subpath 06/12/23 22:31:11.278 -Jun 12 22:31:11.289: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-8089 PodName:var-expansion-f20499f6-7227-4542-832d-2aa720d32733 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 22:31:11.289: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 22:31:11.290: INFO: ExecWithOptions: Clientset creation -Jun 12 22:31:11.290: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/var-expansion-8089/pods/var-expansion-f20499f6-7227-4542-832d-2aa720d32733/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) -STEP: test for file in mounted path 06/12/23 22:31:11.461 -Jun 12 22:31:11.478: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-8089 PodName:var-expansion-f20499f6-7227-4542-832d-2aa720d32733 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} -Jun 12 22:31:11.478: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -Jun 12 22:31:11.479: INFO: ExecWithOptions: Clientset creation -Jun 12 22:31:11.479: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/var-expansion-8089/pods/var-expansion-f20499f6-7227-4542-832d-2aa720d32733/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) -STEP: updating the annotation value 06/12/23 22:31:11.664 -Jun 12 22:31:12.204: INFO: Successfully updated pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" -STEP: waiting for annotated pod running 06/12/23 22:31:12.204 -Jun 12 22:31:12.204: INFO: Waiting up to 2m0s for pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" in namespace "var-expansion-8089" to be "running" -Jun 12 22:31:12.216: INFO: Pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733": Phase="Running", Reason="", readiness=true. Elapsed: 11.278918ms -Jun 12 22:31:12.216: INFO: Pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" satisfied condition "running" -STEP: deleting the pod gracefully 06/12/23 22:31:12.216 -Jun 12 22:31:12.216: INFO: Deleting pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" in namespace "var-expansion-8089" -Jun 12 22:31:12.236: INFO: Wait up to 5m0s for pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" to be fully deleted -[AfterEach] [sig-node] Variable Expansion +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 +STEP: Creating configMap with name projected-configmap-test-volume-ec899c67-0f18-4bfb-8a6e-39a3544bb36d 07/27/23 03:02:55.406 +STEP: Creating a pod to test consume configMaps 07/27/23 03:02:55.424 +Jul 27 03:02:55.465: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e" in namespace "projected-2274" to be "Succeeded or Failed" +Jul 27 03:02:55.476: INFO: Pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.759727ms +Jul 27 03:02:57.484: INFO: Pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019907061s +Jul 27 03:02:59.498: INFO: Pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03352945s +Jul 27 03:03:01.487: INFO: Pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02267261s +STEP: Saw pod success 07/27/23 03:03:01.487 +Jul 27 03:03:01.487: INFO: Pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e" satisfied condition "Succeeded or Failed" +Jul 27 03:03:01.495: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e container agnhost-container: +STEP: delete the pod 07/27/23 03:03:01.52 +Jul 27 03:03:01.539: INFO: Waiting for pod pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e to disappear +Jul 27 03:03:01.546: INFO: Pod pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e no longer exists +[AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 -Jun 12 22:31:48.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Variable Expansion +Jul 27 03:03:01.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Variable Expansion +[DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 -STEP: Destroying namespace "var-expansion-8089" for this suite. 06/12/23 22:31:48.275 +STEP: Destroying namespace "projected-2274" for this suite. 
07/27/23 03:03:01.561 ------------------------------ -• [SLOW TEST] [39.233 seconds] -[sig-node] Variable Expansion -test/e2e/common/node/framework.go:23 - should succeed in writing subpaths in container [Slow] [Conformance] - test/e2e/common/node/expansion.go:297 +• [SLOW TEST] [6.238 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Variable Expansion + [BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:31:09.057 - Jun 12 22:31:09.057: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename var-expansion 06/12/23 22:31:09.06 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:31:09.189 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:31:09.208 - [BeforeEach] [sig-node] Variable Expansion + STEP: Creating a kubernetes client 07/27/23 03:02:55.345 + Jul 27 03:02:55.345: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 03:02:55.346 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:02:55.387 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:02:55.396 + [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 - [It] should succeed in writing subpaths in container [Slow] [Conformance] - test/e2e/common/node/expansion.go:297 - STEP: creating the pod 06/12/23 22:31:09.22 - STEP: waiting for pod running 06/12/23 22:31:09.255 - Jun 12 22:31:09.256: INFO: Waiting up to 2m0s for pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" in namespace "var-expansion-8089" to be "running" - Jun 12 22:31:09.266: INFO: Pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733": Phase="Pending", Reason="", readiness=false. Elapsed: 10.305162ms - Jun 12 22:31:11.278: INFO: Pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.021741376s - Jun 12 22:31:11.278: INFO: Pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" satisfied condition "running" - STEP: creating a file in subpath 06/12/23 22:31:11.278 - Jun 12 22:31:11.289: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-8089 PodName:var-expansion-f20499f6-7227-4542-832d-2aa720d32733 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 22:31:11.289: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 22:31:11.290: INFO: ExecWithOptions: Clientset creation - Jun 12 22:31:11.290: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/var-expansion-8089/pods/var-expansion-f20499f6-7227-4542-832d-2aa720d32733/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) - STEP: test for file in mounted path 06/12/23 22:31:11.461 - Jun 12 22:31:11.478: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-8089 PodName:var-expansion-f20499f6-7227-4542-832d-2aa720d32733 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} - Jun 12 22:31:11.478: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - Jun 12 22:31:11.479: INFO: ExecWithOptions: Clientset creation - Jun 12 22:31:11.479: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/var-expansion-8089/pods/var-expansion-f20499f6-7227-4542-832d-2aa720d32733/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) - STEP: updating the annotation value 06/12/23 22:31:11.664 - Jun 12 22:31:12.204: INFO: Successfully updated pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" - STEP: waiting for annotated pod running 06/12/23 22:31:12.204 - Jun 12 22:31:12.204: INFO: Waiting up to 2m0s for pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" in namespace "var-expansion-8089" to be "running" - Jun 12 22:31:12.216: INFO: Pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733": Phase="Running", Reason="", readiness=true. Elapsed: 11.278918ms - Jun 12 22:31:12.216: INFO: Pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" satisfied condition "running" - STEP: deleting the pod gracefully 06/12/23 22:31:12.216 - Jun 12 22:31:12.216: INFO: Deleting pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" in namespace "var-expansion-8089" - Jun 12 22:31:12.236: INFO: Wait up to 5m0s for pod "var-expansion-f20499f6-7227-4542-832d-2aa720d32733" to be fully deleted - [AfterEach] [sig-node] Variable Expansion + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 + STEP: Creating configMap with name projected-configmap-test-volume-ec899c67-0f18-4bfb-8a6e-39a3544bb36d 07/27/23 03:02:55.406 + STEP: Creating a pod to test consume configMaps 07/27/23 03:02:55.424 + Jul 27 03:02:55.465: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e" in namespace "projected-2274" to be "Succeeded or Failed" + Jul 27 03:02:55.476: INFO: Pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.759727ms + Jul 27 03:02:57.484: INFO: Pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019907061s + Jul 27 03:02:59.498: INFO: Pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03352945s + Jul 27 03:03:01.487: INFO: Pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02267261s + STEP: Saw pod success 07/27/23 03:03:01.487 + Jul 27 03:03:01.487: INFO: Pod "pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e" satisfied condition "Succeeded or Failed" + Jul 27 03:03:01.495: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e container agnhost-container: + STEP: delete the pod 07/27/23 03:03:01.52 + Jul 27 03:03:01.539: INFO: Waiting for pod pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e to disappear + Jul 27 03:03:01.546: INFO: Pod pod-projected-configmaps-069ec7d2-3092-40d7-a413-8ab4a8ed6d3e no longer exists + [AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 - Jun 12 22:31:48.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Variable Expansion + Jul 27 03:03:01.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Variable Expansion + [DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 - STEP: Destroying namespace "var-expansion-8089" for this suite. 06/12/23 22:31:48.275 + STEP: Destroying namespace "projected-2274" for this suite. 
07/27/23 03:03:01.561 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSS ------------------------------ -[sig-node] InitContainer [NodeConformance] - should not start app containers if init containers fail on a RestartAlways pod [Conformance] - test/e2e/common/node/init_container.go:334 -[BeforeEach] [sig-node] InitContainer [NodeConformance] +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 +[BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:31:48.298 -Jun 12 22:31:48.298: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename init-container 06/12/23 22:31:48.3 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:31:48.346 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:31:48.358 -[BeforeEach] [sig-node] InitContainer [NodeConformance] +STEP: Creating a kubernetes client 07/27/23 03:03:01.584 +Jul 27 03:03:01.584: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 03:03:01.585 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:01.631 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:01.64 +[BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] InitContainer [NodeConformance] - test/e2e/common/node/init_container.go:165 -[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] - test/e2e/common/node/init_container.go:334 -STEP: creating the pod 06/12/23 22:31:48.372 -Jun 12 22:31:48.373: INFO: PodSpec: initContainers in spec.initContainers -Jun 12 22:32:33.455: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fbe34834-9c06-4be0-b120-36261156a614", GenerateName:"", Namespace:"init-container-5792", SelfLink:"", UID:"379bc167-5bf7-4982-9a7a-307ecf44ffae", ResourceVersion:"148228", Generation:0, CreationTimestamp:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"373227686"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"6b77a0cf2424e1d9cd64df9e792645b646045c614b9f31fa0f6aac75e644d41c", "cni.projectcalico.org/podIP":"172.30.224.51/32", "cni.projectcalico.org/podIPs":"172.30.224.51/32", "k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.30.224.51\"\n ],\n \"default\": true,\n \"dns\": {}\n}]", "openshift.io/scc":"anyuid"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f45578), Subresource:""}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.June, 12, 22, 31, 49, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f455a8), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", 
Time:time.Date(2023, time.June, 12, 22, 31, 49, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f455d8), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.June, 12, 22, 32, 33, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f45608), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-l6nlw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00070dd00), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-l6nlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00ad13080), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-l6nlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00ad130e0), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"registry.k8s.io/pause:3.9", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-l6nlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00ad13020), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0046766b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"10.138.75.70", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0047470a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004676770)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0046767a0)}, v1.Toleration{Key:"node.kubernetes.io/memory-pressure", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0046767bc), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0046767c0), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0068de9d0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 
0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.75.70", PodIP:"172.30.224.51", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.30.224.51"}}, StartTime:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc004747180)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0047471f0)}, Ready:false, RestartCount:3, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937", ContainerID:"cri-o://7ed3d6adbe9e8cd314e0b35771d5e8d7c7f835337037754f83dc7d4efe2fe330", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00070de00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00070ddc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/pause:3.9", ImageID:"", ContainerID:"", Started:(*bool)(0xc00467683f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} -[AfterEach] [sig-node] InitContainer [NodeConformance] +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 +STEP: Creating configMap with name projected-configmap-test-volume-d39cf789-7aea-4ffc-9793-7428c6bc28da 07/27/23 03:03:01.649 +STEP: Creating a pod to test consume configMaps 07/27/23 03:03:01.674 +Jul 27 03:03:01.706: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32" in namespace "projected-2091" to be "Succeeded or Failed" +Jul 27 03:03:01.719: INFO: Pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.829825ms +Jul 27 03:03:03.730: INFO: Pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023687789s +Jul 27 03:03:05.729: INFO: Pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02285877s +Jul 27 03:03:07.728: INFO: Pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021726361s +STEP: Saw pod success 07/27/23 03:03:07.728 +Jul 27 03:03:07.728: INFO: Pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32" satisfied condition "Succeeded or Failed" +Jul 27 03:03:07.737: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32 container agnhost-container: +STEP: delete the pod 07/27/23 03:03:07.756 +Jul 27 03:03:07.775: INFO: Waiting for pod pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32 to disappear +Jul 27 03:03:07.782: INFO: Pod pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32 no longer exists +[AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 -Jun 12 22:32:33.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] +Jul 27 03:03:07.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] +[DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] +[DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 -STEP: Destroying namespace "init-container-5792" for this suite. 06/12/23 22:32:33.471 +STEP: Destroying namespace "projected-2091" for this suite. 
07/27/23 03:03:07.793 ------------------------------ -• [SLOW TEST] [45.189 seconds] -[sig-node] InitContainer [NodeConformance] -test/e2e/common/node/framework.go:23 - should not start app containers if init containers fail on a RestartAlways pod [Conformance] - test/e2e/common/node/init_container.go:334 +• [SLOW TEST] [6.231 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] InitContainer [NodeConformance] + [BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:31:48.298 - Jun 12 22:31:48.298: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename init-container 06/12/23 22:31:48.3 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:31:48.346 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:31:48.358 - [BeforeEach] [sig-node] InitContainer [NodeConformance] + STEP: Creating a kubernetes client 07/27/23 03:03:01.584 + Jul 27 03:03:01.584: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 03:03:01.585 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:01.631 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:01.64 + [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] InitContainer [NodeConformance] - test/e2e/common/node/init_container.go:165 - [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] - test/e2e/common/node/init_container.go:334 - STEP: creating the pod 06/12/23 22:31:48.372 - Jun 12 22:31:48.373: INFO: PodSpec: initContainers in spec.initContainers - Jun 12 22:32:33.455: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fbe34834-9c06-4be0-b120-36261156a614", GenerateName:"", Namespace:"init-container-5792", SelfLink:"", UID:"379bc167-5bf7-4982-9a7a-307ecf44ffae", ResourceVersion:"148228", Generation:0, CreationTimestamp:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"373227686"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"6b77a0cf2424e1d9cd64df9e792645b646045c614b9f31fa0f6aac75e644d41c", "cni.projectcalico.org/podIP":"172.30.224.51/32", "cni.projectcalico.org/podIPs":"172.30.224.51/32", "k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"k8s-pod-network\",\n \"ips\": [\n \"172.30.224.51\"\n ],\n \"default\": true,\n \"dns\": {}\n}]", "openshift.io/scc":"anyuid"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f45578), Subresource:""}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.June, 12, 22, 31, 49, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f455a8), 
Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.June, 12, 22, 31, 49, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f455d8), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.June, 12, 22, 32, 33, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f45608), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-l6nlw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00070dd00), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-l6nlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00ad13080), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-l6nlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), 
ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00ad130e0), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"registry.k8s.io/pause:3.9", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-l6nlw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00ad13020), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0046766b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"10.138.75.70", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0047470a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004676770)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0046767a0)}, v1.Toleration{Key:"node.kubernetes.io/memory-pressure", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0046767bc), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0046767c0), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0068de9d0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 
init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.75.70", PodIP:"172.30.224.51", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.30.224.51"}}, StartTime:time.Date(2023, time.June, 12, 22, 31, 48, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc004747180)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0047471f0)}, Ready:false, RestartCount:3, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937", ContainerID:"cri-o://7ed3d6adbe9e8cd314e0b35771d5e8d7c7f835337037754f83dc7d4efe2fe330", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00070de00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00070ddc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/pause:3.9", ImageID:"", ContainerID:"", Started:(*bool)(0xc00467683f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} - [AfterEach] [sig-node] InitContainer [NodeConformance] + [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 + STEP: Creating configMap with name projected-configmap-test-volume-d39cf789-7aea-4ffc-9793-7428c6bc28da 07/27/23 03:03:01.649 + STEP: Creating a pod to test consume configMaps 07/27/23 03:03:01.674 + Jul 27 03:03:01.706: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32" in namespace "projected-2091" to be "Succeeded or Failed" + Jul 27 03:03:01.719: INFO: Pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32": Phase="Pending", 
Reason="", readiness=false. Elapsed: 12.829825ms + Jul 27 03:03:03.730: INFO: Pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023687789s + Jul 27 03:03:05.729: INFO: Pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02285877s + Jul 27 03:03:07.728: INFO: Pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021726361s + STEP: Saw pod success 07/27/23 03:03:07.728 + Jul 27 03:03:07.728: INFO: Pod "pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32" satisfied condition "Succeeded or Failed" + Jul 27 03:03:07.737: INFO: Trying to get logs from node 10.245.128.19 pod pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32 container agnhost-container: + STEP: delete the pod 07/27/23 03:03:07.756 + Jul 27 03:03:07.775: INFO: Waiting for pod pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32 to disappear + Jul 27 03:03:07.782: INFO: Pod pod-projected-configmaps-3b5e8883-3537-4a3e-a3f3-38e44d610f32 no longer exists + [AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 - Jun 12 22:32:33.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + Jul 27 03:03:07.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + [DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + [DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 - STEP: Destroying namespace "init-container-5792" for this suite. 06/12/23 22:32:33.471 + STEP: Destroying namespace "projected-2091" for this suite. 
07/27/23 03:03:07.793 << End Captured GinkgoWriter Output ------------------------------ -SS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] DisruptionController - should observe PodDisruptionBudget status updated [Conformance] - test/e2e/apps/disruption.go:141 -[BeforeEach] [sig-apps] DisruptionController +[sig-storage] CSIStorageCapacity + should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 +[BeforeEach] [sig-storage] CSIStorageCapacity set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:32:33.488 -Jun 12 22:32:33.489: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename disruption 06/12/23 22:32:33.491 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:32:33.532 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:32:33.543 -[BeforeEach] [sig-apps] DisruptionController +STEP: Creating a kubernetes client 07/27/23 03:03:07.817 +Jul 27 03:03:07.817: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename csistoragecapacity 07/27/23 03:03:07.817 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:07.859 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:07.867 +[BeforeEach] [sig-storage] CSIStorageCapacity test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] DisruptionController - test/e2e/apps/disruption.go:72 -[It] should observe PodDisruptionBudget status updated [Conformance] - test/e2e/apps/disruption.go:141 -STEP: Waiting for the pdb to be processed 06/12/23 22:32:33.574 -STEP: Waiting for all pods to be running 06/12/23 22:32:33.674 -Jun 12 22:32:33.688: INFO: running pods: 0 < 3 -Jun 12 22:32:35.700: INFO: running pods: 0 < 3 -[AfterEach] [sig-apps] DisruptionController +[It] should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 +STEP: getting /apis 07/27/23 03:03:07.877 +STEP: getting /apis/storage.k8s.io 07/27/23 03:03:07.887 +STEP: getting /apis/storage.k8s.io/v1 07/27/23 03:03:07.891 +STEP: creating 07/27/23 03:03:07.894 +STEP: watching 07/27/23 03:03:07.951 +Jul 27 03:03:07.951: INFO: starting watch +STEP: getting 07/27/23 03:03:07.973 +STEP: listing in namespace 07/27/23 03:03:07.983 +STEP: listing across namespaces 07/27/23 03:03:07.993 +STEP: patching 07/27/23 03:03:08.005 +STEP: updating 07/27/23 03:03:08.021 +Jul 27 03:03:08.036: INFO: waiting for watch events with expected annotations in namespace +Jul 27 03:03:08.036: INFO: waiting for watch events with expected annotations across namespace +STEP: deleting 07/27/23 03:03:08.036 +STEP: deleting a collection 07/27/23 03:03:08.08 +[AfterEach] [sig-storage] CSIStorageCapacity test/e2e/framework/node/init/init.go:32 -Jun 12 22:32:37.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] DisruptionController +Jul 27 03:03:08.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] DisruptionController +[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] DisruptionController +[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity tear down framework | 
framework.go:193 -STEP: Destroying namespace "disruption-3470" for this suite. 06/12/23 22:32:37.727 +STEP: Destroying namespace "csistoragecapacity-9121" for this suite. 07/27/23 03:03:08.15 ------------------------------ -• [4.260 seconds] -[sig-apps] DisruptionController -test/e2e/apps/framework.go:23 - should observe PodDisruptionBudget status updated [Conformance] - test/e2e/apps/disruption.go:141 +• [0.356 seconds] +[sig-storage] CSIStorageCapacity +test/e2e/storage/utils/framework.go:23 + should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] DisruptionController + [BeforeEach] [sig-storage] CSIStorageCapacity set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:32:33.488 - Jun 12 22:32:33.489: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename disruption 06/12/23 22:32:33.491 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:32:33.532 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:32:33.543 - [BeforeEach] [sig-apps] DisruptionController + STEP: Creating a kubernetes client 07/27/23 03:03:07.817 + Jul 27 03:03:07.817: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename csistoragecapacity 07/27/23 03:03:07.817 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:07.859 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:07.867 + [BeforeEach] [sig-storage] CSIStorageCapacity test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] DisruptionController - test/e2e/apps/disruption.go:72 - [It] should observe PodDisruptionBudget status updated [Conformance] - test/e2e/apps/disruption.go:141 - STEP: Waiting for the pdb to be processed 06/12/23 22:32:33.574 - STEP: Waiting for all pods to be running 06/12/23 22:32:33.674 - Jun 12 22:32:33.688: INFO: running pods: 0 < 3 - Jun 12 22:32:35.700: INFO: running pods: 0 < 3 - [AfterEach] [sig-apps] DisruptionController + [It] should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 + STEP: getting /apis 07/27/23 03:03:07.877 + STEP: getting /apis/storage.k8s.io 07/27/23 03:03:07.887 + STEP: getting /apis/storage.k8s.io/v1 07/27/23 03:03:07.891 + STEP: creating 07/27/23 03:03:07.894 + STEP: watching 07/27/23 03:03:07.951 + Jul 27 03:03:07.951: INFO: starting watch + STEP: getting 07/27/23 03:03:07.973 + STEP: listing in namespace 07/27/23 03:03:07.983 + STEP: listing across namespaces 07/27/23 03:03:07.993 + STEP: patching 07/27/23 03:03:08.005 + STEP: updating 07/27/23 03:03:08.021 + Jul 27 03:03:08.036: INFO: waiting for watch events with expected annotations in namespace + Jul 27 03:03:08.036: INFO: waiting for watch events with expected annotations across namespace + STEP: deleting 07/27/23 03:03:08.036 + STEP: deleting a collection 07/27/23 03:03:08.08 + [AfterEach] [sig-storage] CSIStorageCapacity test/e2e/framework/node/init/init.go:32 - Jun 12 22:32:37.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] DisruptionController + Jul 27 03:03:08.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] 
DisruptionController + [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] DisruptionController + [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity tear down framework | framework.go:193 - STEP: Destroying namespace "disruption-3470" for this suite. 06/12/23 22:32:37.727 + STEP: Destroying namespace "csistoragecapacity-9121" for this suite. 07/27/23 03:03:08.15 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSS ------------------------------ -[sig-api-machinery] ResourceQuota - should verify ResourceQuota with terminating scopes. [Conformance] - test/e2e/apimachinery/resource_quota.go:690 -[BeforeEach] [sig-api-machinery] ResourceQuota +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:32:37.751 -Jun 12 22:32:37.752: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename resourcequota 06/12/23 22:32:37.756 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:32:37.821 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:32:37.851 -[BeforeEach] [sig-api-machinery] ResourceQuota +STEP: Creating a kubernetes client 07/27/23 03:03:08.174 +Jul 27 03:03:08.174: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename sysctl 07/27/23 03:03:08.175 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:08.219 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:08.227 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/metrics/init/init.go:31 -[It] should verify ResourceQuota with terminating scopes. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:690 -STEP: Creating a ResourceQuota with terminating scope 06/12/23 22:32:37.874 -STEP: Ensuring ResourceQuota status is calculated 06/12/23 22:32:37.904 -STEP: Creating a ResourceQuota with not terminating scope 06/12/23 22:32:39.92 -STEP: Ensuring ResourceQuota status is calculated 06/12/23 22:32:39.938 -STEP: Creating a long running pod 06/12/23 22:32:41.951 -STEP: Ensuring resource quota with not terminating scope captures the pod usage 06/12/23 22:32:41.992 -STEP: Ensuring resource quota with terminating scope ignored the pod usage 06/12/23 22:32:44.026 -STEP: Deleting the pod 06/12/23 22:32:46.042 -STEP: Ensuring resource quota status released the pod usage 06/12/23 22:32:46.107 -STEP: Creating a terminating pod 06/12/23 22:32:48.123 -STEP: Ensuring resource quota with terminating scope captures the pod usage 06/12/23 22:32:48.158 -STEP: Ensuring resource quota with not terminating scope ignored the pod usage 06/12/23 22:32:50.173 -STEP: Deleting the pod 06/12/23 22:32:52.198 -STEP: Ensuring resource quota status released the pod usage 06/12/23 22:32:52.23 -[AfterEach] [sig-api-machinery] ResourceQuota +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 +STEP: Creating a pod with one valid and two invalid sysctls 07/27/23 03:03:08.236 +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/node/init/init.go:32 -Jun 12 22:32:54.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +Jul 27 03:03:08.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] tear down framework | framework.go:193 -STEP: Destroying namespace "resourcequota-4508" for this suite. 06/12/23 22:32:54.266 +STEP: Destroying namespace "sysctl-9499" for this suite. 07/27/23 03:03:08.276 ------------------------------ -• [SLOW TEST] [16.531 seconds] -[sig-api-machinery] ResourceQuota -test/e2e/apimachinery/framework.go:23 - should verify ResourceQuota with terminating scopes. 
[Conformance] - test/e2e/apimachinery/resource_quota.go:690 +• [0.127 seconds] +[sig-node] Sysctls [LinuxOnly] [NodeConformance] +test/e2e/common/node/framework.go:23 + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:32:37.751 - Jun 12 22:32:37.752: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename resourcequota 06/12/23 22:32:37.756 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:32:37.821 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:32:37.851 - [BeforeEach] [sig-api-machinery] ResourceQuota + STEP: Creating a kubernetes client 07/27/23 03:03:08.174 + Jul 27 03:03:08.174: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename sysctl 07/27/23 03:03:08.175 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:08.219 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:08.227 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/metrics/init/init.go:31 - [It] should verify ResourceQuota with terminating scopes. [Conformance] - test/e2e/apimachinery/resource_quota.go:690 - STEP: Creating a ResourceQuota with terminating scope 06/12/23 22:32:37.874 - STEP: Ensuring ResourceQuota status is calculated 06/12/23 22:32:37.904 - STEP: Creating a ResourceQuota with not terminating scope 06/12/23 22:32:39.92 - STEP: Ensuring ResourceQuota status is calculated 06/12/23 22:32:39.938 - STEP: Creating a long running pod 06/12/23 22:32:41.951 - STEP: Ensuring resource quota with not terminating scope captures the pod usage 06/12/23 22:32:41.992 - STEP: Ensuring resource quota with terminating scope ignored the pod usage 06/12/23 22:32:44.026 - STEP: Deleting the pod 06/12/23 22:32:46.042 - STEP: Ensuring resource quota status released the pod usage 06/12/23 22:32:46.107 - STEP: Creating a terminating pod 06/12/23 22:32:48.123 - STEP: Ensuring resource quota with terminating scope captures the pod usage 06/12/23 22:32:48.158 - STEP: Ensuring resource quota with not terminating scope ignored the pod usage 06/12/23 22:32:50.173 - STEP: Deleting the pod 06/12/23 22:32:52.198 - STEP: Ensuring resource quota status released the pod usage 06/12/23 22:32:52.23 - [AfterEach] [sig-api-machinery] ResourceQuota + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 + [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 + STEP: Creating a pod with one valid and two invalid sysctls 07/27/23 03:03:08.236 + [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/node/init/init.go:32 - Jun 12 22:32:54.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + Jul 27 03:03:08.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] 
[sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] tear down framework | framework.go:193 - STEP: Destroying namespace "resourcequota-4508" for this suite. 06/12/23 22:32:54.266 + STEP: Destroying namespace "sysctl-9499" for this suite. 07/27/23 03:03:08.276 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Subpath Atomic writer volumes - should support subpaths with configmap pod with mountPath of existing file [Conformance] - test/e2e/storage/subpath.go:80 -[BeforeEach] [sig-storage] Subpath +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 +[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:32:54.291 -Jun 12 22:32:54.291: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename subpath 06/12/23 22:32:54.297 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:32:54.374 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:32:54.388 -[BeforeEach] [sig-storage] Subpath +STEP: Creating a kubernetes client 07/27/23 03:03:08.303 +Jul 27 03:03:08.304: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename var-expansion 07/27/23 03:03:08.304 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:08.349 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:08.358 +[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] Atomic writer volumes - test/e2e/storage/subpath.go:40 -STEP: Setting up data 06/12/23 22:32:54.423 -[It] should support subpaths with configmap pod with mountPath of existing file [Conformance] - test/e2e/storage/subpath.go:80 -STEP: Creating pod pod-subpath-test-configmap-tn6p 06/12/23 22:32:54.476 -STEP: Creating a pod to test atomic-volume-subpath 06/12/23 22:32:54.476 -Jun 12 22:32:54.505: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tn6p" in namespace "subpath-6274" to be "Succeeded or Failed" -Jun 12 22:32:54.553: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Pending", Reason="", readiness=false. Elapsed: 47.822369ms -Jun 12 22:32:56.562: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056415931s -Jun 12 22:32:58.564: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 4.058890963s -Jun 12 22:33:00.611: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 6.10564152s -Jun 12 22:33:02.622: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 8.117115008s -Jun 12 22:33:04.565: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 10.060172703s -Jun 12 22:33:06.564: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.058682733s -Jun 12 22:33:08.567: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 14.061515426s -Jun 12 22:33:10.564: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 16.058932012s -Jun 12 22:33:12.565: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 18.05948538s -Jun 12 22:33:14.564: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 20.05903011s -Jun 12 22:33:16.566: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 22.060593068s -Jun 12 22:33:18.567: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=false. Elapsed: 24.061808467s -Jun 12 22:33:20.563: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.057734323s -STEP: Saw pod success 06/12/23 22:33:20.563 -Jun 12 22:33:20.563: INFO: Pod "pod-subpath-test-configmap-tn6p" satisfied condition "Succeeded or Failed" -Jun 12 22:33:20.576: INFO: Trying to get logs from node 10.138.75.70 pod pod-subpath-test-configmap-tn6p container test-container-subpath-configmap-tn6p: -STEP: delete the pod 06/12/23 22:33:20.64 -Jun 12 22:33:20.670: INFO: Waiting for pod pod-subpath-test-configmap-tn6p to disappear -Jun 12 22:33:20.684: INFO: Pod pod-subpath-test-configmap-tn6p no longer exists -STEP: Deleting pod pod-subpath-test-configmap-tn6p 06/12/23 22:33:20.684 -Jun 12 22:33:20.684: INFO: Deleting pod "pod-subpath-test-configmap-tn6p" in namespace "subpath-6274" -[AfterEach] [sig-storage] Subpath +[It] should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 +STEP: creating the pod 07/27/23 03:03:08.367 +STEP: waiting for pod running 07/27/23 03:03:08.395 +Jul 27 03:03:08.395: INFO: Waiting up to 2m0s for pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" in namespace "var-expansion-6522" to be "running" +Jul 27 03:03:08.406: INFO: Pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303": Phase="Pending", Reason="", readiness=false. Elapsed: 10.000663ms +Jul 27 03:03:10.418: INFO: Pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.022504969s +Jul 27 03:03:10.418: INFO: Pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" satisfied condition "running" +STEP: creating a file in subpath 07/27/23 03:03:10.418 +Jul 27 03:03:10.437: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6522 PodName:var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 03:03:10.437: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 03:03:10.438: INFO: ExecWithOptions: Clientset creation +Jul 27 03:03:10.438: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/var-expansion-6522/pods/var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) +STEP: test for file in mounted path 07/27/23 03:03:10.66 +Jul 27 03:03:10.669: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6522 PodName:var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jul 27 03:03:10.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 03:03:10.669: INFO: ExecWithOptions: Clientset creation +Jul 27 03:03:10.669: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/var-expansion-6522/pods/var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) +STEP: updating the annotation value 07/27/23 03:03:10.847 +Jul 27 03:03:11.383: INFO: Successfully updated pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" +STEP: waiting for annotated pod running 07/27/23 03:03:11.383 +Jul 27 03:03:11.383: INFO: Waiting up to 2m0s for pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" in namespace "var-expansion-6522" to be "running" +Jul 27 03:03:11.392: INFO: Pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303": Phase="Running", Reason="", readiness=true. Elapsed: 9.398941ms +Jul 27 03:03:11.392: INFO: Pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" satisfied condition "running" +STEP: deleting the pod gracefully 07/27/23 03:03:11.392 +Jul 27 03:03:11.392: INFO: Deleting pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" in namespace "var-expansion-6522" +Jul 27 03:03:11.411: INFO: Wait up to 5m0s for pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" to be fully deleted +[AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 -Jun 12 22:33:20.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Subpath +Jul 27 03:03:45.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Subpath +[DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Subpath +[DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 -STEP: Destroying namespace "subpath-6274" for this suite. 
06/12/23 22:33:20.705 +STEP: Destroying namespace "var-expansion-6522" for this suite. 07/27/23 03:03:45.439 ------------------------------ -• [SLOW TEST] [26.428 seconds] -[sig-storage] Subpath -test/e2e/storage/utils/framework.go:23 - Atomic writer volumes - test/e2e/storage/subpath.go:36 - should support subpaths with configmap pod with mountPath of existing file [Conformance] - test/e2e/storage/subpath.go:80 +• [SLOW TEST] [37.168 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Subpath + [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:32:54.291 - Jun 12 22:32:54.291: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename subpath 06/12/23 22:32:54.297 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:32:54.374 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:32:54.388 - [BeforeEach] [sig-storage] Subpath + STEP: Creating a kubernetes client 07/27/23 03:03:08.303 + Jul 27 03:03:08.304: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename var-expansion 07/27/23 03:03:08.304 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:08.349 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:08.358 + [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] Atomic writer volumes - test/e2e/storage/subpath.go:40 - STEP: Setting up data 06/12/23 22:32:54.423 - [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] - test/e2e/storage/subpath.go:80 - STEP: Creating pod pod-subpath-test-configmap-tn6p 06/12/23 22:32:54.476 - STEP: Creating a pod to test atomic-volume-subpath 06/12/23 22:32:54.476 - Jun 12 22:32:54.505: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tn6p" in namespace "subpath-6274" to be "Succeeded or Failed" - Jun 12 22:32:54.553: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Pending", Reason="", readiness=false. Elapsed: 47.822369ms - Jun 12 22:32:56.562: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056415931s - Jun 12 22:32:58.564: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 4.058890963s - Jun 12 22:33:00.611: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 6.10564152s - Jun 12 22:33:02.622: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 8.117115008s - Jun 12 22:33:04.565: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 10.060172703s - Jun 12 22:33:06.564: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 12.058682733s - Jun 12 22:33:08.567: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 14.061515426s - Jun 12 22:33:10.564: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.058932012s - Jun 12 22:33:12.565: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 18.05948538s - Jun 12 22:33:14.564: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 20.05903011s - Jun 12 22:33:16.566: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=true. Elapsed: 22.060593068s - Jun 12 22:33:18.567: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Running", Reason="", readiness=false. Elapsed: 24.061808467s - Jun 12 22:33:20.563: INFO: Pod "pod-subpath-test-configmap-tn6p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.057734323s - STEP: Saw pod success 06/12/23 22:33:20.563 - Jun 12 22:33:20.563: INFO: Pod "pod-subpath-test-configmap-tn6p" satisfied condition "Succeeded or Failed" - Jun 12 22:33:20.576: INFO: Trying to get logs from node 10.138.75.70 pod pod-subpath-test-configmap-tn6p container test-container-subpath-configmap-tn6p: - STEP: delete the pod 06/12/23 22:33:20.64 - Jun 12 22:33:20.670: INFO: Waiting for pod pod-subpath-test-configmap-tn6p to disappear - Jun 12 22:33:20.684: INFO: Pod pod-subpath-test-configmap-tn6p no longer exists - STEP: Deleting pod pod-subpath-test-configmap-tn6p 06/12/23 22:33:20.684 - Jun 12 22:33:20.684: INFO: Deleting pod "pod-subpath-test-configmap-tn6p" in namespace "subpath-6274" - [AfterEach] [sig-storage] Subpath + [It] should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 + STEP: creating the pod 07/27/23 03:03:08.367 + STEP: waiting for pod running 07/27/23 03:03:08.395 + Jul 27 03:03:08.395: INFO: Waiting up to 2m0s for pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" in namespace "var-expansion-6522" to be "running" + Jul 27 03:03:08.406: INFO: Pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303": Phase="Pending", Reason="", readiness=false. Elapsed: 10.000663ms + Jul 27 03:03:10.418: INFO: Pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.022504969s + Jul 27 03:03:10.418: INFO: Pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" satisfied condition "running" + STEP: creating a file in subpath 07/27/23 03:03:10.418 + Jul 27 03:03:10.437: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6522 PodName:var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 03:03:10.437: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 03:03:10.438: INFO: ExecWithOptions: Clientset creation + Jul 27 03:03:10.438: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/var-expansion-6522/pods/var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) + STEP: test for file in mounted path 07/27/23 03:03:10.66 + Jul 27 03:03:10.669: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6522 PodName:var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jul 27 03:03:10.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 03:03:10.669: INFO: ExecWithOptions: Clientset creation + Jul 27 03:03:10.669: INFO: ExecWithOptions: execute(POST https://172.21.0.1:443/api/v1/namespaces/var-expansion-6522/pods/var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) + STEP: updating the annotation value 07/27/23 03:03:10.847 + Jul 27 03:03:11.383: INFO: Successfully updated pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" + STEP: waiting for annotated pod running 07/27/23 03:03:11.383 + Jul 27 03:03:11.383: INFO: Waiting up to 2m0s for pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" in namespace "var-expansion-6522" to be "running" + Jul 27 03:03:11.392: INFO: Pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303": Phase="Running", Reason="", readiness=true. Elapsed: 9.398941ms + Jul 27 03:03:11.392: INFO: Pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" satisfied condition "running" + STEP: deleting the pod gracefully 07/27/23 03:03:11.392 + Jul 27 03:03:11.392: INFO: Deleting pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" in namespace "var-expansion-6522" + Jul 27 03:03:11.411: INFO: Wait up to 5m0s for pod "var-expansion-3c46067a-9122-404e-a2e9-cce296a6b303" to be fully deleted + [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 - Jun 12 22:33:20.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Subpath + Jul 27 03:03:45.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Subpath + [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Subpath + [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 - STEP: Destroying namespace "subpath-6274" for this suite. 
06/12/23 22:33:20.705 + STEP: Destroying namespace "var-expansion-6522" for this suite. 07/27/23 03:03:45.439 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSS +SSS ------------------------------ -[sig-storage] Secrets - should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:47 -[BeforeEach] [sig-storage] Secrets +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 +[BeforeEach] [sig-apps] DisruptionController set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:33:20.728 -Jun 12 22:33:20.729: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 22:33:20.732 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:20.772 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:20.782 -[BeforeEach] [sig-storage] Secrets +STEP: Creating a kubernetes client 07/27/23 03:03:45.472 +Jul 27 03:03:45.472: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename disruption 07/27/23 03:03:45.473 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:45.513 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:45.522 +[BeforeEach] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:47 -STEP: Creating secret with name secret-test-a7ee7f47-3bb3-4df7-b3ab-c58a83956008 06/12/23 22:33:20.796 -STEP: Creating a pod to test consume secrets 06/12/23 22:33:20.815 -Jun 12 22:33:20.847: INFO: Waiting up to 5m0s for pod "pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2" in namespace "secrets-1912" to be "Succeeded or Failed" -Jun 12 22:33:20.862: INFO: Pod "pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.158492ms -Jun 12 22:33:22.875: INFO: Pod "pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027611702s -Jun 12 22:33:24.882: INFO: Pod "pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034237405s -STEP: Saw pod success 06/12/23 22:33:24.882 -Jun 12 22:33:24.882: INFO: Pod "pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2" satisfied condition "Succeeded or Failed" -Jun 12 22:33:24.892: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2 container secret-volume-test: -STEP: delete the pod 06/12/23 22:33:24.929 -Jun 12 22:33:24.955: INFO: Waiting for pod pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2 to disappear -Jun 12 22:33:24.964: INFO: Pod pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2 no longer exists -[AfterEach] [sig-storage] Secrets +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + set up framework | framework.go:178 +STEP: Creating a kubernetes client 07/27/23 03:03:45.534 +Jul 27 03:03:45.534: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename disruption-2 07/27/23 03:03:45.535 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:45.583 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:45.592 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/metrics/init/init.go:31 +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 +STEP: Waiting for the pdb to be processed 07/27/23 03:03:45.642 +STEP: Waiting for the pdb to be processed 07/27/23 03:03:47.675 +STEP: Waiting for the pdb to be processed 07/27/23 03:03:49.721 +STEP: listing a collection of PDBs across all namespaces 07/27/23 03:03:51.738 +STEP: listing a collection of PDBs in namespace disruption-8939 07/27/23 03:03:51.746 +STEP: deleting a collection of PDBs 07/27/23 03:03:51.753 +STEP: Waiting for the PDB collection to be deleted 07/27/23 03:03:51.777 +[AfterEach] Listing PodDisruptionBudgets for all namespaces test/e2e/framework/node/init/init.go:32 -Jun 12 22:33:24.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Secrets +Jul 27 03:03:51.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 +Jul 27 03:03:51.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-2-4658" for this suite. 07/27/23 03:03:51.807 +[DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] DisruptionController tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-1912" for this suite. 06/12/23 22:33:24.995 +STEP: Destroying namespace "disruption-8939" for this suite. 
07/27/23 03:03:51.829 ------------------------------ -• [4.298 seconds] -[sig-storage] Secrets -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:47 +• [SLOW TEST] [6.381 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + Listing PodDisruptionBudgets for all namespaces + test/e2e/apps/disruption.go:78 + should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Secrets + [BeforeEach] [sig-apps] DisruptionController set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:33:20.728 - Jun 12 22:33:20.729: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 22:33:20.732 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:20.772 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:20.782 - [BeforeEach] [sig-storage] Secrets + STEP: Creating a kubernetes client 07/27/23 03:03:45.472 + Jul 27 03:03:45.472: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename disruption 07/27/23 03:03:45.473 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:45.513 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:45.522 + [BeforeEach] [sig-apps] DisruptionController test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:47 - STEP: Creating secret with name secret-test-a7ee7f47-3bb3-4df7-b3ab-c58a83956008 06/12/23 22:33:20.796 - STEP: Creating a pod to test consume secrets 06/12/23 22:33:20.815 - Jun 12 22:33:20.847: INFO: Waiting up to 5m0s for pod "pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2" in namespace "secrets-1912" to be "Succeeded or Failed" - Jun 12 22:33:20.862: INFO: Pod "pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.158492ms - Jun 12 22:33:22.875: INFO: Pod "pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027611702s - Jun 12 22:33:24.882: INFO: Pod "pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034237405s - STEP: Saw pod success 06/12/23 22:33:24.882 - Jun 12 22:33:24.882: INFO: Pod "pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2" satisfied condition "Succeeded or Failed" - Jun 12 22:33:24.892: INFO: Trying to get logs from node 10.138.75.70 pod pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2 container secret-volume-test: - STEP: delete the pod 06/12/23 22:33:24.929 - Jun 12 22:33:24.955: INFO: Waiting for pod pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2 to disappear - Jun 12 22:33:24.964: INFO: Pod pod-secrets-0d9d0b1a-671e-4b31-8dbe-2c0ef70a95b2 no longer exists - [AfterEach] [sig-storage] Secrets + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [BeforeEach] Listing PodDisruptionBudgets for all namespaces + set up framework | framework.go:178 + STEP: Creating a kubernetes client 07/27/23 03:03:45.534 + Jul 27 03:03:45.534: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename disruption-2 07/27/23 03:03:45.535 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:45.583 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:45.592 + [BeforeEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/metrics/init/init.go:31 + [It] should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 + STEP: Waiting for the pdb to be processed 07/27/23 03:03:45.642 + STEP: Waiting for the pdb to be processed 07/27/23 03:03:47.675 + STEP: Waiting for the pdb to be processed 07/27/23 03:03:49.721 + STEP: listing a collection of PDBs across all namespaces 07/27/23 03:03:51.738 + STEP: listing a collection of PDBs in namespace disruption-8939 07/27/23 03:03:51.746 + STEP: deleting a collection of PDBs 07/27/23 03:03:51.753 + STEP: Waiting for the PDB collection to be deleted 07/27/23 03:03:51.777 + [AfterEach] Listing PodDisruptionBudgets for all namespaces test/e2e/framework/node/init/init.go:32 - Jun 12 22:33:24.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Secrets + Jul 27 03:03:51.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 + Jul 27 03:03:51.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-2-4658" for this suite. 07/27/23 03:03:51.807 + [DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] DisruptionController tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-1912" for this suite. 06/12/23 22:33:24.995 + STEP: Destroying namespace "disruption-8939" for this suite. 
07/27/23 03:03:51.829 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-network] DNS - should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] - test/e2e/network/dns.go:193 -[BeforeEach] [sig-network] DNS +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 +[BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:33:25.035 -Jun 12 22:33:25.035: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename dns 06/12/23 22:33:25.037 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:25.075 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:25.087 -[BeforeEach] [sig-network] DNS +STEP: Creating a kubernetes client 07/27/23 03:03:51.854 +Jul 27 03:03:51.854: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 03:03:51.855 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:51.896 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:51.907 +[BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] - test/e2e/network/dns.go:193 -STEP: Creating a test headless service 06/12/23 22:33:25.127 -STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3347 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3347;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3347 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3347;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3347.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3347.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3347.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3347.svc;check="$$(dig +notcp +noall +answer +search 207.124.21.172.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/172.21.124.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.124.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.124.207_tcp@PTR;sleep 1; done - 06/12/23 22:33:25.223 -STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3347 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3347;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3347 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3347;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3347.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3347.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3347.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3347.svc;check="$$(dig +notcp +noall +answer +search 207.124.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.124.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.124.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.124.207_tcp@PTR;sleep 1; done - 06/12/23 22:33:25.223 -STEP: creating a pod to probe DNS 06/12/23 22:33:25.224 -STEP: submitting the pod to kubernetes 06/12/23 22:33:25.224 -Jun 12 22:33:25.291: INFO: Waiting up to 15m0s for pod "dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5" in namespace "dns-3347" to be "running" -Jun 12 22:33:25.311: INFO: Pod "dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.066126ms -Jun 12 22:33:27.322: INFO: Pod "dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031241614s -Jun 12 22:33:29.322: INFO: Pod "dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.031042328s -Jun 12 22:33:29.322: INFO: Pod "dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5" satisfied condition "running" -STEP: retrieving the pod 06/12/23 22:33:29.322 -STEP: looking for the results for each expected name from probers 06/12/23 22:33:29.332 -Jun 12 22:33:29.352: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.366: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-3347 from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.395: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3347 from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.414: INFO: Unable to read wheezy_udp@dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.429: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.443: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.457: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.522: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.537: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.548: INFO: Unable to read jessie_udp@dns-test-service.dns-3347 from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.561: INFO: Unable to read jessie_tcp@dns-test-service.dns-3347 from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.573: INFO: Unable to read jessie_udp@dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 
22:33:29.596: INFO: Unable to read jessie_tcp@dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.614: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.656: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) -Jun 12 22:33:29.739: INFO: Lookups using dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3347 wheezy_tcp@dns-test-service.dns-3347 wheezy_udp@dns-test-service.dns-3347.svc wheezy_tcp@dns-test-service.dns-3347.svc wheezy_udp@_http._tcp.dns-test-service.dns-3347.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3347.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3347 jessie_tcp@dns-test-service.dns-3347 jessie_udp@dns-test-service.dns-3347.svc jessie_tcp@dns-test-service.dns-3347.svc jessie_udp@_http._tcp.dns-test-service.dns-3347.svc jessie_tcp@_http._tcp.dns-test-service.dns-3347.svc] - -Jun 12 22:33:35.087: INFO: DNS probes using dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5 succeeded - -STEP: deleting the pod 06/12/23 22:33:35.087 -STEP: deleting the test service 06/12/23 22:33:35.146 -STEP: deleting the test headless service 06/12/23 22:33:35.199 -[AfterEach] [sig-network] DNS +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 +STEP: Creating configMap with name configmap-test-volume-map-bc5bf3b1-8600-4a80-a7d0-83c34dbd8eff 07/27/23 03:03:51.917 +STEP: Creating a pod to test consume configMaps 07/27/23 03:03:51.942 +Jul 27 03:03:51.968: INFO: Waiting up to 5m0s for pod "pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4" in namespace "configmap-3537" to be "Succeeded or Failed" +Jul 27 03:03:51.979: INFO: Pod "pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.137571ms +Jul 27 03:03:53.988: INFO: Pod "pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019778985s +Jul 27 03:03:55.989: INFO: Pod "pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021009633s +STEP: Saw pod success 07/27/23 03:03:55.989 +Jul 27 03:03:55.989: INFO: Pod "pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4" satisfied condition "Succeeded or Failed" +Jul 27 03:03:55.998: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4 container agnhost-container: +STEP: delete the pod 07/27/23 03:03:56.026 +Jul 27 03:03:56.053: INFO: Waiting for pod pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4 to disappear +Jul 27 03:03:56.061: INFO: Pod pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4 no longer exists +[AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 22:33:35.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] DNS +Jul 27 03:03:56.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] DNS +[DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "dns-3347" for this suite. 06/12/23 22:33:35.254 +STEP: Destroying namespace "configmap-3537" for this suite. 07/27/23 03:03:56.075 ------------------------------ -• [SLOW TEST] [10.236 seconds] -[sig-network] DNS -test/e2e/network/common/framework.go:23 - should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] - test/e2e/network/dns.go:193 +• [4.242 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] DNS + [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:33:25.035 - Jun 12 22:33:25.035: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename dns 06/12/23 22:33:25.037 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:25.075 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:25.087 - [BeforeEach] [sig-network] DNS + STEP: Creating a kubernetes client 07/27/23 03:03:51.854 + Jul 27 03:03:51.854: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 03:03:51.855 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:51.896 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:51.907 + [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] - test/e2e/network/dns.go:193 - STEP: Creating a test headless service 06/12/23 22:33:25.127 - STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3347 A)" && test -n "$$check" 
&& echo OK > /results/wheezy_udp@dns-test-service.dns-3347;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3347 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3347;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3347.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3347.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3347.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3347.svc;check="$$(dig +notcp +noall +answer +search 207.124.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.124.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.124.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.124.207_tcp@PTR;sleep 1; done - 06/12/23 22:33:25.223 - STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3347 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3347;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3347 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3347;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3347.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3347.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3347.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3347.svc;check="$$(dig +notcp +noall +answer +search 207.124.21.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.21.124.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.124.21.172.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/172.21.124.207_tcp@PTR;sleep 1; done - 06/12/23 22:33:25.223 - STEP: creating a pod to probe DNS 06/12/23 22:33:25.224 - STEP: submitting the pod to kubernetes 06/12/23 22:33:25.224 - Jun 12 22:33:25.291: INFO: Waiting up to 15m0s for pod "dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5" in namespace "dns-3347" to be "running" - Jun 12 22:33:25.311: INFO: Pod "dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.066126ms - Jun 12 22:33:27.322: INFO: Pod "dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031241614s - Jun 12 22:33:29.322: INFO: Pod "dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5": Phase="Running", Reason="", readiness=true. Elapsed: 4.031042328s - Jun 12 22:33:29.322: INFO: Pod "dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5" satisfied condition "running" - STEP: retrieving the pod 06/12/23 22:33:29.322 - STEP: looking for the results for each expected name from probers 06/12/23 22:33:29.332 - Jun 12 22:33:29.352: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.366: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-3347 from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.395: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3347 from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.414: INFO: Unable to read wheezy_udp@dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.429: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.443: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.457: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.522: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.537: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods 
dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.548: INFO: Unable to read jessie_udp@dns-test-service.dns-3347 from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.561: INFO: Unable to read jessie_tcp@dns-test-service.dns-3347 from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.573: INFO: Unable to read jessie_udp@dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.596: INFO: Unable to read jessie_tcp@dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.614: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.656: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3347.svc from pod dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5: the server could not find the requested resource (get pods dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5) - Jun 12 22:33:29.739: INFO: Lookups using dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3347 wheezy_tcp@dns-test-service.dns-3347 wheezy_udp@dns-test-service.dns-3347.svc wheezy_tcp@dns-test-service.dns-3347.svc wheezy_udp@_http._tcp.dns-test-service.dns-3347.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3347.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3347 jessie_tcp@dns-test-service.dns-3347 jessie_udp@dns-test-service.dns-3347.svc jessie_tcp@dns-test-service.dns-3347.svc jessie_udp@_http._tcp.dns-test-service.dns-3347.svc jessie_tcp@_http._tcp.dns-test-service.dns-3347.svc] - - Jun 12 22:33:35.087: INFO: DNS probes using dns-3347/dns-test-948fe551-b88d-478b-80fa-64d0a6fbf2a5 succeeded - - STEP: deleting the pod 06/12/23 22:33:35.087 - STEP: deleting the test service 06/12/23 22:33:35.146 - STEP: deleting the test headless service 06/12/23 22:33:35.199 - [AfterEach] [sig-network] DNS + [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 + STEP: Creating configMap with name configmap-test-volume-map-bc5bf3b1-8600-4a80-a7d0-83c34dbd8eff 07/27/23 03:03:51.917 + STEP: Creating a pod to test consume configMaps 07/27/23 03:03:51.942 + Jul 27 03:03:51.968: INFO: Waiting up to 5m0s for pod "pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4" in namespace "configmap-3537" to be "Succeeded or Failed" + Jul 27 03:03:51.979: INFO: Pod "pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.137571ms + Jul 27 03:03:53.988: INFO: Pod "pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019778985s + Jul 27 03:03:55.989: INFO: Pod "pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021009633s + STEP: Saw pod success 07/27/23 03:03:55.989 + Jul 27 03:03:55.989: INFO: Pod "pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4" satisfied condition "Succeeded or Failed" + Jul 27 03:03:55.998: INFO: Trying to get logs from node 10.245.128.19 pod pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4 container agnhost-container: + STEP: delete the pod 07/27/23 03:03:56.026 + Jul 27 03:03:56.053: INFO: Waiting for pod pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4 to disappear + Jul 27 03:03:56.061: INFO: Pod pod-configmaps-7abb5cab-4dc5-4956-b425-5600f87ea7d4 no longer exists + [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 22:33:35.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] DNS + Jul 27 03:03:56.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] DNS + [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "dns-3347" for this suite. 06/12/23 22:33:35.254 + STEP: Destroying namespace "configmap-3537" for this suite. 07/27/23 03:03:56.075 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] EmptyDir volumes - should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:177 -[BeforeEach] [sig-storage] EmptyDir volumes +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:33:35.277 -Jun 12 22:33:35.278: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename emptydir 06/12/23 22:33:35.282 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:35.357 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:35.372 -[BeforeEach] [sig-storage] EmptyDir volumes +STEP: Creating a kubernetes client 07/27/23 03:03:56.097 +Jul 27 03:03:56.097: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename webhook 07/27/23 03:03:56.098 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:56.139 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:56.151 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:177 -STEP: Creating a pod to test emptydir 0666 on node default medium 06/12/23 22:33:35.387 -Jun 12 22:33:35.417: INFO: Waiting up to 5m0s for pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb" in namespace "emptydir-1311" to be "Succeeded or Failed" -Jun 12 22:33:35.433: INFO: Pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb": 
Phase="Pending", Reason="", readiness=false. Elapsed: 15.210743ms -Jun 12 22:33:37.497: INFO: Pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079578268s -Jun 12 22:33:39.445: INFO: Pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027593578s -Jun 12 22:33:41.445: INFO: Pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027212063s -STEP: Saw pod success 06/12/23 22:33:41.445 -Jun 12 22:33:41.445: INFO: Pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb" satisfied condition "Succeeded or Failed" -Jun 12 22:33:41.454: INFO: Trying to get logs from node 10.138.75.70 pod pod-bfc05a10-d228-488f-8d33-48a90dc980eb container test-container: -STEP: delete the pod 06/12/23 22:33:41.479 -Jun 12 22:33:41.503: INFO: Waiting for pod pod-bfc05a10-d228-488f-8d33-48a90dc980eb to disappear -Jun 12 22:33:41.513: INFO: Pod pod-bfc05a10-d228-488f-8d33-48a90dc980eb no longer exists -[AfterEach] [sig-storage] EmptyDir volumes +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 07/27/23 03:03:56.217 +STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 03:03:56.518 +STEP: Deploying the webhook pod 07/27/23 03:03:56.552 +STEP: Wait for the deployment to be ready 07/27/23 03:03:56.581 +Jul 27 03:03:56.600: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 07/27/23 03:03:58.626 +STEP: Verifying the service has paired with the endpoint 07/27/23 03:03:58.661 +Jul 27 03:03:59.662: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 +STEP: Setting timeout (1s) shorter than webhook latency (5s) 07/27/23 03:03:59.673 +STEP: Registering slow webhook via the AdmissionRegistration API 07/27/23 03:03:59.673 +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) 07/27/23 03:03:59.722 +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 07/27/23 03:04:00.754 +STEP: Registering slow webhook via the AdmissionRegistration API 07/27/23 03:04:00.754 +STEP: Having no error when timeout is longer than webhook latency 07/27/23 03:04:01.855 +STEP: Registering slow webhook via the AdmissionRegistration API 07/27/23 03:04:01.855 +STEP: Having no error when timeout is empty (defaulted to 10s in v1) 07/27/23 03:04:07.001 +STEP: Registering slow webhook via the AdmissionRegistration API 07/27/23 03:04:07.001 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:33:41.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +Jul 27 03:04:12.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] EmptyDir volumes +[DeferCleanup (Each)] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "emptydir-1311" for this suite. 06/12/23 22:33:41.535 +STEP: Destroying namespace "webhook-4040" for this suite. 07/27/23 03:04:12.218 +STEP: Destroying namespace "webhook-4040-markers" for this suite. 07/27/23 03:04:12.244 ------------------------------ -• [SLOW TEST] [6.276 seconds] -[sig-storage] EmptyDir volumes -test/e2e/common/storage/framework.go:23 - should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:177 +• [SLOW TEST] [16.174 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:33:35.277 - Jun 12 22:33:35.278: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename emptydir 06/12/23 22:33:35.282 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:35.357 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:35.372 - [BeforeEach] [sig-storage] EmptyDir volumes + STEP: Creating a kubernetes client 07/27/23 03:03:56.097 + Jul 27 03:03:56.097: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename webhook 07/27/23 03:03:56.098 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:03:56.139 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:03:56.151 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] - test/e2e/common/storage/empty_dir.go:177 - STEP: Creating a pod to test emptydir 0666 on node default medium 06/12/23 22:33:35.387 - Jun 12 22:33:35.417: INFO: Waiting up to 5m0s for pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb" in namespace "emptydir-1311" to be "Succeeded or Failed" - Jun 12 22:33:35.433: INFO: Pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.210743ms - Jun 12 22:33:37.497: INFO: Pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079578268s - Jun 12 22:33:39.445: INFO: Pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027593578s - Jun 12 22:33:41.445: INFO: Pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027212063s - STEP: Saw pod success 06/12/23 22:33:41.445 - Jun 12 22:33:41.445: INFO: Pod "pod-bfc05a10-d228-488f-8d33-48a90dc980eb" satisfied condition "Succeeded or Failed" - Jun 12 22:33:41.454: INFO: Trying to get logs from node 10.138.75.70 pod pod-bfc05a10-d228-488f-8d33-48a90dc980eb container test-container: - STEP: delete the pod 06/12/23 22:33:41.479 - Jun 12 22:33:41.503: INFO: Waiting for pod pod-bfc05a10-d228-488f-8d33-48a90dc980eb to disappear - Jun 12 22:33:41.513: INFO: Pod pod-bfc05a10-d228-488f-8d33-48a90dc980eb no longer exists - [AfterEach] [sig-storage] EmptyDir volumes + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 07/27/23 03:03:56.217 + STEP: Create role binding to let webhook read extension-apiserver-authentication 07/27/23 03:03:56.518 + STEP: Deploying the webhook pod 07/27/23 03:03:56.552 + STEP: Wait for the deployment to be ready 07/27/23 03:03:56.581 + Jul 27 03:03:56.600: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 07/27/23 03:03:58.626 + STEP: Verifying the service has paired with the endpoint 07/27/23 03:03:58.661 + Jul 27 03:03:59.662: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 + STEP: Setting timeout (1s) shorter than webhook latency (5s) 07/27/23 03:03:59.673 + STEP: Registering slow webhook via the AdmissionRegistration API 07/27/23 03:03:59.673 + STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) 07/27/23 03:03:59.722 + STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 07/27/23 03:04:00.754 + STEP: Registering slow webhook via the AdmissionRegistration API 07/27/23 03:04:00.754 + STEP: Having no error when timeout is longer than webhook latency 07/27/23 03:04:01.855 + STEP: Registering slow webhook via the AdmissionRegistration API 07/27/23 03:04:01.855 + STEP: Having no error when timeout is empty (defaulted to 10s in v1) 07/27/23 03:04:07.001 + STEP: Registering slow webhook via the AdmissionRegistration API 07/27/23 03:04:07.001 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:33:41.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + Jul 27 03:04:12.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "emptydir-1311" for this suite. 06/12/23 22:33:41.535 + STEP: Destroying namespace "webhook-4040" for this suite. 07/27/23 03:04:12.218 + STEP: Destroying namespace "webhook-4040-markers" for this suite. 
07/27/23 03:04:12.244 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSS +SS ------------------------------ [sig-node] RuntimeClass - should support RuntimeClasses API operations [Conformance] - test/e2e/common/node/runtimeclass.go:189 + should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 [BeforeEach] [sig-node] RuntimeClass set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:33:41.556 -Jun 12 22:33:41.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename runtimeclass 06/12/23 22:33:41.558 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:41.602 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:41.614 +STEP: Creating a kubernetes client 07/27/23 03:04:12.271 +Jul 27 03:04:12.271: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename runtimeclass 07/27/23 03:04:12.274 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:12.332 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:12.345 [BeforeEach] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:31 -[It] should support RuntimeClasses API operations [Conformance] - test/e2e/common/node/runtimeclass.go:189 -STEP: getting /apis 06/12/23 22:33:41.625 -STEP: getting /apis/node.k8s.io 06/12/23 22:33:41.636 -STEP: getting /apis/node.k8s.io/v1 06/12/23 22:33:41.647 -STEP: creating 06/12/23 22:33:41.65 -STEP: watching 06/12/23 22:33:41.705 -Jun 12 22:33:41.706: INFO: starting watch -STEP: getting 06/12/23 22:33:41.726 -STEP: listing 06/12/23 22:33:41.735 -STEP: patching 06/12/23 22:33:41.752 -STEP: updating 06/12/23 22:33:41.766 -Jun 12 22:33:41.780: INFO: waiting for watch events with expected annotations -STEP: deleting 06/12/23 22:33:41.78 -STEP: deleting a collection 06/12/23 22:33:41.823 +[It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 +Jul 27 03:04:12.418: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-1428 to be scheduled +Jul 27 03:04:12.433: INFO: 1 pods are not scheduled: [runtimeclass-1428/test-runtimeclass-runtimeclass-1428-preconfigured-handler-75622(f730f96b-ad93-483d-9c59-894f9fe1fffb)] [AfterEach] [sig-node] RuntimeClass test/e2e/framework/node/init/init.go:32 -Jun 12 22:33:41.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 03:04:14.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] RuntimeClass dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] RuntimeClass tear down framework | framework.go:193 -STEP: Destroying namespace "runtimeclass-4443" for this suite. 06/12/23 22:33:41.884 +STEP: Destroying namespace "runtimeclass-1428" for this suite. 
07/27/23 03:04:14.5 ------------------------------ -• [0.343 seconds] +• [2.280 seconds] [sig-node] RuntimeClass test/e2e/common/node/framework.go:23 - should support RuntimeClasses API operations [Conformance] - test/e2e/common/node/runtimeclass.go:189 + should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-node] RuntimeClass set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:33:41.556 - Jun 12 22:33:41.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename runtimeclass 06/12/23 22:33:41.558 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:41.602 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:41.614 + STEP: Creating a kubernetes client 07/27/23 03:04:12.271 + Jul 27 03:04:12.271: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename runtimeclass 07/27/23 03:04:12.274 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:12.332 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:12.345 [BeforeEach] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:31 - [It] should support RuntimeClasses API operations [Conformance] - test/e2e/common/node/runtimeclass.go:189 - STEP: getting /apis 06/12/23 22:33:41.625 - STEP: getting /apis/node.k8s.io 06/12/23 22:33:41.636 - STEP: getting /apis/node.k8s.io/v1 06/12/23 22:33:41.647 - STEP: creating 06/12/23 22:33:41.65 - STEP: watching 06/12/23 22:33:41.705 - Jun 12 22:33:41.706: INFO: starting watch - STEP: getting 06/12/23 22:33:41.726 - STEP: listing 06/12/23 22:33:41.735 - STEP: patching 06/12/23 22:33:41.752 - STEP: updating 06/12/23 22:33:41.766 - Jun 12 22:33:41.780: INFO: waiting for watch events with expected annotations - STEP: deleting 06/12/23 22:33:41.78 - STEP: deleting a collection 06/12/23 22:33:41.823 + [It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 + Jul 27 03:04:12.418: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-1428 to be scheduled + Jul 27 03:04:12.433: INFO: 1 pods are not scheduled: [runtimeclass-1428/test-runtimeclass-runtimeclass-1428-preconfigured-handler-75622(f730f96b-ad93-483d-9c59-894f9fe1fffb)] [AfterEach] [sig-node] RuntimeClass test/e2e/framework/node/init/init.go:32 - Jun 12 22:33:41.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 03:04:14.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] RuntimeClass test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] RuntimeClass dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-node] RuntimeClass tear down framework | framework.go:193 - STEP: Destroying namespace "runtimeclass-4443" for this suite. 06/12/23 22:33:41.884 + STEP: Destroying namespace "runtimeclass-1428" for this suite. 
07/27/23 03:04:14.5 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSS ------------------------------- -[sig-network] Services - should complete a service status lifecycle [Conformance] - test/e2e/network/service.go:3428 -[BeforeEach] [sig-network] Services - set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:33:41.904 -Jun 12 22:33:41.905: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename services 06/12/23 22:33:41.908 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:41.954 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:41.969 -[BeforeEach] [sig-network] Services - test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 -[It] should complete a service status lifecycle [Conformance] - test/e2e/network/service.go:3428 -STEP: creating a Service 06/12/23 22:33:42.008 -STEP: watching for the Service to be added 06/12/23 22:33:42.045 -Jun 12 22:33:42.053: INFO: Found Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] -Jun 12 22:33:42.053: INFO: Service test-service-kzgn8 created -STEP: Getting /status 06/12/23 22:33:42.054 -Jun 12 22:33:42.069: INFO: Service test-service-kzgn8 has LoadBalancer: {[]} -STEP: patching the ServiceStatus 06/12/23 22:33:42.069 -STEP: watching for the Service to be patched 06/12/23 22:33:42.09 -Jun 12 22:33:42.094: INFO: observed Service test-service-kzgn8 in namespace services-8120 with annotations: map[] & LoadBalancer: {[]} -Jun 12 22:33:42.094: INFO: Found Service test-service-kzgn8 in namespace services-8120 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} -Jun 12 22:33:42.094: INFO: Service test-service-kzgn8 has service status patched -STEP: updating the ServiceStatus 06/12/23 22:33:42.094 -Jun 12 22:33:42.135: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} -STEP: watching for the Service to be updated 06/12/23 22:33:42.135 -Jun 12 22:33:42.147: INFO: Observed Service test-service-kzgn8 in namespace services-8120 with annotations: map[] & Conditions: {[]} -Jun 12 22:33:42.149: INFO: Observed event: &Service{ObjectMeta:{test-service-kzgn8 services-8120 3f22b393-5b11-437f-a854-f309df56aa0e 149213 0 2023-06-12 22:33:42 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-06-12 22:33:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-06-12 22:33:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:172.21.188.142,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[172.21.188.142],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} -Jun 12 22:33:42.150: INFO: Found Service test-service-kzgn8 in namespace services-8120 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] -Jun 12 22:33:42.151: INFO: Service test-service-kzgn8 has service status updated -STEP: patching the service 06/12/23 22:33:42.151 -STEP: watching for the Service to be patched 06/12/23 22:33:42.185 -Jun 12 22:33:42.189: INFO: observed Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service-static:true] -Jun 12 22:33:42.190: INFO: observed Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service-static:true] -Jun 12 22:33:42.190: INFO: observed Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service-static:true] -Jun 12 22:33:42.191: INFO: Found Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service:patched test-service-static:true] -Jun 12 22:33:42.191: INFO: Service test-service-kzgn8 patched -STEP: deleting the service 06/12/23 22:33:42.191 -STEP: watching for the Service to be deleted 06/12/23 22:33:42.24 -Jun 12 22:33:42.245: INFO: Observed event: ADDED -Jun 12 22:33:42.245: INFO: Observed event: MODIFIED -Jun 12 22:33:42.245: INFO: Observed event: MODIFIED -Jun 12 22:33:42.245: INFO: Observed event: MODIFIED -Jun 12 22:33:42.246: INFO: Found Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] -Jun 12 22:33:42.246: INFO: Service test-service-kzgn8 deleted -[AfterEach] [sig-network] Services - test/e2e/framework/node/init/init.go:32 -Jun 12 22:33:42.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-network] Services - test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-network] Services - dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-network] Services - tear down framework | framework.go:193 -STEP: Destroying namespace "services-8120" for this suite. 
06/12/23 22:33:42.286 ------------------------------- -• [0.404 seconds] -[sig-network] Services -test/e2e/network/common/framework.go:23 - should complete a service status lifecycle [Conformance] - test/e2e/network/service.go:3428 - - Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-network] Services - set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:33:41.904 - Jun 12 22:33:41.905: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename services 06/12/23 22:33:41.908 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:41.954 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:41.969 - [BeforeEach] [sig-network] Services - test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-network] Services - test/e2e/network/service.go:766 - [It] should complete a service status lifecycle [Conformance] - test/e2e/network/service.go:3428 - STEP: creating a Service 06/12/23 22:33:42.008 - STEP: watching for the Service to be added 06/12/23 22:33:42.045 - Jun 12 22:33:42.053: INFO: Found Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] - Jun 12 22:33:42.053: INFO: Service test-service-kzgn8 created - STEP: Getting /status 06/12/23 22:33:42.054 - Jun 12 22:33:42.069: INFO: Service test-service-kzgn8 has LoadBalancer: {[]} - STEP: patching the ServiceStatus 06/12/23 22:33:42.069 - STEP: watching for the Service to be patched 06/12/23 22:33:42.09 - Jun 12 22:33:42.094: INFO: observed Service test-service-kzgn8 in namespace services-8120 with annotations: map[] & LoadBalancer: {[]} - Jun 12 22:33:42.094: INFO: Found Service test-service-kzgn8 in namespace services-8120 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} - Jun 12 22:33:42.094: INFO: Service test-service-kzgn8 has service status patched - STEP: updating the ServiceStatus 06/12/23 22:33:42.094 - Jun 12 22:33:42.135: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} - STEP: watching for the Service to be updated 06/12/23 22:33:42.135 - Jun 12 22:33:42.147: INFO: Observed Service test-service-kzgn8 in namespace services-8120 with annotations: map[] & Conditions: {[]} - Jun 12 22:33:42.149: INFO: Observed event: &Service{ObjectMeta:{test-service-kzgn8 services-8120 3f22b393-5b11-437f-a854-f309df56aa0e 149213 0 2023-06-12 22:33:42 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-06-12 22:33:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-06-12 22:33:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:172.21.188.142,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[172.21.188.142],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} - Jun 12 22:33:42.150: INFO: Found Service test-service-kzgn8 in namespace services-8120 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] - Jun 12 22:33:42.151: INFO: Service test-service-kzgn8 has service status updated - STEP: patching the service 06/12/23 22:33:42.151 - STEP: watching for the Service to be patched 06/12/23 22:33:42.185 - Jun 12 22:33:42.189: INFO: observed Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service-static:true] - Jun 12 22:33:42.190: INFO: observed Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service-static:true] - Jun 12 22:33:42.190: INFO: observed Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service-static:true] - Jun 12 22:33:42.191: INFO: Found Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service:patched test-service-static:true] - Jun 12 22:33:42.191: INFO: Service test-service-kzgn8 patched - STEP: deleting the service 06/12/23 22:33:42.191 - STEP: watching for the Service to be deleted 06/12/23 22:33:42.24 - Jun 12 22:33:42.245: INFO: Observed event: ADDED - Jun 12 22:33:42.245: INFO: Observed event: MODIFIED - Jun 12 22:33:42.245: INFO: Observed event: MODIFIED - Jun 12 22:33:42.245: INFO: Observed event: MODIFIED - Jun 12 22:33:42.246: INFO: Found Service test-service-kzgn8 in namespace services-8120 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] - Jun 12 22:33:42.246: INFO: Service test-service-kzgn8 deleted - [AfterEach] [sig-network] Services - test/e2e/framework/node/init/init.go:32 - Jun 12 22:33:42.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-network] Services - test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-network] Services - dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-network] Services - tear down framework | framework.go:193 - STEP: Destroying namespace "services-8120" for this suite. 
06/12/23 22:33:42.286 - << End Captured GinkgoWriter Output +SSSSSSSS ------------------------------ -[sig-storage] ConfigMap - should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:47 -[BeforeEach] [sig-storage] ConfigMap +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 +[BeforeEach] [sig-node] Secrets set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:33:42.309 -Jun 12 22:33:42.310: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 22:33:42.313 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:42.356 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:42.387 -[BeforeEach] [sig-storage] ConfigMap +STEP: Creating a kubernetes client 07/27/23 03:04:14.551 +Jul 27 03:04:14.552: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename secrets 07/27/23 03:04:14.552 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:14.592 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:14.6 +[BeforeEach] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:31 -[It] should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:47 -STEP: Creating configMap with name configmap-test-volume-7f4ea4d8-3ed1-46ce-ae00-ccfe64da13df 06/12/23 22:33:42.41 -STEP: Creating a pod to test consume configMaps 06/12/23 22:33:42.427 -Jun 12 22:33:42.492: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1" in namespace "configmap-7970" to be "Succeeded or Failed" -Jun 12 22:33:42.515: INFO: Pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1": Phase="Pending", Reason="", readiness=false. Elapsed: 23.326408ms -Jun 12 22:33:44.525: INFO: Pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03357446s -Jun 12 22:33:46.526: INFO: Pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034381287s -Jun 12 22:33:48.526: INFO: Pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.03450694s -STEP: Saw pod success 06/12/23 22:33:48.526 -Jun 12 22:33:48.527: INFO: Pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1" satisfied condition "Succeeded or Failed" -Jun 12 22:33:48.537: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1 container agnhost-container: -STEP: delete the pod 06/12/23 22:33:48.559 -Jun 12 22:33:48.589: INFO: Waiting for pod pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1 to disappear -Jun 12 22:33:48.599: INFO: Pod pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1 no longer exists -[AfterEach] [sig-storage] ConfigMap +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 +STEP: Creating secret with name secret-test-53c48b2a-32af-422c-b41f-756c3d3af2b6 07/27/23 03:04:14.609 +STEP: Creating a pod to test consume secrets 07/27/23 03:04:14.623 +Jul 27 03:04:14.651: INFO: Waiting up to 5m0s for pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250" in namespace "secrets-9099" to be "Succeeded or Failed" +Jul 27 03:04:14.659: INFO: Pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363639ms +Jul 27 03:04:16.677: INFO: Pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02680425s +Jul 27 03:04:18.667: INFO: Pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01661765s +Jul 27 03:04:20.666: INFO: Pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015882891s +STEP: Saw pod success 07/27/23 03:04:20.666 +Jul 27 03:04:20.667: INFO: Pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250" satisfied condition "Succeeded or Failed" +Jul 27 03:04:20.674: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250 container secret-env-test: +STEP: delete the pod 07/27/23 03:04:20.724 +Jul 27 03:04:20.750: INFO: Waiting for pod pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250 to disappear +Jul 27 03:04:20.757: INFO: Pod pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250 no longer exists +[AfterEach] [sig-node] Secrets test/e2e/framework/node/init/init.go:32 -Jun 12 22:33:48.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] ConfigMap +Jul 27 03:04:20.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-node] Secrets dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] ConfigMap +[DeferCleanup (Each)] [sig-node] Secrets tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-7970" for this suite. 06/12/23 22:33:48.612 +STEP: Destroying namespace "secrets-9099" for this suite. 
07/27/23 03:04:20.769 ------------------------------ -• [SLOW TEST] [6.319 seconds] -[sig-storage] ConfigMap -test/e2e/common/storage/framework.go:23 - should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:47 +• [SLOW TEST] [6.239 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] ConfigMap + [BeforeEach] [sig-node] Secrets set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:33:42.309 - Jun 12 22:33:42.310: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 22:33:42.313 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:42.356 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:42.387 - [BeforeEach] [sig-storage] ConfigMap + STEP: Creating a kubernetes client 07/27/23 03:04:14.551 + Jul 27 03:04:14.552: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename secrets 07/27/23 03:04:14.552 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:14.592 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:14.6 + [BeforeEach] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:31 - [It] should be consumable from pods in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:47 - STEP: Creating configMap with name configmap-test-volume-7f4ea4d8-3ed1-46ce-ae00-ccfe64da13df 06/12/23 22:33:42.41 - STEP: Creating a pod to test consume configMaps 06/12/23 22:33:42.427 - Jun 12 22:33:42.492: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1" in namespace "configmap-7970" to be "Succeeded or Failed" - Jun 12 22:33:42.515: INFO: Pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1": Phase="Pending", Reason="", readiness=false. Elapsed: 23.326408ms - Jun 12 22:33:44.525: INFO: Pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03357446s - Jun 12 22:33:46.526: INFO: Pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034381287s - Jun 12 22:33:48.526: INFO: Pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.03450694s - STEP: Saw pod success 06/12/23 22:33:48.526 - Jun 12 22:33:48.527: INFO: Pod "pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1" satisfied condition "Succeeded or Failed" - Jun 12 22:33:48.537: INFO: Trying to get logs from node 10.138.75.70 pod pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1 container agnhost-container: - STEP: delete the pod 06/12/23 22:33:48.559 - Jun 12 22:33:48.589: INFO: Waiting for pod pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1 to disappear - Jun 12 22:33:48.599: INFO: Pod pod-configmaps-e3614cf2-bd50-4379-900a-84cbf4bf89a1 no longer exists - [AfterEach] [sig-storage] ConfigMap + [It] should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 + STEP: Creating secret with name secret-test-53c48b2a-32af-422c-b41f-756c3d3af2b6 07/27/23 03:04:14.609 + STEP: Creating a pod to test consume secrets 07/27/23 03:04:14.623 + Jul 27 03:04:14.651: INFO: Waiting up to 5m0s for pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250" in namespace "secrets-9099" to be "Succeeded or Failed" + Jul 27 03:04:14.659: INFO: Pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363639ms + Jul 27 03:04:16.677: INFO: Pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02680425s + Jul 27 03:04:18.667: INFO: Pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01661765s + Jul 27 03:04:20.666: INFO: Pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015882891s + STEP: Saw pod success 07/27/23 03:04:20.666 + Jul 27 03:04:20.667: INFO: Pod "pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250" satisfied condition "Succeeded or Failed" + Jul 27 03:04:20.674: INFO: Trying to get logs from node 10.245.128.19 pod pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250 container secret-env-test: + STEP: delete the pod 07/27/23 03:04:20.724 + Jul 27 03:04:20.750: INFO: Waiting for pod pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250 to disappear + Jul 27 03:04:20.757: INFO: Pod pod-secrets-939bf431-9c9a-4b1c-8084-d2d5cc9ef250 no longer exists + [AfterEach] [sig-node] Secrets test/e2e/framework/node/init/init.go:32 - Jun 12 22:33:48.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] ConfigMap + Jul 27 03:04:20.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-node] Secrets dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] ConfigMap + [DeferCleanup (Each)] [sig-node] Secrets tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-7970" for this suite. 06/12/23 22:33:48.612 + STEP: Destroying namespace "secrets-9099" for this suite. 
07/27/23 03:04:20.769 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +SS ------------------------------ -[sig-apps] ReplicationController - should surface a failure condition on a common issue like exceeded quota [Conformance] - test/e2e/apps/rc.go:83 -[BeforeEach] [sig-apps] ReplicationController +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 +[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:33:48.639 -Jun 12 22:33:48.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replication-controller 06/12/23 22:33:48.64 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:48.686 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:48.7 -[BeforeEach] [sig-apps] ReplicationController +STEP: Creating a kubernetes client 07/27/23 03:04:20.791 +Jul 27 03:04:20.791: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename var-expansion 07/27/23 03:04:20.792 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:20.851 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:20.859 +[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 -[It] should surface a failure condition on a common issue like exceeded quota [Conformance] - test/e2e/apps/rc.go:83 -Jun 12 22:33:48.713: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace -STEP: Creating rc "condition-test" that asks for more than the allowed pod quota 06/12/23 22:33:49.803 -STEP: Checking rc "condition-test" has the desired failure condition set 06/12/23 22:33:49.817 -STEP: Scaling down rc "condition-test" to satisfy pod quota 06/12/23 22:33:50.865 -Jun 12 22:33:50.911: INFO: Updating replication controller "condition-test" -STEP: Checking rc "condition-test" has no failure condition set 06/12/23 22:33:50.911 -[AfterEach] [sig-apps] ReplicationController +[It] should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 +STEP: Creating a pod to test substitution in volume subpath 07/27/23 03:04:20.868 +W0727 03:04:20.895935 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "dapi-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "dapi-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "dapi-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "dapi-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +Jul 27 03:04:20.896: INFO: Waiting up to 5m0s for pod "var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da" in namespace "var-expansion-8945" to be "Succeeded or Failed" +Jul 27 03:04:20.904: INFO: Pod "var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073838ms +Jul 27 03:04:22.915: INFO: Pod "var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018955063s +Jul 27 03:04:24.939: INFO: Pod "var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043755839s +STEP: Saw pod success 07/27/23 03:04:24.939 +Jul 27 03:04:24.939: INFO: Pod "var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da" satisfied condition "Succeeded or Failed" +Jul 27 03:04:24.948: INFO: Trying to get logs from node 10.245.128.19 pod var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da container dapi-container: +STEP: delete the pod 07/27/23 03:04:24.99 +Jul 27 03:04:25.035: INFO: Waiting for pod var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da to disappear +Jul 27 03:04:25.044: INFO: Pod var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da no longer exists +[AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 -Jun 12 22:33:50.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicationController +Jul 27 03:04:25.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicationController +[DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 -STEP: Destroying namespace "replication-controller-5909" for this suite. 06/12/23 22:33:50.942 +STEP: Destroying namespace "var-expansion-8945" for this suite. 07/27/23 03:04:25.058 ------------------------------ -• [2.317 seconds] -[sig-apps] ReplicationController -test/e2e/apps/framework.go:23 - should surface a failure condition on a common issue like exceeded quota [Conformance] - test/e2e/apps/rc.go:83 +• [4.292 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicationController + [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:33:48.639 - Jun 12 22:33:48.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replication-controller 06/12/23 22:33:48.64 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:48.686 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:48.7 - [BeforeEach] [sig-apps] ReplicationController + STEP: Creating a kubernetes client 07/27/23 03:04:20.791 + Jul 27 03:04:20.791: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename var-expansion 07/27/23 03:04:20.792 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:20.851 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:20.859 + [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-apps] ReplicationController - test/e2e/apps/rc.go:57 - [It] should surface a failure condition on a common issue like exceeded quota [Conformance] - test/e2e/apps/rc.go:83 - Jun 12 22:33:48.713: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace - STEP: Creating rc "condition-test" that asks for more than the allowed 
pod quota 06/12/23 22:33:49.803 - STEP: Checking rc "condition-test" has the desired failure condition set 06/12/23 22:33:49.817 - STEP: Scaling down rc "condition-test" to satisfy pod quota 06/12/23 22:33:50.865 - Jun 12 22:33:50.911: INFO: Updating replication controller "condition-test" - STEP: Checking rc "condition-test" has no failure condition set 06/12/23 22:33:50.911 - [AfterEach] [sig-apps] ReplicationController + [It] should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 + STEP: Creating a pod to test substitution in volume subpath 07/27/23 03:04:20.868 + W0727 03:04:20.895935 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "dapi-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "dapi-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "dapi-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "dapi-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + Jul 27 03:04:20.896: INFO: Waiting up to 5m0s for pod "var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da" in namespace "var-expansion-8945" to be "Succeeded or Failed" + Jul 27 03:04:20.904: INFO: Pod "var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073838ms + Jul 27 03:04:22.915: INFO: Pod "var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018955063s + Jul 27 03:04:24.939: INFO: Pod "var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043755839s + STEP: Saw pod success 07/27/23 03:04:24.939 + Jul 27 03:04:24.939: INFO: Pod "var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da" satisfied condition "Succeeded or Failed" + Jul 27 03:04:24.948: INFO: Trying to get logs from node 10.245.128.19 pod var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da container dapi-container: + STEP: delete the pod 07/27/23 03:04:24.99 + Jul 27 03:04:25.035: INFO: Waiting for pod var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da to disappear + Jul 27 03:04:25.044: INFO: Pod var-expansion-2e4761f0-3784-4ce0-921d-66f3849260da no longer exists + [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 - Jun 12 22:33:50.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicationController + Jul 27 03:04:25.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicationController + [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 - STEP: Destroying namespace "replication-controller-5909" for this suite. 06/12/23 22:33:50.942 + STEP: Destroying namespace "var-expansion-8945" for this suite. 
07/27/23 03:04:25.058 << End Captured GinkgoWriter Output ------------------------------ -SSSSS +SSSSSSSSSS ------------------------------ -[sig-storage] Secrets - optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:205 -[BeforeEach] [sig-storage] Secrets +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:276 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:33:50.959 -Jun 12 22:33:50.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename secrets 06/12/23 22:33:50.962 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:51.063 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:51.159 -[BeforeEach] [sig-storage] Secrets +STEP: Creating a kubernetes client 07/27/23 03:04:25.084 +Jul 27 03:04:25.084: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 03:04:25.084 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:25.2 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:25.208 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 -[It] optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:205 -Jun 12 22:33:51.196: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node -STEP: Creating secret with name s-test-opt-del-efcbc092-83b3-4114-ac19-e387081506ce 06/12/23 22:33:51.196 -STEP: Creating secret with name s-test-opt-upd-6bf54fac-fa49-45d0-9356-3b8c785b7513 06/12/23 22:33:51.219 -STEP: Creating the pod 06/12/23 22:33:51.266 -Jun 12 22:33:51.304: INFO: Waiting up to 5m0s for pod "pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119" in namespace "secrets-5942" to be "running and ready" -Jun 12 22:33:51.314: INFO: Pod "pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119": Phase="Pending", Reason="", readiness=false. Elapsed: 9.976174ms -Jun 12 22:33:51.314: INFO: The phase of Pod pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:33:53.324: INFO: Pod "pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020293118s -Jun 12 22:33:53.324: INFO: The phase of Pod pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:33:55.323: INFO: Pod "pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.019339308s -Jun 12 22:33:55.323: INFO: The phase of Pod pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119 is Running (Ready = true) -Jun 12 22:33:55.323: INFO: Pod "pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119" satisfied condition "running and ready" -STEP: Deleting secret s-test-opt-del-efcbc092-83b3-4114-ac19-e387081506ce 06/12/23 22:33:55.394 -STEP: Updating secret s-test-opt-upd-6bf54fac-fa49-45d0-9356-3b8c785b7513 06/12/23 22:33:55.413 -STEP: Creating secret with name s-test-opt-create-bc8d1e82-30e9-4f91-a26e-fe4697df093d 06/12/23 22:33:55.426 -STEP: waiting to observe update in volume 06/12/23 22:33:55.44 -[AfterEach] [sig-storage] Secrets +[It] works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:276 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation 07/27/23 03:04:25.221 +Jul 27 03:04:25.222: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +Jul 27 03:04:34.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 -Jun 12 22:35:09.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Secrets +Jul 27 03:04:58.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Secrets +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 -STEP: Destroying namespace "secrets-5942" for this suite. 06/12/23 22:35:09.023 +STEP: Destroying namespace "crd-publish-openapi-5348" for this suite. 
07/27/23 03:04:58.34 ------------------------------ -• [SLOW TEST] [78.106 seconds] -[sig-storage] Secrets -test/e2e/common/storage/framework.go:23 - optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:205 +• [SLOW TEST] [33.271 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:276 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Secrets + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:33:50.959 - Jun 12 22:33:50.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename secrets 06/12/23 22:33:50.962 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:33:51.063 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:33:51.159 - [BeforeEach] [sig-storage] Secrets + STEP: Creating a kubernetes client 07/27/23 03:04:25.084 + Jul 27 03:04:25.084: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename crd-publish-openapi 07/27/23 03:04:25.084 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:25.2 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:25.208 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 - [It] optional updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/secrets_volume.go:205 - Jun 12 22:33:51.196: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node - STEP: Creating secret with name s-test-opt-del-efcbc092-83b3-4114-ac19-e387081506ce 06/12/23 22:33:51.196 - STEP: Creating secret with name s-test-opt-upd-6bf54fac-fa49-45d0-9356-3b8c785b7513 06/12/23 22:33:51.219 - STEP: Creating the pod 06/12/23 22:33:51.266 - Jun 12 22:33:51.304: INFO: Waiting up to 5m0s for pod "pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119" in namespace "secrets-5942" to be "running and ready" - Jun 12 22:33:51.314: INFO: Pod "pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119": Phase="Pending", Reason="", readiness=false. Elapsed: 9.976174ms - Jun 12 22:33:51.314: INFO: The phase of Pod pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:33:53.324: INFO: Pod "pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020293118s - Jun 12 22:33:53.324: INFO: The phase of Pod pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:33:55.323: INFO: Pod "pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.019339308s - Jun 12 22:33:55.323: INFO: The phase of Pod pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119 is Running (Ready = true) - Jun 12 22:33:55.323: INFO: Pod "pod-secrets-bf323c01-385f-4c7c-8d4b-7f54a102b119" satisfied condition "running and ready" - STEP: Deleting secret s-test-opt-del-efcbc092-83b3-4114-ac19-e387081506ce 06/12/23 22:33:55.394 - STEP: Updating secret s-test-opt-upd-6bf54fac-fa49-45d0-9356-3b8c785b7513 06/12/23 22:33:55.413 - STEP: Creating secret with name s-test-opt-create-bc8d1e82-30e9-4f91-a26e-fe4697df093d 06/12/23 22:33:55.426 - STEP: waiting to observe update in volume 06/12/23 22:33:55.44 - [AfterEach] [sig-storage] Secrets + [It] works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:276 + STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation 07/27/23 03:04:25.221 + Jul 27 03:04:25.222: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + Jul 27 03:04:34.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 - Jun 12 22:35:09.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Secrets + Jul 27 03:04:58.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Secrets + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] tear down framework | framework.go:193 - STEP: Destroying namespace "secrets-5942" for this suite. 06/12/23 22:35:09.023 + STEP: Destroying namespace "crd-publish-openapi-5348" for this suite. 
07/27/23 03:04:58.34 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-apps] CronJob - should schedule multiple jobs concurrently [Conformance] - test/e2e/apps/cronjob.go:69 -[BeforeEach] [sig-apps] CronJob +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 +[BeforeEach] [sig-api-machinery] Discovery set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:35:09.065 -Jun 12 22:35:09.066: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename cronjob 06/12/23 22:35:09.068 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:35:09.14 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:35:09.193 -[BeforeEach] [sig-apps] CronJob +STEP: Creating a kubernetes client 07/27/23 03:04:58.357 +Jul 27 03:04:58.357: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename discovery 07/27/23 03:04:58.357 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:58.388 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:58.405 +[BeforeEach] [sig-api-machinery] Discovery test/e2e/framework/metrics/init/init.go:31 -[It] should schedule multiple jobs concurrently [Conformance] - test/e2e/apps/cronjob.go:69 -STEP: Creating a cronjob 06/12/23 22:35:09.206 -STEP: Ensuring more than one job is running at a time 06/12/23 22:35:09.223 -STEP: Ensuring at least two running jobs exists by listing jobs explicitly 06/12/23 22:37:01.242 -STEP: Removing cronjob 06/12/23 22:37:01.28 -[AfterEach] [sig-apps] CronJob +[BeforeEach] [sig-api-machinery] Discovery + test/e2e/apimachinery/discovery.go:43 +STEP: Setting up server cert 07/27/23 03:04:58.416 +[It] should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 +Jul 27 03:04:59.098: INFO: Checking APIGroup: apiregistration.k8s.io +Jul 27 03:04:59.102: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Jul 27 03:04:59.102: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Jul 27 03:04:59.102: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Jul 27 03:04:59.102: INFO: Checking APIGroup: apps +Jul 27 03:04:59.105: INFO: PreferredVersion.GroupVersion: apps/v1 +Jul 27 03:04:59.105: INFO: Versions found [{apps/v1 v1}] +Jul 27 03:04:59.105: INFO: apps/v1 matches apps/v1 +Jul 27 03:04:59.105: INFO: Checking APIGroup: events.k8s.io +Jul 27 03:04:59.108: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Jul 27 03:04:59.108: INFO: Versions found [{events.k8s.io/v1 v1}] +Jul 27 03:04:59.108: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Jul 27 03:04:59.108: INFO: Checking APIGroup: authentication.k8s.io +Jul 27 03:04:59.111: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Jul 27 03:04:59.111: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Jul 27 03:04:59.112: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Jul 27 03:04:59.112: INFO: Checking APIGroup: authorization.k8s.io +Jul 27 03:04:59.114: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Jul 27 03:04:59.114: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Jul 27 03:04:59.114: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Jul 27 03:04:59.114: INFO: 
Checking APIGroup: autoscaling +Jul 27 03:04:59.118: INFO: PreferredVersion.GroupVersion: autoscaling/v2 +Jul 27 03:04:59.118: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1}] +Jul 27 03:04:59.118: INFO: autoscaling/v2 matches autoscaling/v2 +Jul 27 03:04:59.118: INFO: Checking APIGroup: batch +Jul 27 03:04:59.121: INFO: PreferredVersion.GroupVersion: batch/v1 +Jul 27 03:04:59.121: INFO: Versions found [{batch/v1 v1}] +Jul 27 03:04:59.121: INFO: batch/v1 matches batch/v1 +Jul 27 03:04:59.121: INFO: Checking APIGroup: certificates.k8s.io +Jul 27 03:04:59.124: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Jul 27 03:04:59.124: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Jul 27 03:04:59.124: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Jul 27 03:04:59.124: INFO: Checking APIGroup: networking.k8s.io +Jul 27 03:04:59.131: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Jul 27 03:04:59.131: INFO: Versions found [{networking.k8s.io/v1 v1}] +Jul 27 03:04:59.131: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Jul 27 03:04:59.131: INFO: Checking APIGroup: policy +Jul 27 03:04:59.134: INFO: PreferredVersion.GroupVersion: policy/v1 +Jul 27 03:04:59.134: INFO: Versions found [{policy/v1 v1}] +Jul 27 03:04:59.134: INFO: policy/v1 matches policy/v1 +Jul 27 03:04:59.134: INFO: Checking APIGroup: rbac.authorization.k8s.io +Jul 27 03:04:59.137: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Jul 27 03:04:59.137: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Jul 27 03:04:59.137: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Jul 27 03:04:59.137: INFO: Checking APIGroup: storage.k8s.io +Jul 27 03:04:59.140: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Jul 27 03:04:59.140: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Jul 27 03:04:59.140: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Jul 27 03:04:59.140: INFO: Checking APIGroup: admissionregistration.k8s.io +Jul 27 03:04:59.143: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Jul 27 03:04:59.143: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Jul 27 03:04:59.143: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Jul 27 03:04:59.143: INFO: Checking APIGroup: apiextensions.k8s.io +Jul 27 03:04:59.149: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Jul 27 03:04:59.149: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Jul 27 03:04:59.149: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Jul 27 03:04:59.149: INFO: Checking APIGroup: scheduling.k8s.io +Jul 27 03:04:59.153: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Jul 27 03:04:59.153: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Jul 27 03:04:59.153: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Jul 27 03:04:59.153: INFO: Checking APIGroup: coordination.k8s.io +Jul 27 03:04:59.156: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Jul 27 03:04:59.156: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Jul 27 03:04:59.156: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Jul 27 03:04:59.156: INFO: Checking APIGroup: node.k8s.io +Jul 27 03:04:59.159: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Jul 27 03:04:59.159: INFO: Versions found [{node.k8s.io/v1 v1}] +Jul 27 03:04:59.159: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Jul 27 03:04:59.159: INFO: Checking APIGroup: 
discovery.k8s.io +Jul 27 03:04:59.162: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Jul 27 03:04:59.162: INFO: Versions found [{discovery.k8s.io/v1 v1}] +Jul 27 03:04:59.162: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Jul 27 03:04:59.162: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Jul 27 03:04:59.167: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta3 +Jul 27 03:04:59.167: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta3 v1beta3} {flowcontrol.apiserver.k8s.io/v1beta2 v1beta2}] +Jul 27 03:04:59.167: INFO: flowcontrol.apiserver.k8s.io/v1beta3 matches flowcontrol.apiserver.k8s.io/v1beta3 +Jul 27 03:04:59.167: INFO: Checking APIGroup: apps.openshift.io +Jul 27 03:04:59.170: INFO: PreferredVersion.GroupVersion: apps.openshift.io/v1 +Jul 27 03:04:59.170: INFO: Versions found [{apps.openshift.io/v1 v1}] +Jul 27 03:04:59.170: INFO: apps.openshift.io/v1 matches apps.openshift.io/v1 +Jul 27 03:04:59.170: INFO: Checking APIGroup: authorization.openshift.io +Jul 27 03:04:59.174: INFO: PreferredVersion.GroupVersion: authorization.openshift.io/v1 +Jul 27 03:04:59.174: INFO: Versions found [{authorization.openshift.io/v1 v1}] +Jul 27 03:04:59.174: INFO: authorization.openshift.io/v1 matches authorization.openshift.io/v1 +Jul 27 03:04:59.174: INFO: Checking APIGroup: build.openshift.io +Jul 27 03:04:59.178: INFO: PreferredVersion.GroupVersion: build.openshift.io/v1 +Jul 27 03:04:59.178: INFO: Versions found [{build.openshift.io/v1 v1}] +Jul 27 03:04:59.178: INFO: build.openshift.io/v1 matches build.openshift.io/v1 +Jul 27 03:04:59.178: INFO: Checking APIGroup: image.openshift.io +Jul 27 03:04:59.182: INFO: PreferredVersion.GroupVersion: image.openshift.io/v1 +Jul 27 03:04:59.182: INFO: Versions found [{image.openshift.io/v1 v1}] +Jul 27 03:04:59.182: INFO: image.openshift.io/v1 matches image.openshift.io/v1 +Jul 27 03:04:59.182: INFO: Checking APIGroup: oauth.openshift.io +Jul 27 03:04:59.187: INFO: PreferredVersion.GroupVersion: oauth.openshift.io/v1 +Jul 27 03:04:59.187: INFO: Versions found [{oauth.openshift.io/v1 v1}] +Jul 27 03:04:59.187: INFO: oauth.openshift.io/v1 matches oauth.openshift.io/v1 +Jul 27 03:04:59.187: INFO: Checking APIGroup: project.openshift.io +Jul 27 03:04:59.191: INFO: PreferredVersion.GroupVersion: project.openshift.io/v1 +Jul 27 03:04:59.191: INFO: Versions found [{project.openshift.io/v1 v1}] +Jul 27 03:04:59.191: INFO: project.openshift.io/v1 matches project.openshift.io/v1 +Jul 27 03:04:59.191: INFO: Checking APIGroup: quota.openshift.io +Jul 27 03:04:59.195: INFO: PreferredVersion.GroupVersion: quota.openshift.io/v1 +Jul 27 03:04:59.195: INFO: Versions found [{quota.openshift.io/v1 v1}] +Jul 27 03:04:59.195: INFO: quota.openshift.io/v1 matches quota.openshift.io/v1 +Jul 27 03:04:59.195: INFO: Checking APIGroup: route.openshift.io +Jul 27 03:04:59.200: INFO: PreferredVersion.GroupVersion: route.openshift.io/v1 +Jul 27 03:04:59.200: INFO: Versions found [{route.openshift.io/v1 v1}] +Jul 27 03:04:59.200: INFO: route.openshift.io/v1 matches route.openshift.io/v1 +Jul 27 03:04:59.200: INFO: Checking APIGroup: security.openshift.io +Jul 27 03:04:59.204: INFO: PreferredVersion.GroupVersion: security.openshift.io/v1 +Jul 27 03:04:59.204: INFO: Versions found [{security.openshift.io/v1 v1}] +Jul 27 03:04:59.204: INFO: security.openshift.io/v1 matches security.openshift.io/v1 +Jul 27 03:04:59.204: INFO: Checking APIGroup: template.openshift.io +Jul 27 03:04:59.208: INFO: PreferredVersion.GroupVersion: 
template.openshift.io/v1 +Jul 27 03:04:59.208: INFO: Versions found [{template.openshift.io/v1 v1}] +Jul 27 03:04:59.208: INFO: template.openshift.io/v1 matches template.openshift.io/v1 +Jul 27 03:04:59.208: INFO: Checking APIGroup: user.openshift.io +Jul 27 03:04:59.211: INFO: PreferredVersion.GroupVersion: user.openshift.io/v1 +Jul 27 03:04:59.211: INFO: Versions found [{user.openshift.io/v1 v1}] +Jul 27 03:04:59.211: INFO: user.openshift.io/v1 matches user.openshift.io/v1 +Jul 27 03:04:59.211: INFO: Checking APIGroup: packages.operators.coreos.com +Jul 27 03:04:59.215: INFO: PreferredVersion.GroupVersion: packages.operators.coreos.com/v1 +Jul 27 03:04:59.215: INFO: Versions found [{packages.operators.coreos.com/v1 v1}] +Jul 27 03:04:59.215: INFO: packages.operators.coreos.com/v1 matches packages.operators.coreos.com/v1 +Jul 27 03:04:59.215: INFO: Checking APIGroup: config.openshift.io +Jul 27 03:04:59.218: INFO: PreferredVersion.GroupVersion: config.openshift.io/v1 +Jul 27 03:04:59.218: INFO: Versions found [{config.openshift.io/v1 v1}] +Jul 27 03:04:59.218: INFO: config.openshift.io/v1 matches config.openshift.io/v1 +Jul 27 03:04:59.218: INFO: Checking APIGroup: operator.openshift.io +Jul 27 03:04:59.223: INFO: PreferredVersion.GroupVersion: operator.openshift.io/v1 +Jul 27 03:04:59.223: INFO: Versions found [{operator.openshift.io/v1 v1} {operator.openshift.io/v1alpha1 v1alpha1}] +Jul 27 03:04:59.223: INFO: operator.openshift.io/v1 matches operator.openshift.io/v1 +Jul 27 03:04:59.223: INFO: Checking APIGroup: apiserver.openshift.io +Jul 27 03:04:59.226: INFO: PreferredVersion.GroupVersion: apiserver.openshift.io/v1 +Jul 27 03:04:59.226: INFO: Versions found [{apiserver.openshift.io/v1 v1}] +Jul 27 03:04:59.226: INFO: apiserver.openshift.io/v1 matches apiserver.openshift.io/v1 +Jul 27 03:04:59.226: INFO: Checking APIGroup: cloudcredential.openshift.io +Jul 27 03:04:59.229: INFO: PreferredVersion.GroupVersion: cloudcredential.openshift.io/v1 +Jul 27 03:04:59.229: INFO: Versions found [{cloudcredential.openshift.io/v1 v1}] +Jul 27 03:04:59.229: INFO: cloudcredential.openshift.io/v1 matches cloudcredential.openshift.io/v1 +Jul 27 03:04:59.229: INFO: Checking APIGroup: console.openshift.io +Jul 27 03:04:59.233: INFO: PreferredVersion.GroupVersion: console.openshift.io/v1 +Jul 27 03:04:59.233: INFO: Versions found [{console.openshift.io/v1 v1} {console.openshift.io/v1alpha1 v1alpha1}] +Jul 27 03:04:59.233: INFO: console.openshift.io/v1 matches console.openshift.io/v1 +Jul 27 03:04:59.233: INFO: Checking APIGroup: crd.projectcalico.org +Jul 27 03:04:59.238: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 +Jul 27 03:04:59.238: INFO: Versions found [{crd.projectcalico.org/v1 v1}] +Jul 27 03:04:59.238: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 +Jul 27 03:04:59.238: INFO: Checking APIGroup: imageregistry.operator.openshift.io +Jul 27 03:04:59.244: INFO: PreferredVersion.GroupVersion: imageregistry.operator.openshift.io/v1 +Jul 27 03:04:59.244: INFO: Versions found [{imageregistry.operator.openshift.io/v1 v1}] +Jul 27 03:04:59.244: INFO: imageregistry.operator.openshift.io/v1 matches imageregistry.operator.openshift.io/v1 +Jul 27 03:04:59.244: INFO: Checking APIGroup: ingress.operator.openshift.io +Jul 27 03:04:59.248: INFO: PreferredVersion.GroupVersion: ingress.operator.openshift.io/v1 +Jul 27 03:04:59.248: INFO: Versions found [{ingress.operator.openshift.io/v1 v1}] +Jul 27 03:04:59.248: INFO: ingress.operator.openshift.io/v1 matches 
ingress.operator.openshift.io/v1 +Jul 27 03:04:59.248: INFO: Checking APIGroup: k8s.cni.cncf.io +Jul 27 03:04:59.252: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 +Jul 27 03:04:59.252: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] +Jul 27 03:04:59.252: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 +Jul 27 03:04:59.252: INFO: Checking APIGroup: machineconfiguration.openshift.io +Jul 27 03:04:59.255: INFO: PreferredVersion.GroupVersion: machineconfiguration.openshift.io/v1 +Jul 27 03:04:59.255: INFO: Versions found [{machineconfiguration.openshift.io/v1 v1}] +Jul 27 03:04:59.255: INFO: machineconfiguration.openshift.io/v1 matches machineconfiguration.openshift.io/v1 +Jul 27 03:04:59.255: INFO: Checking APIGroup: monitoring.coreos.com +Jul 27 03:04:59.259: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 +Jul 27 03:04:59.259: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1beta1 v1beta1} {monitoring.coreos.com/v1alpha1 v1alpha1}] +Jul 27 03:04:59.259: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 +Jul 27 03:04:59.259: INFO: Checking APIGroup: network.operator.openshift.io +Jul 27 03:04:59.261: INFO: PreferredVersion.GroupVersion: network.operator.openshift.io/v1 +Jul 27 03:04:59.261: INFO: Versions found [{network.operator.openshift.io/v1 v1}] +Jul 27 03:04:59.261: INFO: network.operator.openshift.io/v1 matches network.operator.openshift.io/v1 +Jul 27 03:04:59.261: INFO: Checking APIGroup: operator.tigera.io +Jul 27 03:04:59.264: INFO: PreferredVersion.GroupVersion: operator.tigera.io/v1 +Jul 27 03:04:59.264: INFO: Versions found [{operator.tigera.io/v1 v1}] +Jul 27 03:04:59.264: INFO: operator.tigera.io/v1 matches operator.tigera.io/v1 +Jul 27 03:04:59.264: INFO: Checking APIGroup: operators.coreos.com +Jul 27 03:04:59.269: INFO: PreferredVersion.GroupVersion: operators.coreos.com/v2 +Jul 27 03:04:59.269: INFO: Versions found [{operators.coreos.com/v2 v2} {operators.coreos.com/v1 v1} {operators.coreos.com/v1alpha2 v1alpha2} {operators.coreos.com/v1alpha1 v1alpha1}] +Jul 27 03:04:59.269: INFO: operators.coreos.com/v2 matches operators.coreos.com/v2 +Jul 27 03:04:59.269: INFO: Checking APIGroup: performance.openshift.io +Jul 27 03:04:59.272: INFO: PreferredVersion.GroupVersion: performance.openshift.io/v2 +Jul 27 03:04:59.272: INFO: Versions found [{performance.openshift.io/v2 v2} {performance.openshift.io/v1 v1} {performance.openshift.io/v1alpha1 v1alpha1}] +Jul 27 03:04:59.272: INFO: performance.openshift.io/v2 matches performance.openshift.io/v2 +Jul 27 03:04:59.272: INFO: Checking APIGroup: samples.operator.openshift.io +Jul 27 03:04:59.279: INFO: PreferredVersion.GroupVersion: samples.operator.openshift.io/v1 +Jul 27 03:04:59.279: INFO: Versions found [{samples.operator.openshift.io/v1 v1}] +Jul 27 03:04:59.279: INFO: samples.operator.openshift.io/v1 matches samples.operator.openshift.io/v1 +Jul 27 03:04:59.279: INFO: Checking APIGroup: security.internal.openshift.io +Jul 27 03:04:59.282: INFO: PreferredVersion.GroupVersion: security.internal.openshift.io/v1 +Jul 27 03:04:59.282: INFO: Versions found [{security.internal.openshift.io/v1 v1}] +Jul 27 03:04:59.282: INFO: security.internal.openshift.io/v1 matches security.internal.openshift.io/v1 +Jul 27 03:04:59.282: INFO: Checking APIGroup: snapshot.storage.k8s.io +Jul 27 03:04:59.285: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1 +Jul 27 03:04:59.285: INFO: Versions found [{snapshot.storage.k8s.io/v1 v1}] +Jul 27 03:04:59.285: INFO: 
snapshot.storage.k8s.io/v1 matches snapshot.storage.k8s.io/v1 +Jul 27 03:04:59.285: INFO: Checking APIGroup: tuned.openshift.io +Jul 27 03:04:59.288: INFO: PreferredVersion.GroupVersion: tuned.openshift.io/v1 +Jul 27 03:04:59.288: INFO: Versions found [{tuned.openshift.io/v1 v1}] +Jul 27 03:04:59.288: INFO: tuned.openshift.io/v1 matches tuned.openshift.io/v1 +Jul 27 03:04:59.288: INFO: Checking APIGroup: controlplane.operator.openshift.io +Jul 27 03:04:59.294: INFO: PreferredVersion.GroupVersion: controlplane.operator.openshift.io/v1alpha1 +Jul 27 03:04:59.294: INFO: Versions found [{controlplane.operator.openshift.io/v1alpha1 v1alpha1}] +Jul 27 03:04:59.294: INFO: controlplane.operator.openshift.io/v1alpha1 matches controlplane.operator.openshift.io/v1alpha1 +Jul 27 03:04:59.294: INFO: Checking APIGroup: ibm.com +Jul 27 03:04:59.299: INFO: PreferredVersion.GroupVersion: ibm.com/v1alpha1 +Jul 27 03:04:59.299: INFO: Versions found [{ibm.com/v1alpha1 v1alpha1}] +Jul 27 03:04:59.299: INFO: ibm.com/v1alpha1 matches ibm.com/v1alpha1 +Jul 27 03:04:59.299: INFO: Checking APIGroup: migration.k8s.io +Jul 27 03:04:59.304: INFO: PreferredVersion.GroupVersion: migration.k8s.io/v1alpha1 +Jul 27 03:04:59.304: INFO: Versions found [{migration.k8s.io/v1alpha1 v1alpha1}] +Jul 27 03:04:59.304: INFO: migration.k8s.io/v1alpha1 matches migration.k8s.io/v1alpha1 +Jul 27 03:04:59.304: INFO: Checking APIGroup: whereabouts.cni.cncf.io +Jul 27 03:04:59.346: INFO: PreferredVersion.GroupVersion: whereabouts.cni.cncf.io/v1alpha1 +Jul 27 03:04:59.346: INFO: Versions found [{whereabouts.cni.cncf.io/v1alpha1 v1alpha1}] +Jul 27 03:04:59.346: INFO: whereabouts.cni.cncf.io/v1alpha1 matches whereabouts.cni.cncf.io/v1alpha1 +Jul 27 03:04:59.346: INFO: Checking APIGroup: helm.openshift.io +Jul 27 03:04:59.398: INFO: PreferredVersion.GroupVersion: helm.openshift.io/v1beta1 +Jul 27 03:04:59.398: INFO: Versions found [{helm.openshift.io/v1beta1 v1beta1}] +Jul 27 03:04:59.398: INFO: helm.openshift.io/v1beta1 matches helm.openshift.io/v1beta1 +Jul 27 03:04:59.398: INFO: Checking APIGroup: metrics.k8s.io +Jul 27 03:04:59.448: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Jul 27 03:04:59.448: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Jul 27 03:04:59.448: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery test/e2e/framework/node/init/init.go:32 -Jun 12 22:37:01.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] CronJob +Jul 27 03:04:59.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Discovery test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] CronJob +[DeferCleanup (Each)] [sig-api-machinery] Discovery dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] CronJob +[DeferCleanup (Each)] [sig-api-machinery] Discovery tear down framework | framework.go:193 -STEP: Destroying namespace "cronjob-9804" for this suite. 06/12/23 22:37:01.386 +STEP: Destroying namespace "discovery-6258" for this suite. 
07/27/23 03:04:59.512 ------------------------------ -• [SLOW TEST] [112.336 seconds] -[sig-apps] CronJob -test/e2e/apps/framework.go:23 - should schedule multiple jobs concurrently [Conformance] - test/e2e/apps/cronjob.go:69 +• [1.203 seconds] +[sig-api-machinery] Discovery +test/e2e/apimachinery/framework.go:23 + should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] CronJob + [BeforeEach] [sig-api-machinery] Discovery set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:35:09.065 - Jun 12 22:35:09.066: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename cronjob 06/12/23 22:35:09.068 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:35:09.14 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:35:09.193 - [BeforeEach] [sig-apps] CronJob + STEP: Creating a kubernetes client 07/27/23 03:04:58.357 + Jul 27 03:04:58.357: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename discovery 07/27/23 03:04:58.357 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:58.388 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:58.405 + [BeforeEach] [sig-api-machinery] Discovery test/e2e/framework/metrics/init/init.go:31 - [It] should schedule multiple jobs concurrently [Conformance] - test/e2e/apps/cronjob.go:69 - STEP: Creating a cronjob 06/12/23 22:35:09.206 - STEP: Ensuring more than one job is running at a time 06/12/23 22:35:09.223 - STEP: Ensuring at least two running jobs exists by listing jobs explicitly 06/12/23 22:37:01.242 - STEP: Removing cronjob 06/12/23 22:37:01.28 - [AfterEach] [sig-apps] CronJob + [BeforeEach] [sig-api-machinery] Discovery + test/e2e/apimachinery/discovery.go:43 + STEP: Setting up server cert 07/27/23 03:04:58.416 + [It] should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 + Jul 27 03:04:59.098: INFO: Checking APIGroup: apiregistration.k8s.io + Jul 27 03:04:59.102: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 + Jul 27 03:04:59.102: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] + Jul 27 03:04:59.102: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 + Jul 27 03:04:59.102: INFO: Checking APIGroup: apps + Jul 27 03:04:59.105: INFO: PreferredVersion.GroupVersion: apps/v1 + Jul 27 03:04:59.105: INFO: Versions found [{apps/v1 v1}] + Jul 27 03:04:59.105: INFO: apps/v1 matches apps/v1 + Jul 27 03:04:59.105: INFO: Checking APIGroup: events.k8s.io + Jul 27 03:04:59.108: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 + Jul 27 03:04:59.108: INFO: Versions found [{events.k8s.io/v1 v1}] + Jul 27 03:04:59.108: INFO: events.k8s.io/v1 matches events.k8s.io/v1 + Jul 27 03:04:59.108: INFO: Checking APIGroup: authentication.k8s.io + Jul 27 03:04:59.111: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 + Jul 27 03:04:59.111: INFO: Versions found [{authentication.k8s.io/v1 v1}] + Jul 27 03:04:59.112: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 + Jul 27 03:04:59.112: INFO: Checking APIGroup: authorization.k8s.io + Jul 27 03:04:59.114: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 + Jul 27 03:04:59.114: INFO: Versions found [{authorization.k8s.io/v1 v1}] + Jul 27 03:04:59.114: 
INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 + Jul 27 03:04:59.114: INFO: Checking APIGroup: autoscaling + Jul 27 03:04:59.118: INFO: PreferredVersion.GroupVersion: autoscaling/v2 + Jul 27 03:04:59.118: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1}] + Jul 27 03:04:59.118: INFO: autoscaling/v2 matches autoscaling/v2 + Jul 27 03:04:59.118: INFO: Checking APIGroup: batch + Jul 27 03:04:59.121: INFO: PreferredVersion.GroupVersion: batch/v1 + Jul 27 03:04:59.121: INFO: Versions found [{batch/v1 v1}] + Jul 27 03:04:59.121: INFO: batch/v1 matches batch/v1 + Jul 27 03:04:59.121: INFO: Checking APIGroup: certificates.k8s.io + Jul 27 03:04:59.124: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 + Jul 27 03:04:59.124: INFO: Versions found [{certificates.k8s.io/v1 v1}] + Jul 27 03:04:59.124: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 + Jul 27 03:04:59.124: INFO: Checking APIGroup: networking.k8s.io + Jul 27 03:04:59.131: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 + Jul 27 03:04:59.131: INFO: Versions found [{networking.k8s.io/v1 v1}] + Jul 27 03:04:59.131: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 + Jul 27 03:04:59.131: INFO: Checking APIGroup: policy + Jul 27 03:04:59.134: INFO: PreferredVersion.GroupVersion: policy/v1 + Jul 27 03:04:59.134: INFO: Versions found [{policy/v1 v1}] + Jul 27 03:04:59.134: INFO: policy/v1 matches policy/v1 + Jul 27 03:04:59.134: INFO: Checking APIGroup: rbac.authorization.k8s.io + Jul 27 03:04:59.137: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 + Jul 27 03:04:59.137: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] + Jul 27 03:04:59.137: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 + Jul 27 03:04:59.137: INFO: Checking APIGroup: storage.k8s.io + Jul 27 03:04:59.140: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 + Jul 27 03:04:59.140: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] + Jul 27 03:04:59.140: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 + Jul 27 03:04:59.140: INFO: Checking APIGroup: admissionregistration.k8s.io + Jul 27 03:04:59.143: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 + Jul 27 03:04:59.143: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] + Jul 27 03:04:59.143: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 + Jul 27 03:04:59.143: INFO: Checking APIGroup: apiextensions.k8s.io + Jul 27 03:04:59.149: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 + Jul 27 03:04:59.149: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] + Jul 27 03:04:59.149: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 + Jul 27 03:04:59.149: INFO: Checking APIGroup: scheduling.k8s.io + Jul 27 03:04:59.153: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 + Jul 27 03:04:59.153: INFO: Versions found [{scheduling.k8s.io/v1 v1}] + Jul 27 03:04:59.153: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 + Jul 27 03:04:59.153: INFO: Checking APIGroup: coordination.k8s.io + Jul 27 03:04:59.156: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 + Jul 27 03:04:59.156: INFO: Versions found [{coordination.k8s.io/v1 v1}] + Jul 27 03:04:59.156: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 + Jul 27 03:04:59.156: INFO: Checking APIGroup: node.k8s.io + Jul 27 03:04:59.159: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 + Jul 27 03:04:59.159: INFO: Versions found 
[{node.k8s.io/v1 v1}] + Jul 27 03:04:59.159: INFO: node.k8s.io/v1 matches node.k8s.io/v1 + Jul 27 03:04:59.159: INFO: Checking APIGroup: discovery.k8s.io + Jul 27 03:04:59.162: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 + Jul 27 03:04:59.162: INFO: Versions found [{discovery.k8s.io/v1 v1}] + Jul 27 03:04:59.162: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 + Jul 27 03:04:59.162: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io + Jul 27 03:04:59.167: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta3 + Jul 27 03:04:59.167: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta3 v1beta3} {flowcontrol.apiserver.k8s.io/v1beta2 v1beta2}] + Jul 27 03:04:59.167: INFO: flowcontrol.apiserver.k8s.io/v1beta3 matches flowcontrol.apiserver.k8s.io/v1beta3 + Jul 27 03:04:59.167: INFO: Checking APIGroup: apps.openshift.io + Jul 27 03:04:59.170: INFO: PreferredVersion.GroupVersion: apps.openshift.io/v1 + Jul 27 03:04:59.170: INFO: Versions found [{apps.openshift.io/v1 v1}] + Jul 27 03:04:59.170: INFO: apps.openshift.io/v1 matches apps.openshift.io/v1 + Jul 27 03:04:59.170: INFO: Checking APIGroup: authorization.openshift.io + Jul 27 03:04:59.174: INFO: PreferredVersion.GroupVersion: authorization.openshift.io/v1 + Jul 27 03:04:59.174: INFO: Versions found [{authorization.openshift.io/v1 v1}] + Jul 27 03:04:59.174: INFO: authorization.openshift.io/v1 matches authorization.openshift.io/v1 + Jul 27 03:04:59.174: INFO: Checking APIGroup: build.openshift.io + Jul 27 03:04:59.178: INFO: PreferredVersion.GroupVersion: build.openshift.io/v1 + Jul 27 03:04:59.178: INFO: Versions found [{build.openshift.io/v1 v1}] + Jul 27 03:04:59.178: INFO: build.openshift.io/v1 matches build.openshift.io/v1 + Jul 27 03:04:59.178: INFO: Checking APIGroup: image.openshift.io + Jul 27 03:04:59.182: INFO: PreferredVersion.GroupVersion: image.openshift.io/v1 + Jul 27 03:04:59.182: INFO: Versions found [{image.openshift.io/v1 v1}] + Jul 27 03:04:59.182: INFO: image.openshift.io/v1 matches image.openshift.io/v1 + Jul 27 03:04:59.182: INFO: Checking APIGroup: oauth.openshift.io + Jul 27 03:04:59.187: INFO: PreferredVersion.GroupVersion: oauth.openshift.io/v1 + Jul 27 03:04:59.187: INFO: Versions found [{oauth.openshift.io/v1 v1}] + Jul 27 03:04:59.187: INFO: oauth.openshift.io/v1 matches oauth.openshift.io/v1 + Jul 27 03:04:59.187: INFO: Checking APIGroup: project.openshift.io + Jul 27 03:04:59.191: INFO: PreferredVersion.GroupVersion: project.openshift.io/v1 + Jul 27 03:04:59.191: INFO: Versions found [{project.openshift.io/v1 v1}] + Jul 27 03:04:59.191: INFO: project.openshift.io/v1 matches project.openshift.io/v1 + Jul 27 03:04:59.191: INFO: Checking APIGroup: quota.openshift.io + Jul 27 03:04:59.195: INFO: PreferredVersion.GroupVersion: quota.openshift.io/v1 + Jul 27 03:04:59.195: INFO: Versions found [{quota.openshift.io/v1 v1}] + Jul 27 03:04:59.195: INFO: quota.openshift.io/v1 matches quota.openshift.io/v1 + Jul 27 03:04:59.195: INFO: Checking APIGroup: route.openshift.io + Jul 27 03:04:59.200: INFO: PreferredVersion.GroupVersion: route.openshift.io/v1 + Jul 27 03:04:59.200: INFO: Versions found [{route.openshift.io/v1 v1}] + Jul 27 03:04:59.200: INFO: route.openshift.io/v1 matches route.openshift.io/v1 + Jul 27 03:04:59.200: INFO: Checking APIGroup: security.openshift.io + Jul 27 03:04:59.204: INFO: PreferredVersion.GroupVersion: security.openshift.io/v1 + Jul 27 03:04:59.204: INFO: Versions found [{security.openshift.io/v1 v1}] + Jul 27 03:04:59.204: INFO: 
security.openshift.io/v1 matches security.openshift.io/v1 + Jul 27 03:04:59.204: INFO: Checking APIGroup: template.openshift.io + Jul 27 03:04:59.208: INFO: PreferredVersion.GroupVersion: template.openshift.io/v1 + Jul 27 03:04:59.208: INFO: Versions found [{template.openshift.io/v1 v1}] + Jul 27 03:04:59.208: INFO: template.openshift.io/v1 matches template.openshift.io/v1 + Jul 27 03:04:59.208: INFO: Checking APIGroup: user.openshift.io + Jul 27 03:04:59.211: INFO: PreferredVersion.GroupVersion: user.openshift.io/v1 + Jul 27 03:04:59.211: INFO: Versions found [{user.openshift.io/v1 v1}] + Jul 27 03:04:59.211: INFO: user.openshift.io/v1 matches user.openshift.io/v1 + Jul 27 03:04:59.211: INFO: Checking APIGroup: packages.operators.coreos.com + Jul 27 03:04:59.215: INFO: PreferredVersion.GroupVersion: packages.operators.coreos.com/v1 + Jul 27 03:04:59.215: INFO: Versions found [{packages.operators.coreos.com/v1 v1}] + Jul 27 03:04:59.215: INFO: packages.operators.coreos.com/v1 matches packages.operators.coreos.com/v1 + Jul 27 03:04:59.215: INFO: Checking APIGroup: config.openshift.io + Jul 27 03:04:59.218: INFO: PreferredVersion.GroupVersion: config.openshift.io/v1 + Jul 27 03:04:59.218: INFO: Versions found [{config.openshift.io/v1 v1}] + Jul 27 03:04:59.218: INFO: config.openshift.io/v1 matches config.openshift.io/v1 + Jul 27 03:04:59.218: INFO: Checking APIGroup: operator.openshift.io + Jul 27 03:04:59.223: INFO: PreferredVersion.GroupVersion: operator.openshift.io/v1 + Jul 27 03:04:59.223: INFO: Versions found [{operator.openshift.io/v1 v1} {operator.openshift.io/v1alpha1 v1alpha1}] + Jul 27 03:04:59.223: INFO: operator.openshift.io/v1 matches operator.openshift.io/v1 + Jul 27 03:04:59.223: INFO: Checking APIGroup: apiserver.openshift.io + Jul 27 03:04:59.226: INFO: PreferredVersion.GroupVersion: apiserver.openshift.io/v1 + Jul 27 03:04:59.226: INFO: Versions found [{apiserver.openshift.io/v1 v1}] + Jul 27 03:04:59.226: INFO: apiserver.openshift.io/v1 matches apiserver.openshift.io/v1 + Jul 27 03:04:59.226: INFO: Checking APIGroup: cloudcredential.openshift.io + Jul 27 03:04:59.229: INFO: PreferredVersion.GroupVersion: cloudcredential.openshift.io/v1 + Jul 27 03:04:59.229: INFO: Versions found [{cloudcredential.openshift.io/v1 v1}] + Jul 27 03:04:59.229: INFO: cloudcredential.openshift.io/v1 matches cloudcredential.openshift.io/v1 + Jul 27 03:04:59.229: INFO: Checking APIGroup: console.openshift.io + Jul 27 03:04:59.233: INFO: PreferredVersion.GroupVersion: console.openshift.io/v1 + Jul 27 03:04:59.233: INFO: Versions found [{console.openshift.io/v1 v1} {console.openshift.io/v1alpha1 v1alpha1}] + Jul 27 03:04:59.233: INFO: console.openshift.io/v1 matches console.openshift.io/v1 + Jul 27 03:04:59.233: INFO: Checking APIGroup: crd.projectcalico.org + Jul 27 03:04:59.238: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 + Jul 27 03:04:59.238: INFO: Versions found [{crd.projectcalico.org/v1 v1}] + Jul 27 03:04:59.238: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 + Jul 27 03:04:59.238: INFO: Checking APIGroup: imageregistry.operator.openshift.io + Jul 27 03:04:59.244: INFO: PreferredVersion.GroupVersion: imageregistry.operator.openshift.io/v1 + Jul 27 03:04:59.244: INFO: Versions found [{imageregistry.operator.openshift.io/v1 v1}] + Jul 27 03:04:59.244: INFO: imageregistry.operator.openshift.io/v1 matches imageregistry.operator.openshift.io/v1 + Jul 27 03:04:59.244: INFO: Checking APIGroup: ingress.operator.openshift.io + Jul 27 03:04:59.248: INFO: 
PreferredVersion.GroupVersion: ingress.operator.openshift.io/v1 + Jul 27 03:04:59.248: INFO: Versions found [{ingress.operator.openshift.io/v1 v1}] + Jul 27 03:04:59.248: INFO: ingress.operator.openshift.io/v1 matches ingress.operator.openshift.io/v1 + Jul 27 03:04:59.248: INFO: Checking APIGroup: k8s.cni.cncf.io + Jul 27 03:04:59.252: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 + Jul 27 03:04:59.252: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] + Jul 27 03:04:59.252: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 + Jul 27 03:04:59.252: INFO: Checking APIGroup: machineconfiguration.openshift.io + Jul 27 03:04:59.255: INFO: PreferredVersion.GroupVersion: machineconfiguration.openshift.io/v1 + Jul 27 03:04:59.255: INFO: Versions found [{machineconfiguration.openshift.io/v1 v1}] + Jul 27 03:04:59.255: INFO: machineconfiguration.openshift.io/v1 matches machineconfiguration.openshift.io/v1 + Jul 27 03:04:59.255: INFO: Checking APIGroup: monitoring.coreos.com + Jul 27 03:04:59.259: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 + Jul 27 03:04:59.259: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1beta1 v1beta1} {monitoring.coreos.com/v1alpha1 v1alpha1}] + Jul 27 03:04:59.259: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 + Jul 27 03:04:59.259: INFO: Checking APIGroup: network.operator.openshift.io + Jul 27 03:04:59.261: INFO: PreferredVersion.GroupVersion: network.operator.openshift.io/v1 + Jul 27 03:04:59.261: INFO: Versions found [{network.operator.openshift.io/v1 v1}] + Jul 27 03:04:59.261: INFO: network.operator.openshift.io/v1 matches network.operator.openshift.io/v1 + Jul 27 03:04:59.261: INFO: Checking APIGroup: operator.tigera.io + Jul 27 03:04:59.264: INFO: PreferredVersion.GroupVersion: operator.tigera.io/v1 + Jul 27 03:04:59.264: INFO: Versions found [{operator.tigera.io/v1 v1}] + Jul 27 03:04:59.264: INFO: operator.tigera.io/v1 matches operator.tigera.io/v1 + Jul 27 03:04:59.264: INFO: Checking APIGroup: operators.coreos.com + Jul 27 03:04:59.269: INFO: PreferredVersion.GroupVersion: operators.coreos.com/v2 + Jul 27 03:04:59.269: INFO: Versions found [{operators.coreos.com/v2 v2} {operators.coreos.com/v1 v1} {operators.coreos.com/v1alpha2 v1alpha2} {operators.coreos.com/v1alpha1 v1alpha1}] + Jul 27 03:04:59.269: INFO: operators.coreos.com/v2 matches operators.coreos.com/v2 + Jul 27 03:04:59.269: INFO: Checking APIGroup: performance.openshift.io + Jul 27 03:04:59.272: INFO: PreferredVersion.GroupVersion: performance.openshift.io/v2 + Jul 27 03:04:59.272: INFO: Versions found [{performance.openshift.io/v2 v2} {performance.openshift.io/v1 v1} {performance.openshift.io/v1alpha1 v1alpha1}] + Jul 27 03:04:59.272: INFO: performance.openshift.io/v2 matches performance.openshift.io/v2 + Jul 27 03:04:59.272: INFO: Checking APIGroup: samples.operator.openshift.io + Jul 27 03:04:59.279: INFO: PreferredVersion.GroupVersion: samples.operator.openshift.io/v1 + Jul 27 03:04:59.279: INFO: Versions found [{samples.operator.openshift.io/v1 v1}] + Jul 27 03:04:59.279: INFO: samples.operator.openshift.io/v1 matches samples.operator.openshift.io/v1 + Jul 27 03:04:59.279: INFO: Checking APIGroup: security.internal.openshift.io + Jul 27 03:04:59.282: INFO: PreferredVersion.GroupVersion: security.internal.openshift.io/v1 + Jul 27 03:04:59.282: INFO: Versions found [{security.internal.openshift.io/v1 v1}] + Jul 27 03:04:59.282: INFO: security.internal.openshift.io/v1 matches security.internal.openshift.io/v1 + Jul 27 
03:04:59.282: INFO: Checking APIGroup: snapshot.storage.k8s.io + Jul 27 03:04:59.285: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1 + Jul 27 03:04:59.285: INFO: Versions found [{snapshot.storage.k8s.io/v1 v1}] + Jul 27 03:04:59.285: INFO: snapshot.storage.k8s.io/v1 matches snapshot.storage.k8s.io/v1 + Jul 27 03:04:59.285: INFO: Checking APIGroup: tuned.openshift.io + Jul 27 03:04:59.288: INFO: PreferredVersion.GroupVersion: tuned.openshift.io/v1 + Jul 27 03:04:59.288: INFO: Versions found [{tuned.openshift.io/v1 v1}] + Jul 27 03:04:59.288: INFO: tuned.openshift.io/v1 matches tuned.openshift.io/v1 + Jul 27 03:04:59.288: INFO: Checking APIGroup: controlplane.operator.openshift.io + Jul 27 03:04:59.294: INFO: PreferredVersion.GroupVersion: controlplane.operator.openshift.io/v1alpha1 + Jul 27 03:04:59.294: INFO: Versions found [{controlplane.operator.openshift.io/v1alpha1 v1alpha1}] + Jul 27 03:04:59.294: INFO: controlplane.operator.openshift.io/v1alpha1 matches controlplane.operator.openshift.io/v1alpha1 + Jul 27 03:04:59.294: INFO: Checking APIGroup: ibm.com + Jul 27 03:04:59.299: INFO: PreferredVersion.GroupVersion: ibm.com/v1alpha1 + Jul 27 03:04:59.299: INFO: Versions found [{ibm.com/v1alpha1 v1alpha1}] + Jul 27 03:04:59.299: INFO: ibm.com/v1alpha1 matches ibm.com/v1alpha1 + Jul 27 03:04:59.299: INFO: Checking APIGroup: migration.k8s.io + Jul 27 03:04:59.304: INFO: PreferredVersion.GroupVersion: migration.k8s.io/v1alpha1 + Jul 27 03:04:59.304: INFO: Versions found [{migration.k8s.io/v1alpha1 v1alpha1}] + Jul 27 03:04:59.304: INFO: migration.k8s.io/v1alpha1 matches migration.k8s.io/v1alpha1 + Jul 27 03:04:59.304: INFO: Checking APIGroup: whereabouts.cni.cncf.io + Jul 27 03:04:59.346: INFO: PreferredVersion.GroupVersion: whereabouts.cni.cncf.io/v1alpha1 + Jul 27 03:04:59.346: INFO: Versions found [{whereabouts.cni.cncf.io/v1alpha1 v1alpha1}] + Jul 27 03:04:59.346: INFO: whereabouts.cni.cncf.io/v1alpha1 matches whereabouts.cni.cncf.io/v1alpha1 + Jul 27 03:04:59.346: INFO: Checking APIGroup: helm.openshift.io + Jul 27 03:04:59.398: INFO: PreferredVersion.GroupVersion: helm.openshift.io/v1beta1 + Jul 27 03:04:59.398: INFO: Versions found [{helm.openshift.io/v1beta1 v1beta1}] + Jul 27 03:04:59.398: INFO: helm.openshift.io/v1beta1 matches helm.openshift.io/v1beta1 + Jul 27 03:04:59.398: INFO: Checking APIGroup: metrics.k8s.io + Jul 27 03:04:59.448: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 + Jul 27 03:04:59.448: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] + Jul 27 03:04:59.448: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 + [AfterEach] [sig-api-machinery] Discovery test/e2e/framework/node/init/init.go:32 - Jun 12 22:37:01.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] CronJob + Jul 27 03:04:59.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Discovery test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] CronJob + [DeferCleanup (Each)] [sig-api-machinery] Discovery dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] CronJob + [DeferCleanup (Each)] [sig-api-machinery] Discovery tear down framework | framework.go:193 - STEP: Destroying namespace "cronjob-9804" for this suite. 06/12/23 22:37:01.386 + STEP: Destroying namespace "discovery-6258" for this suite. 
07/27/23 03:04:59.512 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Subpath Atomic writer volumes - should support subpaths with secret pod [Conformance] - test/e2e/storage/subpath.go:60 -[BeforeEach] [sig-storage] Subpath +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 +[BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:37:01.43 -Jun 12 22:37:01.430: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename subpath 06/12/23 22:37:01.432 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:37:01.503 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:37:01.519 -[BeforeEach] [sig-storage] Subpath +STEP: Creating a kubernetes client 07/27/23 03:04:59.561 +Jul 27 03:04:59.561: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename projected 07/27/23 03:04:59.562 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:59.589 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:59.598 +[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] Atomic writer volumes - test/e2e/storage/subpath.go:40 -STEP: Setting up data 06/12/23 22:37:01.585 -[It] should support subpaths with secret pod [Conformance] - test/e2e/storage/subpath.go:60 -STEP: Creating pod pod-subpath-test-secret-ksx2 06/12/23 22:37:01.687 -STEP: Creating a pod to test atomic-volume-subpath 06/12/23 22:37:01.687 -Jun 12 22:37:01.759: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ksx2" in namespace "subpath-7505" to be "Succeeded or Failed" -Jun 12 22:37:01.784: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.003596ms -Jun 12 22:37:03.816: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053677167s -Jun 12 22:37:05.845: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 4.083178096s -Jun 12 22:37:07.835: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 6.073279173s -Jun 12 22:37:09.804: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 8.041663649s -Jun 12 22:37:11.799: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 10.036905739s -Jun 12 22:37:13.826: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 12.064110798s -Jun 12 22:37:15.796: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 14.033398773s -Jun 12 22:37:17.796: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 16.034057197s -Jun 12 22:37:19.798: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 18.035454615s -Jun 12 22:37:21.795: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.032710535s -Jun 12 22:37:23.794: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 22.032368525s -Jun 12 22:37:25.797: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=false. Elapsed: 24.035154632s -Jun 12 22:37:27.797: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.034629714s -STEP: Saw pod success 06/12/23 22:37:27.797 -Jun 12 22:37:27.798: INFO: Pod "pod-subpath-test-secret-ksx2" satisfied condition "Succeeded or Failed" -Jun 12 22:37:27.808: INFO: Trying to get logs from node 10.138.75.70 pod pod-subpath-test-secret-ksx2 container test-container-subpath-secret-ksx2: -STEP: delete the pod 06/12/23 22:37:27.863 -Jun 12 22:37:27.888: INFO: Waiting for pod pod-subpath-test-secret-ksx2 to disappear -Jun 12 22:37:27.897: INFO: Pod pod-subpath-test-secret-ksx2 no longer exists -STEP: Deleting pod pod-subpath-test-secret-ksx2 06/12/23 22:37:27.897 -Jun 12 22:37:27.898: INFO: Deleting pod "pod-subpath-test-secret-ksx2" in namespace "subpath-7505" -[AfterEach] [sig-storage] Subpath +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 +STEP: Creating a pod to test downward API volume plugin 07/27/23 03:04:59.606 +Jul 27 03:05:00.626: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911" in namespace "projected-2117" to be "Succeeded or Failed" +Jul 27 03:05:00.634: INFO: Pod "downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911": Phase="Pending", Reason="", readiness=false. Elapsed: 8.220028ms +Jul 27 03:05:02.643: INFO: Pod "downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017474275s +Jul 27 03:05:04.646: INFO: Pod "downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019964168s +STEP: Saw pod success 07/27/23 03:05:04.646 +Jul 27 03:05:04.646: INFO: Pod "downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911" satisfied condition "Succeeded or Failed" +Jul 27 03:05:04.653: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911 container client-container: +STEP: delete the pod 07/27/23 03:05:04.686 +Jul 27 03:05:04.712: INFO: Waiting for pod downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911 to disappear +Jul 27 03:05:04.720: INFO: Pod downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 -Jun 12 22:37:27.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Subpath +Jul 27 03:05:04.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Subpath +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Subpath +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 -STEP: Destroying namespace "subpath-7505" for this suite. 06/12/23 22:37:27.921 +STEP: Destroying namespace "projected-2117" for this suite. 
07/27/23 03:05:04.735 ------------------------------ -• [SLOW TEST] [26.504 seconds] -[sig-storage] Subpath -test/e2e/storage/utils/framework.go:23 - Atomic writer volumes - test/e2e/storage/subpath.go:36 - should support subpaths with secret pod [Conformance] - test/e2e/storage/subpath.go:60 +• [SLOW TEST] [5.189 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Subpath + [BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:37:01.43 - Jun 12 22:37:01.430: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename subpath 06/12/23 22:37:01.432 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:37:01.503 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:37:01.519 - [BeforeEach] [sig-storage] Subpath + STEP: Creating a kubernetes client 07/27/23 03:04:59.561 + Jul 27 03:04:59.561: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename projected 07/27/23 03:04:59.562 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:04:59.589 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:04:59.598 + [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] Atomic writer volumes - test/e2e/storage/subpath.go:40 - STEP: Setting up data 06/12/23 22:37:01.585 - [It] should support subpaths with secret pod [Conformance] - test/e2e/storage/subpath.go:60 - STEP: Creating pod pod-subpath-test-secret-ksx2 06/12/23 22:37:01.687 - STEP: Creating a pod to test atomic-volume-subpath 06/12/23 22:37:01.687 - Jun 12 22:37:01.759: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ksx2" in namespace "subpath-7505" to be "Succeeded or Failed" - Jun 12 22:37:01.784: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.003596ms - Jun 12 22:37:03.816: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053677167s - Jun 12 22:37:05.845: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 4.083178096s - Jun 12 22:37:07.835: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 6.073279173s - Jun 12 22:37:09.804: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 8.041663649s - Jun 12 22:37:11.799: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 10.036905739s - Jun 12 22:37:13.826: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 12.064110798s - Jun 12 22:37:15.796: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 14.033398773s - Jun 12 22:37:17.796: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 16.034057197s - Jun 12 22:37:19.798: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 18.035454615s - Jun 12 22:37:21.795: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.032710535s - Jun 12 22:37:23.794: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=true. Elapsed: 22.032368525s - Jun 12 22:37:25.797: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Running", Reason="", readiness=false. Elapsed: 24.035154632s - Jun 12 22:37:27.797: INFO: Pod "pod-subpath-test-secret-ksx2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.034629714s - STEP: Saw pod success 06/12/23 22:37:27.797 - Jun 12 22:37:27.798: INFO: Pod "pod-subpath-test-secret-ksx2" satisfied condition "Succeeded or Failed" - Jun 12 22:37:27.808: INFO: Trying to get logs from node 10.138.75.70 pod pod-subpath-test-secret-ksx2 container test-container-subpath-secret-ksx2: - STEP: delete the pod 06/12/23 22:37:27.863 - Jun 12 22:37:27.888: INFO: Waiting for pod pod-subpath-test-secret-ksx2 to disappear - Jun 12 22:37:27.897: INFO: Pod pod-subpath-test-secret-ksx2 no longer exists - STEP: Deleting pod pod-subpath-test-secret-ksx2 06/12/23 22:37:27.897 - Jun 12 22:37:27.898: INFO: Deleting pod "pod-subpath-test-secret-ksx2" in namespace "subpath-7505" - [AfterEach] [sig-storage] Subpath + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 + STEP: Creating a pod to test downward API volume plugin 07/27/23 03:04:59.606 + Jul 27 03:05:00.626: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911" in namespace "projected-2117" to be "Succeeded or Failed" + Jul 27 03:05:00.634: INFO: Pod "downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911": Phase="Pending", Reason="", readiness=false. Elapsed: 8.220028ms + Jul 27 03:05:02.643: INFO: Pod "downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017474275s + Jul 27 03:05:04.646: INFO: Pod "downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019964168s + STEP: Saw pod success 07/27/23 03:05:04.646 + Jul 27 03:05:04.646: INFO: Pod "downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911" satisfied condition "Succeeded or Failed" + Jul 27 03:05:04.653: INFO: Trying to get logs from node 10.245.128.19 pod downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911 container client-container: + STEP: delete the pod 07/27/23 03:05:04.686 + Jul 27 03:05:04.712: INFO: Waiting for pod downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911 to disappear + Jul 27 03:05:04.720: INFO: Pod downwardapi-volume-0d81ea20-6a32-41d8-90bc-6bea6c454911 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 - Jun 12 22:37:27.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Subpath + Jul 27 03:05:04.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Subpath + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Subpath + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 - STEP: Destroying namespace "subpath-7505" for this suite. 06/12/23 22:37:27.921 + STEP: Destroying namespace "projected-2117" for this suite. 
07/27/23 03:05:04.735 << End Captured GinkgoWriter Output ------------------------------ -SSSSSSSSSSSSSSSSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-storage] Projected combined - should project all components that make up the projection API [Projection][NodeConformance] [Conformance] - test/e2e/common/storage/projected_combined.go:44 -[BeforeEach] [sig-storage] Projected combined +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 +[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:37:27.937 -Jun 12 22:37:27.937: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename projected 06/12/23 22:37:27.942 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:37:27.989 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:37:28.001 -[BeforeEach] [sig-storage] Projected combined +STEP: Creating a kubernetes client 07/27/23 03:05:04.751 +Jul 27 03:05:04.751: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename daemonsets 07/27/23 03:05:04.752 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:05:04.78 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:05:04.786 +[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 -[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] - test/e2e/common/storage/projected_combined.go:44 -STEP: Creating configMap with name configmap-projected-all-test-volume-137d9ab3-870f-4385-9522-f642faca5918 06/12/23 22:37:28.016 -STEP: Creating secret with name secret-projected-all-test-volume-5ba4ca0c-8093-40e5-9eb5-972f6616cdb4 06/12/23 22:37:28.034 -STEP: Creating a pod to test Check all projections for projected volume plugin 06/12/23 22:37:28.067 -Jun 12 22:37:28.099: INFO: Waiting up to 5m0s for pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d" in namespace "projected-944" to be "Succeeded or Failed" -Jun 12 22:37:28.115: INFO: Pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.871979ms -Jun 12 22:37:30.127: INFO: Pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027730185s -Jun 12 22:37:32.133: INFO: Pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03281096s -Jun 12 22:37:34.135: INFO: Pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.035247529s -STEP: Saw pod success 06/12/23 22:37:34.135 -Jun 12 22:37:34.136: INFO: Pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d" satisfied condition "Succeeded or Failed" -Jun 12 22:37:34.147: INFO: Trying to get logs from node 10.138.75.70 pod projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d container projected-all-volume-test: -STEP: delete the pod 06/12/23 22:37:34.184 -Jun 12 22:37:34.209: INFO: Waiting for pod projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d to disappear -Jun 12 22:37:34.218: INFO: Pod projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d no longer exists -[AfterEach] [sig-storage] Projected combined +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 +STEP: Creating simple DaemonSet "daemon-set" 07/27/23 03:05:04.863 +W0727 03:05:04.878426 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "app" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "app" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "app" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "app" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +STEP: Check that daemon pods launch on every node of the cluster. 07/27/23 03:05:04.878 +Jul 27 03:05:04.929: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 03:05:04.929: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 03:05:05.953: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 03:05:05.953: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 +Jul 27 03:05:06.954: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jul 27 03:05:06.954: INFO: Node 10.245.128.19 is running 0 daemon pod, expected 1 +Jul 27 03:05:07.954: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jul 27 03:05:07.954: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Getting /status 07/27/23 03:05:07.962 +Jul 27 03:05:07.972: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status 07/27/23 03:05:07.972 +Jul 27 03:05:07.993: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated 07/27/23 03:05:07.993 +Jul 27 03:05:07.997: INFO: Observed &DaemonSet event: ADDED +Jul 27 03:05:07.997: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:07.998: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:07.998: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:07.998: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:07.998: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:07.998: INFO: Found daemon set daemon-set in namespace daemonsets-4586 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Jul 27 03:05:07.998: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status 07/27/23 
03:05:07.998 +STEP: watching for the daemon set status to be patched 07/27/23 03:05:08.02 +Jul 27 03:05:08.026: INFO: Observed &DaemonSet event: ADDED +Jul 27 03:05:08.026: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:08.026: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:08.026: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:08.026: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:08.027: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:08.027: INFO: Observed daemon set daemon-set in namespace daemonsets-4586 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Jul 27 03:05:08.027: INFO: Observed &DaemonSet event: MODIFIED +Jul 27 03:05:08.027: INFO: Found daemon set daemon-set in namespace daemonsets-4586 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Jul 27 03:05:08.027: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 07/27/23 03:05:08.036 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4586, will wait for the garbage collector to delete the pods 07/27/23 03:05:08.036 +Jul 27 03:05:08.109: INFO: Deleting DaemonSet.extensions daemon-set took: 14.702403ms +Jul 27 03:05:08.210: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.953066ms +Jul 27 03:05:11.119: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jul 27 03:05:11.119: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jul 27 03:05:11.128: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"130957"},"items":null} + +Jul 27 03:05:11.136: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"130957"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 -Jun 12 22:37:34.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-storage] Projected combined +Jul 27 03:05:11.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-storage] Projected combined +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-storage] Projected combined +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 -STEP: Destroying namespace "projected-944" for this suite. 06/12/23 22:37:34.235 +STEP: Destroying namespace "daemonsets-4586" for this suite. 
07/27/23 03:05:11.19 ------------------------------ -• [SLOW TEST] [6.319 seconds] -[sig-storage] Projected combined -test/e2e/common/storage/framework.go:23 - should project all components that make up the projection API [Projection][NodeConformance] [Conformance] - test/e2e/common/storage/projected_combined.go:44 +• [SLOW TEST] [6.454 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-storage] Projected combined + [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:37:27.937 - Jun 12 22:37:27.937: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename projected 06/12/23 22:37:27.942 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:37:27.989 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:37:28.001 - [BeforeEach] [sig-storage] Projected combined + STEP: Creating a kubernetes client 07/27/23 03:05:04.751 + Jul 27 03:05:04.751: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename daemonsets 07/27/23 03:05:04.752 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:05:04.78 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:05:04.786 + [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 - [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] - test/e2e/common/storage/projected_combined.go:44 - STEP: Creating configMap with name configmap-projected-all-test-volume-137d9ab3-870f-4385-9522-f642faca5918 06/12/23 22:37:28.016 - STEP: Creating secret with name secret-projected-all-test-volume-5ba4ca0c-8093-40e5-9eb5-972f6616cdb4 06/12/23 22:37:28.034 - STEP: Creating a pod to test Check all projections for projected volume plugin 06/12/23 22:37:28.067 - Jun 12 22:37:28.099: INFO: Waiting up to 5m0s for pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d" in namespace "projected-944" to be "Succeeded or Failed" - Jun 12 22:37:28.115: INFO: Pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.871979ms - Jun 12 22:37:30.127: INFO: Pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027730185s - Jun 12 22:37:32.133: INFO: Pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03281096s - Jun 12 22:37:34.135: INFO: Pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.035247529s - STEP: Saw pod success 06/12/23 22:37:34.135 - Jun 12 22:37:34.136: INFO: Pod "projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d" satisfied condition "Succeeded or Failed" - Jun 12 22:37:34.147: INFO: Trying to get logs from node 10.138.75.70 pod projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d container projected-all-volume-test: - STEP: delete the pod 06/12/23 22:37:34.184 - Jun 12 22:37:34.209: INFO: Waiting for pod projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d to disappear - Jun 12 22:37:34.218: INFO: Pod projected-volume-a817abc1-5add-4a97-a718-fdd2aa132c4d no longer exists - [AfterEach] [sig-storage] Projected combined + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 + STEP: Creating simple DaemonSet "daemon-set" 07/27/23 03:05:04.863 + W0727 03:05:04.878426 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "app" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "app" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "app" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "app" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + STEP: Check that daemon pods launch on every node of the cluster. 07/27/23 03:05:04.878 + Jul 27 03:05:04.929: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 03:05:04.929: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 03:05:05.953: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 03:05:05.953: INFO: Node 10.245.128.17 is running 0 daemon pod, expected 1 + Jul 27 03:05:06.954: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jul 27 03:05:06.954: INFO: Node 10.245.128.19 is running 0 daemon pod, expected 1 + Jul 27 03:05:07.954: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jul 27 03:05:07.954: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Getting /status 07/27/23 03:05:07.962 + Jul 27 03:05:07.972: INFO: Daemon Set daemon-set has Conditions: [] + STEP: updating the DaemonSet Status 07/27/23 03:05:07.972 + Jul 27 03:05:07.993: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the daemon set status to be updated 07/27/23 03:05:07.993 + Jul 27 03:05:07.997: INFO: Observed &DaemonSet event: ADDED + Jul 27 03:05:07.997: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:07.998: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:07.998: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:07.998: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:07.998: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:07.998: INFO: Found daemon set daemon-set in namespace daemonsets-4586 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Jul 27 03:05:07.998: INFO: Daemon set daemon-set has an updated status + STEP: patching 
the DaemonSet Status 07/27/23 03:05:07.998 + STEP: watching for the daemon set status to be patched 07/27/23 03:05:08.02 + Jul 27 03:05:08.026: INFO: Observed &DaemonSet event: ADDED + Jul 27 03:05:08.026: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:08.026: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:08.026: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:08.026: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:08.027: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:08.027: INFO: Observed daemon set daemon-set in namespace daemonsets-4586 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Jul 27 03:05:08.027: INFO: Observed &DaemonSet event: MODIFIED + Jul 27 03:05:08.027: INFO: Found daemon set daemon-set in namespace daemonsets-4586 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] + Jul 27 03:05:08.027: INFO: Daemon set daemon-set has a patched status + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 07/27/23 03:05:08.036 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4586, will wait for the garbage collector to delete the pods 07/27/23 03:05:08.036 + Jul 27 03:05:08.109: INFO: Deleting DaemonSet.extensions daemon-set took: 14.702403ms + Jul 27 03:05:08.210: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.953066ms + Jul 27 03:05:11.119: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jul 27 03:05:11.119: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jul 27 03:05:11.128: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"130957"},"items":null} + + Jul 27 03:05:11.136: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"130957"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 - Jun 12 22:37:34.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-storage] Projected combined + Jul 27 03:05:11.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-storage] Projected combined + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-storage] Projected combined + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 - STEP: Destroying namespace "projected-944" for this suite. 06/12/23 22:37:34.235 + STEP: Destroying namespace "daemonsets-4586" for this suite. 
07/27/23 03:05:11.19 << End Captured GinkgoWriter Output ------------------------------ -SSS +SSSSSSSSSSSSSSSSSSS ------------------------------ -[sig-node] Probing container - should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:135 -[BeforeEach] [sig-node] Probing container +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 +[BeforeEach] [sig-network] Services set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:37:34.257 -Jun 12 22:37:34.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename container-probe 06/12/23 22:37:34.26 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:37:34.303 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:37:34.317 -[BeforeEach] [sig-node] Probing container +STEP: Creating a kubernetes client 07/27/23 03:05:11.207 +Jul 27 03:05:11.207: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename services 07/27/23 03:05:11.208 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:05:11.236 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:05:11.242 +[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 -[BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 -[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:135 -STEP: Creating pod busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e in namespace container-probe-4153 06/12/23 22:37:34.331 -Jun 12 22:37:34.358: INFO: Waiting up to 5m0s for pod "busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e" in namespace "container-probe-4153" to be "not pending" -Jun 12 22:37:34.371: INFO: Pod "busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.966374ms -Jun 12 22:37:36.390: INFO: Pod "busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032529255s -Jun 12 22:37:38.464: INFO: Pod "busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.105765019s -Jun 12 22:37:38.464: INFO: Pod "busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e" satisfied condition "not pending" -Jun 12 22:37:38.464: INFO: Started pod busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e in namespace container-probe-4153 -STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 22:37:38.464 -Jun 12 22:37:38.473: INFO: Initial restart count of pod busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e is 0 -Jun 12 22:38:27.082: INFO: Restart count of pod container-probe-4153/busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e is now 1 (48.609060151s elapsed) -STEP: deleting the pod 06/12/23 22:38:27.082 -[AfterEach] [sig-node] Probing container +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 +STEP: creating an Endpoint 07/27/23 03:05:11.29 +STEP: waiting for available Endpoint 07/27/23 03:05:11.348 +STEP: listing all Endpoints 07/27/23 03:05:11.353 +STEP: updating the Endpoint 07/27/23 03:05:11.372 +STEP: fetching the Endpoint 07/27/23 03:05:11.393 +STEP: patching the Endpoint 07/27/23 03:05:11.407 +STEP: fetching the Endpoint 07/27/23 03:05:11.428 +STEP: deleting the Endpoint by Collection 07/27/23 03:05:11.444 +STEP: waiting for Endpoint deletion 07/27/23 03:05:11.476 +STEP: fetching the Endpoint 07/27/23 03:05:11.479 +[AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 -Jun 12 22:38:27.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-node] Probing container +Jul 27 03:05:11.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-node] Probing container +[DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 -STEP: Destroying namespace "container-probe-4153" for this suite. 06/12/23 22:38:27.123 +STEP: Destroying namespace "services-6741" for this suite. 
07/27/23 03:05:11.504 ------------------------------ -• [SLOW TEST] [52.882 seconds] -[sig-node] Probing container -test/e2e/common/node/framework.go:23 - should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:135 +• [0.311 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-node] Probing container + [BeforeEach] [sig-network] Services set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:37:34.257 - Jun 12 22:37:34.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename container-probe 06/12/23 22:37:34.26 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:37:34.303 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:37:34.317 - [BeforeEach] [sig-node] Probing container + STEP: Creating a kubernetes client 07/27/23 03:05:11.207 + Jul 27 03:05:11.207: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename services 07/27/23 03:05:11.208 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:05:11.236 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:05:11.242 + [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 - [BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:63 - [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] - test/e2e/common/node/container_probe.go:135 - STEP: Creating pod busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e in namespace container-probe-4153 06/12/23 22:37:34.331 - Jun 12 22:37:34.358: INFO: Waiting up to 5m0s for pod "busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e" in namespace "container-probe-4153" to be "not pending" - Jun 12 22:37:34.371: INFO: Pod "busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.966374ms - Jun 12 22:37:36.390: INFO: Pod "busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032529255s - Jun 12 22:37:38.464: INFO: Pod "busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.105765019s - Jun 12 22:37:38.464: INFO: Pod "busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e" satisfied condition "not pending" - Jun 12 22:37:38.464: INFO: Started pod busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e in namespace container-probe-4153 - STEP: checking the pod's current state and verifying that restartCount is present 06/12/23 22:37:38.464 - Jun 12 22:37:38.473: INFO: Initial restart count of pod busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e is 0 - Jun 12 22:38:27.082: INFO: Restart count of pod container-probe-4153/busybox-7d941bc2-c249-4fa8-a16c-5cf9be7b933e is now 1 (48.609060151s elapsed) - STEP: deleting the pod 06/12/23 22:38:27.082 - [AfterEach] [sig-node] Probing container + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 + STEP: creating an Endpoint 07/27/23 03:05:11.29 + STEP: waiting for available Endpoint 07/27/23 03:05:11.348 + STEP: listing all Endpoints 07/27/23 03:05:11.353 + STEP: updating the Endpoint 07/27/23 03:05:11.372 + STEP: fetching the Endpoint 07/27/23 03:05:11.393 + STEP: patching the Endpoint 07/27/23 03:05:11.407 + STEP: fetching the Endpoint 07/27/23 03:05:11.428 + STEP: deleting the Endpoint by Collection 07/27/23 03:05:11.444 + STEP: waiting for Endpoint deletion 07/27/23 03:05:11.476 + STEP: fetching the Endpoint 07/27/23 03:05:11.479 + [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 - Jun 12 22:38:27.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-node] Probing container + Jul 27 03:05:11.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-node] Probing container + [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 - STEP: Destroying namespace "container-probe-4153" for this suite. 06/12/23 22:38:27.123 + STEP: Destroying namespace "services-6741" for this suite. 
07/27/23 03:05:11.504 << End Captured GinkgoWriter Output ------------------------------ -SSSSSS +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap - updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:124 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:240 [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:38:27.152 -Jun 12 22:38:27.152: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename configmap 06/12/23 22:38:27.154 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:38:27.217 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:38:27.233 +STEP: Creating a kubernetes client 07/27/23 03:05:11.519 +Jul 27 03:05:11.519: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename configmap 07/27/23 03:05:11.519 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:05:11.553 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:05:11.56 [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 -[It] updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:124 -Jun 12 22:38:27.280: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node -STEP: Creating configMap with name configmap-test-upd-5aa55ac9-43cb-4cf5-b4b4-467b7a861cbb 06/12/23 22:38:27.28 -STEP: Creating the pod 06/12/23 22:38:27.294 -Jun 12 22:38:27.317: INFO: Waiting up to 5m0s for pod "pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24" in namespace "configmap-6275" to be "running and ready" -Jun 12 22:38:27.329: INFO: Pod "pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24": Phase="Pending", Reason="", readiness=false. Elapsed: 11.64865ms -Jun 12 22:38:27.329: INFO: The phase of Pod pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:38:29.340: INFO: Pod "pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022723805s -Jun 12 22:38:29.341: INFO: The phase of Pod pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24 is Pending, waiting for it to be Running (with Ready = true) -Jun 12 22:38:31.341: INFO: Pod "pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.023928489s -Jun 12 22:38:31.342: INFO: The phase of Pod pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24 is Running (Ready = true) -Jun 12 22:38:31.342: INFO: Pod "pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24" satisfied condition "running and ready" -STEP: Updating configmap configmap-test-upd-5aa55ac9-43cb-4cf5-b4b4-467b7a861cbb 06/12/23 22:38:31.38 -STEP: waiting to observe update in volume 06/12/23 22:38:31.395 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:240 +Jul 27 03:05:11.580: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node +STEP: Creating configMap with name cm-test-opt-del-cb66c34d-f1fb-43d9-8d59-72ba2c09ffbd 07/27/23 03:05:11.58 +STEP: Creating configMap with name cm-test-opt-upd-36046ffb-4e8e-48dd-94a3-344a62c4172c 07/27/23 03:05:11.591 +STEP: Creating the pod 07/27/23 03:05:11.601 +Jul 27 03:05:11.628: INFO: Waiting up to 5m0s for pod "pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8" in namespace "configmap-5914" to be "running and ready" +Jul 27 03:05:11.638: INFO: Pod "pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.110508ms +Jul 27 03:05:11.638: INFO: The phase of Pod pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8 is Pending, waiting for it to be Running (with Ready = true) +Jul 27 03:05:13.648: INFO: Pod "pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8": Phase="Running", Reason="", readiness=true. Elapsed: 2.020567124s +Jul 27 03:05:13.649: INFO: The phase of Pod pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8 is Running (Ready = true) +Jul 27 03:05:13.649: INFO: Pod "pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8" satisfied condition "running and ready" +STEP: Deleting configmap cm-test-opt-del-cb66c34d-f1fb-43d9-8d59-72ba2c09ffbd 07/27/23 03:05:13.707 +STEP: Updating configmap cm-test-opt-upd-36046ffb-4e8e-48dd-94a3-344a62c4172c 07/27/23 03:05:13.721 +STEP: Creating configMap with name cm-test-opt-create-374a604e-8bb4-4b84-86a8-f38b54d31a44 07/27/23 03:05:13.732 +STEP: waiting to observe update in volume 07/27/23 03:05:13.742 [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 -Jun 12 22:39:34.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +Jul 27 03:05:15.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 -STEP: Destroying namespace "configmap-6275" for this suite. 06/12/23 22:39:34.843 +STEP: Destroying namespace "configmap-5914" for this suite. 
07/27/23 03:05:15.823 ------------------------------ -• [SLOW TEST] [67.711 seconds] +• [4.318 seconds] [sig-storage] ConfigMap test/e2e/common/storage/framework.go:23 - updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:124 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:240 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:38:27.152 - Jun 12 22:38:27.152: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename configmap 06/12/23 22:38:27.154 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:38:27.217 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:38:27.233 + STEP: Creating a kubernetes client 07/27/23 03:05:11.519 + Jul 27 03:05:11.519: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename configmap 07/27/23 03:05:11.519 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:05:11.553 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:05:11.56 [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 - [It] updates should be reflected in volume [NodeConformance] [Conformance] - test/e2e/common/storage/configmap_volume.go:124 - Jun 12 22:38:27.280: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node - STEP: Creating configMap with name configmap-test-upd-5aa55ac9-43cb-4cf5-b4b4-467b7a861cbb 06/12/23 22:38:27.28 - STEP: Creating the pod 06/12/23 22:38:27.294 - Jun 12 22:38:27.317: INFO: Waiting up to 5m0s for pod "pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24" in namespace "configmap-6275" to be "running and ready" - Jun 12 22:38:27.329: INFO: Pod "pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24": Phase="Pending", Reason="", readiness=false. Elapsed: 11.64865ms - Jun 12 22:38:27.329: INFO: The phase of Pod pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:38:29.340: INFO: Pod "pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022723805s - Jun 12 22:38:29.341: INFO: The phase of Pod pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24 is Pending, waiting for it to be Running (with Ready = true) - Jun 12 22:38:31.341: INFO: Pod "pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.023928489s - Jun 12 22:38:31.342: INFO: The phase of Pod pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24 is Running (Ready = true) - Jun 12 22:38:31.342: INFO: Pod "pod-configmaps-596602a7-26b6-4697-85f7-85c874f76f24" satisfied condition "running and ready" - STEP: Updating configmap configmap-test-upd-5aa55ac9-43cb-4cf5-b4b4-467b7a861cbb 06/12/23 22:38:31.38 - STEP: waiting to observe update in volume 06/12/23 22:38:31.395 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:240 + Jul 27 03:05:11.580: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node + STEP: Creating configMap with name cm-test-opt-del-cb66c34d-f1fb-43d9-8d59-72ba2c09ffbd 07/27/23 03:05:11.58 + STEP: Creating configMap with name cm-test-opt-upd-36046ffb-4e8e-48dd-94a3-344a62c4172c 07/27/23 03:05:11.591 + STEP: Creating the pod 07/27/23 03:05:11.601 + Jul 27 03:05:11.628: INFO: Waiting up to 5m0s for pod "pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8" in namespace "configmap-5914" to be "running and ready" + Jul 27 03:05:11.638: INFO: Pod "pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.110508ms + Jul 27 03:05:11.638: INFO: The phase of Pod pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8 is Pending, waiting for it to be Running (with Ready = true) + Jul 27 03:05:13.648: INFO: Pod "pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8": Phase="Running", Reason="", readiness=true. Elapsed: 2.020567124s + Jul 27 03:05:13.649: INFO: The phase of Pod pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8 is Running (Ready = true) + Jul 27 03:05:13.649: INFO: Pod "pod-configmaps-26653f5c-b67e-4ff0-a583-4a1ea1cca2f8" satisfied condition "running and ready" + STEP: Deleting configmap cm-test-opt-del-cb66c34d-f1fb-43d9-8d59-72ba2c09ffbd 07/27/23 03:05:13.707 + STEP: Updating configmap cm-test-opt-upd-36046ffb-4e8e-48dd-94a3-344a62c4172c 07/27/23 03:05:13.721 + STEP: Creating configMap with name cm-test-opt-create-374a604e-8bb4-4b84-86a8-f38b54d31a44 07/27/23 03:05:13.732 + STEP: waiting to observe update in volume 07/27/23 03:05:13.742 [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 - Jun 12 22:39:34.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + Jul 27 03:05:15.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 - STEP: Destroying namespace "configmap-6275" for this suite. 06/12/23 22:39:34.843 + STEP: Destroying namespace "configmap-5914" for this suite. 
07/27/23 03:05:15.823 << End Captured GinkgoWriter Output ------------------------------ -[sig-apps] ReplicaSet - should list and delete a collection of ReplicaSets [Conformance] - test/e2e/apps/replica_set.go:165 -[BeforeEach] [sig-apps] ReplicaSet +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 +[BeforeEach] [sig-node] Probing container set up framework | framework.go:178 -STEP: Creating a kubernetes client 06/12/23 22:39:34.863 -Jun 12 22:39:34.863: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 -STEP: Building a namespace api object, basename replicaset 06/12/23 22:39:34.866 -STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:39:34.927 -STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:39:34.991 -[BeforeEach] [sig-apps] ReplicaSet +STEP: Creating a kubernetes client 07/27/23 03:05:15.846 +Jul 27 03:05:15.846: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 +STEP: Building a namespace api object, basename container-probe 07/27/23 03:05:15.847 +STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:05:15.873 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:05:15.881 +[BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 -[It] should list and delete a collection of ReplicaSets [Conformance] - test/e2e/apps/replica_set.go:165 -STEP: Create a ReplicaSet 06/12/23 22:39:35.016 -STEP: Verify that the required pods have come up 06/12/23 22:39:35.039 -Jun 12 22:39:35.129: INFO: Pod name sample-pod: Found 3 pods out of 3 -STEP: ensuring each pod is running 06/12/23 22:39:35.174 -Jun 12 22:39:35.174: INFO: Waiting up to 5m0s for pod "test-rs-6k6g6" in namespace "replicaset-1977" to be "running" -Jun 12 22:39:35.175: INFO: Waiting up to 5m0s for pod "test-rs-b9nrl" in namespace "replicaset-1977" to be "running" -Jun 12 22:39:35.174: INFO: Waiting up to 5m0s for pod "test-rs-pz5ck" in namespace "replicaset-1977" to be "running" -Jun 12 22:39:35.191: INFO: Pod "test-rs-pz5ck": Phase="Pending", Reason="", readiness=false. Elapsed: 15.172456ms -Jun 12 22:39:35.191: INFO: Pod "test-rs-b9nrl": Phase="Pending", Reason="", readiness=false. Elapsed: 15.991499ms -Jun 12 22:39:35.191: INFO: Pod "test-rs-6k6g6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.077516ms -Jun 12 22:39:37.202: INFO: Pod "test-rs-6k6g6": Phase="Running", Reason="", readiness=true. Elapsed: 2.027297519s -Jun 12 22:39:37.202: INFO: Pod "test-rs-6k6g6" satisfied condition "running" -Jun 12 22:39:37.204: INFO: Pod "test-rs-pz5ck": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028988505s -Jun 12 22:39:37.207: INFO: Pod "test-rs-b9nrl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032335207s -Jun 12 22:39:39.203: INFO: Pod "test-rs-pz5ck": Phase="Running", Reason="", readiness=true. Elapsed: 4.027149733s -Jun 12 22:39:39.203: INFO: Pod "test-rs-pz5ck" satisfied condition "running" -Jun 12 22:39:39.206: INFO: Pod "test-rs-b9nrl": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.030773796s -Jun 12 22:39:39.206: INFO: Pod "test-rs-b9nrl" satisfied condition "running" -Jun 12 22:39:39.217: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} -STEP: Listing all ReplicaSets 06/12/23 22:39:39.217 -STEP: DeleteCollection of the ReplicaSets 06/12/23 22:39:39.236 -STEP: After DeleteCollection verify that ReplicaSets have been deleted 06/12/23 22:39:39.257 -[AfterEach] [sig-apps] ReplicaSet +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 +W0727 03:05:15.910221 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-webserver" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-webserver" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-webserver" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-webserver" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") +[AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 -Jun 12 22:39:39.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready -[DeferCleanup (Each)] [sig-apps] ReplicaSet +Jul 27 03:06:15.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 -[DeferCleanup (Each)] [sig-apps] ReplicaSet +[DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 -STEP: Destroying namespace "replicaset-1977" for this suite. 06/12/23 22:39:39.284 +STEP: Destroying namespace "container-probe-5857" for this suite. 
07/27/23 03:06:15.931 ------------------------------ -• [4.435 seconds] -[sig-apps] ReplicaSet -test/e2e/apps/framework.go:23 - should list and delete a collection of ReplicaSets [Conformance] - test/e2e/apps/replica_set.go:165 +• [SLOW TEST] [60.099 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 Begin Captured GinkgoWriter Output >> - [BeforeEach] [sig-apps] ReplicaSet + [BeforeEach] [sig-node] Probing container set up framework | framework.go:178 - STEP: Creating a kubernetes client 06/12/23 22:39:34.863 - Jun 12 22:39:34.863: INFO: >>> kubeConfig: /tmp/kubeconfig-1249129573 - STEP: Building a namespace api object, basename replicaset 06/12/23 22:39:34.866 - STEP: Waiting for a default service account to be provisioned in namespace 06/12/23 22:39:34.927 - STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/12/23 22:39:34.991 - [BeforeEach] [sig-apps] ReplicaSet + STEP: Creating a kubernetes client 07/27/23 03:05:15.846 + Jul 27 03:05:15.846: INFO: >>> kubeConfig: /tmp/kubeconfig-1337358882 + STEP: Building a namespace api object, basename container-probe 07/27/23 03:05:15.847 + STEP: Waiting for a default service account to be provisioned in namespace 07/27/23 03:05:15.873 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/27/23 03:05:15.881 + [BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 - [It] should list and delete a collection of ReplicaSets [Conformance] - test/e2e/apps/replica_set.go:165 - STEP: Create a ReplicaSet 06/12/23 22:39:35.016 - STEP: Verify that the required pods have come up 06/12/23 22:39:35.039 - Jun 12 22:39:35.129: INFO: Pod name sample-pod: Found 3 pods out of 3 - STEP: ensuring each pod is running 06/12/23 22:39:35.174 - Jun 12 22:39:35.174: INFO: Waiting up to 5m0s for pod "test-rs-6k6g6" in namespace "replicaset-1977" to be "running" - Jun 12 22:39:35.175: INFO: Waiting up to 5m0s for pod "test-rs-b9nrl" in namespace "replicaset-1977" to be "running" - Jun 12 22:39:35.174: INFO: Waiting up to 5m0s for pod "test-rs-pz5ck" in namespace "replicaset-1977" to be "running" - Jun 12 22:39:35.191: INFO: Pod "test-rs-pz5ck": Phase="Pending", Reason="", readiness=false. Elapsed: 15.172456ms - Jun 12 22:39:35.191: INFO: Pod "test-rs-b9nrl": Phase="Pending", Reason="", readiness=false. Elapsed: 15.991499ms - Jun 12 22:39:35.191: INFO: Pod "test-rs-6k6g6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.077516ms - Jun 12 22:39:37.202: INFO: Pod "test-rs-6k6g6": Phase="Running", Reason="", readiness=true. Elapsed: 2.027297519s - Jun 12 22:39:37.202: INFO: Pod "test-rs-6k6g6" satisfied condition "running" - Jun 12 22:39:37.204: INFO: Pod "test-rs-pz5ck": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028988505s - Jun 12 22:39:37.207: INFO: Pod "test-rs-b9nrl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032335207s - Jun 12 22:39:39.203: INFO: Pod "test-rs-pz5ck": Phase="Running", Reason="", readiness=true. Elapsed: 4.027149733s - Jun 12 22:39:39.203: INFO: Pod "test-rs-pz5ck" satisfied condition "running" - Jun 12 22:39:39.206: INFO: Pod "test-rs-b9nrl": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.030773796s - Jun 12 22:39:39.206: INFO: Pod "test-rs-b9nrl" satisfied condition "running" - Jun 12 22:39:39.217: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} - STEP: Listing all ReplicaSets 06/12/23 22:39:39.217 - STEP: DeleteCollection of the ReplicaSets 06/12/23 22:39:39.236 - STEP: After DeleteCollection verify that ReplicaSets have been deleted 06/12/23 22:39:39.257 - [AfterEach] [sig-apps] ReplicaSet + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 + W0727 03:05:15.910221 20 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test-webserver" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test-webserver" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test-webserver" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test-webserver" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + [AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 - Jun 12 22:39:39.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready - [DeferCleanup (Each)] [sig-apps] ReplicaSet + Jul 27 03:06:15.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 - [DeferCleanup (Each)] [sig-apps] ReplicaSet + [DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 - STEP: Destroying namespace "replicaset-1977" for this suite. 06/12/23 22:39:39.284 + STEP: Destroying namespace "container-probe-5857" for this suite. 
07/27/23 03:06:15.931
 << End Captured GinkgoWriter Output
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite] 
test/e2e/e2e.go:88
[SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:88
@@ -41014,8 +40416,8 @@ test/e2e/e2e.go:88
 test/e2e/e2e.go:88
 [SynchronizedAfterSuite] TOP-LEVEL
   test/e2e/e2e.go:88
-Jun 12 22:39:39.303: INFO: Running AfterSuite actions on node 1
-Jun 12 22:39:39.303: INFO: Skipping dumping logs from cluster
+Jul 27 03:06:15.947: INFO: Running AfterSuite actions on node 1
+Jul 27 03:06:15.947: INFO: Skipping dumping logs from cluster
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite] 
@@ -41026,8 +40428,8 @@ test/e2e/e2e.go:88
 test/e2e/e2e.go:88
 [SynchronizedAfterSuite] TOP-LEVEL
   test/e2e/e2e.go:88
- Jun 12 22:39:39.303: INFO: Running AfterSuite actions on node 1
- Jun 12 22:39:39.303: INFO: Skipping dumping logs from cluster
+ Jul 27 03:06:15.947: INFO: Running AfterSuite actions on node 1
+ Jul 27 03:06:15.947: INFO: Skipping dumping logs from cluster
 << End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e suite report
@@ -41049,7 +40451,7 @@ test/e2e/framework/test_context.go:529
[ReportAfterSuite] TOP-LEVEL
test/e2e/framework/test_context.go:529
------------------------------
-[ReportAfterSuite] PASSED [0.285 seconds]
+[ReportAfterSuite] PASSED [0.073 seconds]
[ReportAfterSuite] Kubernetes e2e JUnit report
test/e2e/framework/test_context.go:529
@@ -41059,11 +40461,11 @@ test/e2e/framework/test_context.go:529
 << End Captured GinkgoWriter Output
------------------------------
-Ran 368 of 7069 Specs in 7184.136 seconds
+Ran 368 of 7069 Specs in 5914.164 seconds
SUCCESS! -- 368 Passed | 0 Failed | 0 Pending | 6701 Skipped
PASS
-Ginkgo ran 1 suite in 1h59m46.024294069s
+Ginkgo ran 1 suite in 1h38m34.487144402s
Test Suite Passed
You're using deprecated Ginkgo functionality:
=============================================
diff --git a/v1.26/ibm-openshift/junit_01.xml b/v1.26/ibm-openshift/junit_01.xml
index e5780c2613..85e631dadd 100644
--- a/v1.26/ibm-openshift/junit_01.xml
+++ b/v1.26/ibm-openshift/junit_01.xml
@@ -1,12 +1,12 @@
[diff body not shown: the XML markup on these lines was stripped during extraction]
@@ -21,20479 +21,20479 @@
[diff body not shown: the XML markup for this 20,479-line hunk was stripped during extraction]
- + - + - + - + - + - + - + - + - + - + - + + + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + + + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + + - + + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + + - - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - - + - - + - + - + - + - - + - + - + - + - + - + - + - - + - + - + + - + - + - + - + - + + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + + - + - + - + - - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + + - - - + - + - + + - + - + + - + - + - + - + - - + - + + - + - + - - + - + - + - + - + - + - + - + - + - + - - + - + + - + - + + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - - + - + + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - - + + + - + - - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + + - + - - + - + - + - + - + - + - + + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + + - - + - + - + - + - + - + - - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + + - + - + - + + - + - + - - + - + - - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - - + - + - + - + + - + - + - + - + - + - 
+ - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + + - + - + - + - + - + - + - + + - + - + + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - - + - + - + - + + + - + - + - + + + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - - + - + - + - + + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + + - + - + - + - + - + - + - + + - - + - + - + + - + - + - + - + - + - + - - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + + - - + - - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - 
+ - + - + - + - + - + - + - + - + - + - + - - + - + + - + - + - - + + - + - + - + - + - + - + - + - + - + - + - - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + + - + - + - + - + - + - - + - + - + + - + - + + - - + - + - - + - + - + - + - + - + - + - + + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + + - + - - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + + - + - + - - + - + - + - - + - + - + - + - + - + - - - + - + - + - + - + - + - + - + - - + + \ No newline at end of file