Time Namespace Component Host RelatedObject Reason Message
default ip-10-0-2-105.eu-west-3.compute.internal Starting
default ip-10-0-2-105.eu-west-3.compute.internal Starting
default ip-10-0-3-24.eu-west-3.compute.internal Starting
default ip-10-0-3-24.eu-west-3.compute.internal Starting
kube-system default-scheduler kube-scheduler LeaderElection ip-172-16-52-153.eu-west-3.compute.internal_ef1e2482-8a27-4c17-9d8a-36995ced652e became leader
kube-system cloud-controller-manager cloud-controller-manager LeaderElection ip-172-16-52-153.eu-west-3.compute.internal_1f72839f-a83d-40e7-a883-43f70ecf3d22 became leader
kube-system kube-controller-manager kube-controller-manager LeaderElection ip-172-16-52-153.eu-west-3.compute.internal_4c626f00-9c46-4b7e-b233-180571019243 became leader
kube-system eks-certificate-controller eks-certificates-controller LeaderElection ip-172-16-52-153.eu-west-3.compute.internal became leader
kube-system replicaset-controller coredns-577fccf48c SuccessfulCreate Created pod: coredns-577fccf48c-sz5dr
kube-system default-scheduler coredns-577fccf48c-sz5dr FailedScheduling no nodes available to schedule pods
kube-system deployment-controller coredns ScalingReplicaSet Scaled up replica set coredns-577fccf48c to 2
kube-system default-scheduler coredns-577fccf48c-5vcbw FailedScheduling no nodes available to schedule pods
kube-system replicaset-controller coredns-577fccf48c SuccessfulCreate Created pod: coredns-577fccf48c-5vcbw
kube-system ip-172-16-52-153.eu-west-3.compute.internal_9db06657-e36b-4e45-a2aa-6a6f2bb7ef86 cp-vpc-resource-controller LeaderElection ip-172-16-52-153.eu-west-3.compute.internal_9db06657-e36b-4e45-a2aa-6a6f2bb7ef86 became leader
(x2) default kubelet ip-10-0-3-24.eu-west-3.compute.internal ip-10-0-3-24.eu-west-3.compute.internal NodeHasNoDiskPressure Node ip-10-0-3-24.eu-west-3.compute.internal status is now: NodeHasNoDiskPressure
(x2) default kubelet ip-10-0-3-24.eu-west-3.compute.internal ip-10-0-3-24.eu-west-3.compute.internal NodeHasSufficientPID Node ip-10-0-3-24.eu-west-3.compute.internal status is now: NodeHasSufficientPID
(x2) default kubelet ip-10-0-3-24.eu-west-3.compute.internal ip-10-0-3-24.eu-west-3.compute.internal NodeHasSufficientMemory Node ip-10-0-3-24.eu-west-3.compute.internal status is now: NodeHasSufficientMemory
default kubelet ip-10-0-3-24.eu-west-3.compute.internal ip-10-0-3-24.eu-west-3.compute.internal Starting Starting kubelet.
default kubelet ip-10-0-3-24.eu-west-3.compute.internal ip-10-0-3-24.eu-west-3.compute.internal InvalidDiskCapacity invalid capacity 0 on image filesystem
kube-system daemonset-controller aws-node SuccessfulCreate Created pod: aws-node-dmbng
kube-system daemonset-controller kube-proxy SuccessfulCreate Created pod: kube-proxy-m6tc2
kube-system default-scheduler coredns-577fccf48c-5vcbw FailedScheduling 0/1 nodes are available: 1 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
default kubelet ip-10-0-3-24.eu-west-3.compute.internal ip-10-0-3-24.eu-west-3.compute.internal NodeAllocatableEnforced Updated Node Allocatable limit across pods
kube-system default-scheduler aws-node-dmbng Scheduled Successfully assigned kube-system/aws-node-dmbng to ip-10-0-3-24.eu-west-3.compute.internal
default cloud-node-controller ip-10-0-3-24.eu-west-3.compute.internal Synced Node synced successfully
kube-system default-scheduler kube-proxy-m6tc2 Scheduled Successfully assigned kube-system/kube-proxy-m6tc2 to ip-10-0-3-24.eu-west-3.compute.internal
kube-system default-scheduler coredns-577fccf48c-sz5dr FailedScheduling 0/1 nodes are available: 1 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-proxy-m6tc2 Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.1-minimal-eksbuild.1"
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-dmbng Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.12.6-eksbuild.2"
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-proxy-m6tc2 Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.1-minimal-eksbuild.1" in 1.877010228s (1.877027173s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-dmbng Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.12.6-eksbuild.2" in 1.611158184s (1.611167325s including waiting)
kube-system default-scheduler aws-node-748vd Scheduled Successfully assigned kube-system/aws-node-748vd to ip-10-0-2-105.eu-west-3.compute.internal
(x2) default kubelet ip-10-0-2-105.eu-west-3.compute.internal ip-10-0-2-105.eu-west-3.compute.internal NodeHasSufficientPID Node ip-10-0-2-105.eu-west-3.compute.internal status is now: NodeHasSufficientPID
default kubelet ip-10-0-2-105.eu-west-3.compute.internal ip-10-0-2-105.eu-west-3.compute.internal NodeAllocatableEnforced Updated Node Allocatable limit across pods
default cloud-node-controller ip-10-0-2-105.eu-west-3.compute.internal Synced Node synced successfully
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-proxy-m6tc2 Started Started container kube-proxy
default kubelet ip-10-0-2-105.eu-west-3.compute.internal ip-10-0-2-105.eu-west-3.compute.internal Starting Starting kubelet.
kube-system default-scheduler kube-proxy-cqn46 Scheduled Successfully assigned kube-system/kube-proxy-cqn46 to ip-10-0-2-105.eu-west-3.compute.internal
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-proxy-m6tc2 Created Created container kube-proxy
kube-system daemonset-controller aws-node SuccessfulCreate Created pod: aws-node-748vd
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-dmbng Started Started container aws-vpc-cni-init
default kubelet ip-10-0-2-105.eu-west-3.compute.internal ip-10-0-2-105.eu-west-3.compute.internal InvalidDiskCapacity invalid capacity 0 on image filesystem
(x2) default kubelet ip-10-0-2-105.eu-west-3.compute.internal ip-10-0-2-105.eu-west-3.compute.internal NodeHasNoDiskPressure Node ip-10-0-2-105.eu-west-3.compute.internal status is now: NodeHasNoDiskPressure
(x2) default kubelet ip-10-0-2-105.eu-west-3.compute.internal ip-10-0-2-105.eu-west-3.compute.internal NodeHasSufficientMemory Node ip-10-0-2-105.eu-west-3.compute.internal status is now: NodeHasSufficientMemory
kube-system daemonset-controller kube-proxy SuccessfulCreate Created pod: kube-proxy-cqn46
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-dmbng Created Created container aws-vpc-cni-init
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-748vd Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.12.6-eksbuild.2"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-proxy-cqn46 Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.1-minimal-eksbuild.1"
default node-controller ip-10-0-3-24.eu-west-3.compute.internal RegisteredNode Node ip-10-0-3-24.eu-west-3.compute.internal event: Registered Node ip-10-0-3-24.eu-west-3.compute.internal in Controller
default node-controller ip-10-0-2-105.eu-west-3.compute.internal RegisteredNode Node ip-10-0-2-105.eu-west-3.compute.internal event: Registered Node ip-10-0-2-105.eu-west-3.compute.internal in Controller
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-748vd Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.12.6-eksbuild.2" in 1.456271413s (1.456280886s including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-748vd Created Created container aws-vpc-cni-init
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-748vd Started Started container aws-vpc-cni-init
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-dmbng Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.12.6-eksbuild.2"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-proxy-cqn46 Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.1-minimal-eksbuild.1" in 1.972902658s (1.972914972s including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-proxy-cqn46 Created Created container kube-proxy
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-proxy-cqn46 Started Started container kube-proxy
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-dmbng Created Created container aws-node
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-dmbng Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.12.6-eksbuild.2" in 1.18428644s (1.184298778s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-dmbng Started Started container aws-node
default kubelet ip-10-0-3-24.eu-west-3.compute.internal ip-10-0-3-24.eu-west-3.compute.internal NodeReady Node ip-10-0-3-24.eu-west-3.compute.internal status is now: NodeReady
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-748vd Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.12.6-eksbuild.2"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-748vd Started Started container aws-node
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-748vd Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.12.6-eksbuild.2" in 1.1497599s (1.149773225s including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-748vd Created Created container aws-node
default kubelet ip-10-0-2-105.eu-west-3.compute.internal ip-10-0-2-105.eu-west-3.compute.internal NodeReady Node ip-10-0-2-105.eu-west-3.compute.internal status is now: NodeReady
(x2) kube-system default-scheduler coredns-577fccf48c-sz5dr FailedScheduling 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
(x2) kube-system default-scheduler coredns-577fccf48c-5vcbw FailedScheduling 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
kube-system default-scheduler aws-node-2t9s7 Scheduled Successfully assigned kube-system/aws-node-2t9s7 to ip-10-0-2-105.eu-west-3.compute.internal
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-748vd Killing Stopping container aws-node
kube-system daemonset-controller aws-node SuccessfulCreate Created pod: aws-node-2t9s7
kube-system daemonset-controller aws-node SuccessfulDelete Deleted pod: aws-node-748vd
kube-system daemonset-controller kube-proxy SuccessfulDelete Deleted pod: kube-proxy-m6tc2
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-proxy-m6tc2 Killing Stopping container kube-proxy
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.14.0-eksbuild.3"
kube-system deployment-controller coredns ScalingReplicaSet Scaled up replica set coredns-7f9bc84c58 to 2 from 1
kube-system default-scheduler kube-proxy-7fcmm Scheduled Successfully assigned kube-system/kube-proxy-7fcmm to ip-10-0-3-24.eu-west-3.compute.internal
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-proxy-7fcmm Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.4-minimal-eksbuild.2"
kube-system replicaset-controller coredns-577fccf48c SuccessfulDelete Deleted pod: coredns-577fccf48c-sz5dr
kube-system default-scheduler coredns-577fccf48c-sz5dr FailedScheduling skip schedule deleting pod: kube-system/coredns-577fccf48c-sz5dr
kube-system deployment-controller coredns ScalingReplicaSet Scaled down replica set coredns-577fccf48c to 1 from 2
kube-system deployment-controller coredns ScalingReplicaSet Scaled up replica set coredns-7f9bc84c58 to 1
kube-system daemonset-controller kube-proxy SuccessfulCreate Created pod: kube-proxy-7fcmm
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-proxy-7fcmm Created Created container kube-proxy
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Created Created container aws-vpc-cni-init
kube-system default-scheduler coredns-7f9bc84c58-ws8z4 Scheduled Successfully assigned kube-system/coredns-7f9bc84c58-ws8z4 to ip-10-0-2-105.eu-west-3.compute.internal
kube-system default-scheduler coredns-7f9bc84c58-x7qpw Scheduled Successfully assigned kube-system/coredns-7f9bc84c58-x7qpw to ip-10-0-3-24.eu-west-3.compute.internal
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-x7qpw Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/coredns:v1.10.1-eksbuild.3"
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-proxy-7fcmm Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.4-minimal-eksbuild.2" in 1.250404067s (1.250415801s including waiting)
kube-system replicaset-controller coredns-7f9bc84c58 SuccessfulCreate Created pod: coredns-7f9bc84c58-ws8z4
kube-system replicaset-controller coredns-7f9bc84c58 SuccessfulCreate Created pod: coredns-7f9bc84c58-x7qpw
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Started Started container aws-vpc-cni-init
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.14.0-eksbuild.3" in 2.101402038s (2.101414937s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-proxy-7fcmm Started Started container kube-proxy
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-x7qpw Created Created container coredns
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.14.0-eksbuild.3"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-ws8z4 FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ecf9051af393adf4d42bb2b2eec893e276359348c4a5493e9a3f02a6174a0e28": plugin type="aws-cni" name="aws-cni" failed (add): add cmd: Error received from AddNetwork gRPC call: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-x7qpw Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/coredns:v1.10.1-eksbuild.3" in 1.207214364s (1.20722959s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-x7qpw Started Started container coredns
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Created Created container aws-node
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-x7qpw Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 503
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon/aws-network-policy-agent:v1.0.1-eksbuild.1"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.14.0-eksbuild.3" in 1.589409324s (1.589423902s including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-proxy-cqn46 Killing Stopping container kube-proxy
kube-system default-scheduler kube-proxy-vjrhj Scheduled Successfully assigned kube-system/kube-proxy-vjrhj to ip-10-0-2-105.eu-west-3.compute.internal
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-proxy-vjrhj Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.4-minimal-eksbuild.2"
kube-system daemonset-controller kube-proxy SuccessfulDelete Deleted pod: kube-proxy-cqn46
kube-system daemonset-controller kube-proxy SuccessfulCreate Created pod: kube-proxy-vjrhj
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Started Started container aws-node
kube-system deployment-controller coredns ScalingReplicaSet Scaled down replica set coredns-577fccf48c to 0 from 1
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-proxy-vjrhj Created Created container kube-proxy
kube-system default-scheduler coredns-577fccf48c-5vcbw FailedScheduling skip schedule deleting pod: kube-system/coredns-577fccf48c-5vcbw
kube-system replicaset-controller coredns-577fccf48c SuccessfulDelete Deleted pod: coredns-577fccf48c-5vcbw
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-proxy-vjrhj Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.4-minimal-eksbuild.2" in 930.816518ms (930.829167ms including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-proxy-vjrhj Started Started container kube-proxy
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Created Created container aws-eks-nodeagent
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Started Started container aws-eks-nodeagent
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon/aws-network-policy-agent:v1.0.1-eksbuild.1" in 8.409430321s (8.409446158s including waiting)
kube-system daemonset-controller aws-node SuccessfulDelete Deleted pod: aws-node-dmbng
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-dmbng Killing Stopping container aws-node
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.14.0-eksbuild.3"
kube-system daemonset-controller aws-node SuccessfulCreate Created pod: aws-node-b5nb9
kube-system default-scheduler aws-node-b5nb9 Scheduled Successfully assigned kube-system/aws-node-b5nb9 to ip-10-0-3-24.eu-west-3.compute.internal
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Started Started container aws-vpc-cni-init
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-ws8z4 Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/coredns:v1.10.1-eksbuild.3"
(x2) kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-ws8z4 SandboxChanged Pod sandbox changed, it will be killed and re-created.
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Created Created container aws-vpc-cni-init
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.14.0-eksbuild.3" in 1.64509555s (1.645109095s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.14.0-eksbuild.3"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-ws8z4 Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/coredns:v1.10.1-eksbuild.3" in 1.198078691s (1.198088708s including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-ws8z4 Started Started container coredns
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-ws8z4 Created Created container coredns
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Pulling Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon/aws-network-policy-agent:v1.0.1-eksbuild.1"
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Started Started container aws-node
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Created Created container aws-node
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.14.0-eksbuild.3" in 1.39722713s (1.3972433s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Pulled Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon/aws-network-policy-agent:v1.0.1-eksbuild.1" in 8.337267096s (8.337279309s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Created Created container aws-eks-nodeagent
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Started Started container aws-eks-nodeagent
kube-system default-scheduler delete-aws-cni-9cbjk Scheduled Successfully assigned kube-system/delete-aws-cni-9cbjk to ip-10-0-3-24.eu-west-3.compute.internal
kube-system job-controller delete-aws-cni SuccessfulCreate Created pod: delete-aws-cni-9cbjk
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal delete-aws-cni-9cbjk Pulling Pulling image "bitnami/kubectl:1.27.3"
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal delete-aws-cni-9cbjk Created Created container kubectl
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Killing Stopping container aws-node
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal delete-aws-cni-9cbjk Started Started container kubectl
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal delete-aws-cni-9cbjk Pulled Successfully pulled image "bitnami/kubectl:1.27.3" in 3.976764158s (3.976778148s including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Killing Stopping container aws-node
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal aws-node-b5nb9 Killing Stopping container aws-eks-nodeagent
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-node-2t9s7 Killing Stopping container aws-eks-nodeagent
kube-system job-controller delete-aws-cni Completed Job completed
kube-system daemonset-controller cilium-envoy SuccessfulCreate Created pod: cilium-envoy-2fzwf
kube-system default-scheduler cilium-g94mr Scheduled Successfully assigned kube-system/cilium-g94mr to ip-10-0-2-105.eu-west-3.compute.internal
kube-system default-scheduler cilium-zpjnm Scheduled Successfully assigned kube-system/cilium-zpjnm to ip-10-0-3-24.eu-west-3.compute.internal
kube-system daemonset-controller cilium-envoy SuccessfulCreate Created pod: cilium-envoy-pzhcc
kube-system daemonset-controller cilium SuccessfulCreate Created pod: cilium-g94mr
kube-system default-scheduler cilium-operator-779bf49976-qznq9 Scheduled Successfully assigned kube-system/cilium-operator-779bf49976-qznq9 to ip-10-0-2-105.eu-west-3.compute.internal
kube-system default-scheduler cilium-envoy-2fzwf Scheduled Successfully assigned kube-system/cilium-envoy-2fzwf to ip-10-0-3-24.eu-west-3.compute.internal
kube-system default-scheduler cilium-operator-779bf49976-lgq5h Scheduled Successfully assigned kube-system/cilium-operator-779bf49976-lgq5h to ip-10-0-3-24.eu-west-3.compute.internal
kube-system deployment-controller cilium-operator ScalingReplicaSet Scaled up replica set cilium-operator-779bf49976 to 2
kube-system replicaset-controller cilium-operator-779bf49976 SuccessfulCreate Created pod: cilium-operator-779bf49976-lgq5h
kube-system default-scheduler cilium-envoy-pzhcc Scheduled Successfully assigned kube-system/cilium-envoy-pzhcc to ip-10-0-2-105.eu-west-3.compute.internal
kube-system replicaset-controller cilium-operator-779bf49976 SuccessfulCreate Created pod: cilium-operator-779bf49976-qznq9
kube-system daemonset-controller cilium SuccessfulCreate Created pod: cilium-zpjnm
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-envoy-2fzwf Pulling Pulling image "quay.io/cilium/cilium-envoy:v1.25.9-f039e2bd380b7eef2f2feea5750676bb36133699@sha256:023d09eeb8a44ae99b489f4af7ffed8b8b54f19a532e0bc6ab4c1e4b31acaab1"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-operator-779bf49976-qznq9 FailedMount MountVolume.SetUp failed for volume "cilium-config-path" : failed to sync configmap cache: timed out waiting for the condition
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-envoy-pzhcc Pulling Pulling image "quay.io/cilium/cilium-envoy:v1.25.9-f039e2bd380b7eef2f2feea5750676bb36133699@sha256:023d09eeb8a44ae99b489f4af7ffed8b8b54f19a532e0bc6ab4c1e4b31acaab1"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Pulling Pulling image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72"
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm FailedMount MountVolume.SetUp failed for volume "hubble-tls" : failed to sync secret cache: timed out waiting for the condition
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Pulling Pulling image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72"
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-operator-779bf49976-lgq5h Pulling Pulling image "quay.io/cilium/operator-aws:v1.14.1@sha256:ff57964aefd903456745e53a4697a4f6a026d8fffdb06f53f624a23d23ade37a"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-operator-779bf49976-qznq9 Pulling Pulling image "quay.io/cilium/operator-aws:v1.14.1@sha256:ff57964aefd903456745e53a4697a4f6a026d8fffdb06f53f624a23d23ade37a"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-envoy-pzhcc Pulled Successfully pulled image "quay.io/cilium/cilium-envoy:v1.25.9-f039e2bd380b7eef2f2feea5750676bb36133699@sha256:023d09eeb8a44ae99b489f4af7ffed8b8b54f19a532e0bc6ab4c1e4b31acaab1" in 3.249080977s (3.249088622s including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-envoy-pzhcc Started Started container cilium-envoy
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-envoy-pzhcc Created Created container cilium-envoy
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-envoy-2fzwf Started Started container cilium-envoy
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-envoy-2fzwf Created Created container cilium-envoy
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-envoy-2fzwf Pulled Successfully pulled image "quay.io/cilium/cilium-envoy:v1.25.9-f039e2bd380b7eef2f2feea5750676bb36133699@sha256:023d09eeb8a44ae99b489f4af7ffed8b8b54f19a532e0bc6ab4c1e4b31acaab1" in 3.470194533s (3.470208939s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-operator-779bf49976-lgq5h Started Started container cilium-operator
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-operator-779bf49976-lgq5h Pulled Successfully pulled image "quay.io/cilium/operator-aws:v1.14.1@sha256:ff57964aefd903456745e53a4697a4f6a026d8fffdb06f53f624a23d23ade37a" in 3.108024267s (3.108053375s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-operator-779bf49976-lgq5h Created Created container cilium-operator
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Pulled Successfully pulled image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" in 5.524408054s (5.524423547s including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Created Created container config
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Started Started container config
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Pulled Successfully pulled image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" in 5.074430028s (5.074445136s including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-operator-779bf49976-qznq9 Created Created container cilium-operator
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-operator-779bf49976-qznq9 Unhealthy Readiness probe failed: Get "http://127.0.0.1:9234/healthz": dial tcp 127.0.0.1:9234: connect: connection refused
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-operator-779bf49976-qznq9 Pulled Successfully pulled image "quay.io/cilium/operator-aws:v1.14.1@sha256:ff57964aefd903456745e53a4697a4f6a026d8fffdb06f53f624a23d23ade37a" in 4.120856851s (4.120872155s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Started Started container config
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Created Created container config
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-operator-779bf49976-qznq9 Started Started container cilium-operator
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Created Created container mount-cgroup
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Started Started container mount-cgroup
kube-system replicaset-controller coredns-7f9bc84c58 SuccessfulCreate Created pod: coredns-7f9bc84c58-nldsh
kube-system default-scheduler coredns-7f9bc84c58-nldsh Scheduled Successfully assigned kube-system/coredns-7f9bc84c58-nldsh to ip-10-0-2-105.eu-west-3.compute.internal
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-ws8z4 Killing Stopping container coredns
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Created Created container apply-sysctl-overwrites
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-nldsh FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6c278ed382ab1c38c6a7bd0c48c07d23b2ee76dbf6590fa946231bea30b2beae": plugin type="aws-cni" name="aws-cni" failed (add): add cmd: Error received from AddNetwork gRPC call: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Started Started container apply-sysctl-overwrites
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Created Created container mount-cgroup
(x2) karpenter controllermanager karpenter NoPods No matching pods found
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Started Started container mount-cgroup
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Created Created container apply-sysctl-overwrites
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
karpenter deployment-controller karpenter ScalingReplicaSet Scaled up replica set karpenter-85dcd86d7c to 2
karpenter replicaset-controller karpenter-85dcd86d7c SuccessfulCreate Created pod: karpenter-85dcd86d7c-5dd2d
karpenter replicaset-controller karpenter-85dcd86d7c SuccessfulCreate Created pod: karpenter-85dcd86d7c-59jrk
karpenter default-scheduler karpenter-85dcd86d7c-5dd2d FailedScheduling 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
karpenter default-scheduler karpenter-85dcd86d7c-59jrk FailedScheduling 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Created Created container mount-bpf-fs
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Started Started container mount-bpf-fs
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Started Started container apply-sysctl-overwrites
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Created Created container clean-cilium-state
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Created Created container mount-bpf-fs
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Started Started container clean-cilium-state
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Started Started container mount-bpf-fs
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Created Created container clean-cilium-state
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Created Created container install-cni-binaries
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Started Started container install-cni-binaries
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Started Started container clean-cilium-state
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Started Started container install-cni-binaries
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Created Created container install-cni-binaries
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Started Started container cilium-agent
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Started Started container cilium-agent
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Created Created container cilium-agent
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
(x7) kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-envoy-2fzwf Unhealthy Startup probe failed: Get "http://localhost:9878/healthz": dial tcp [::1]:9878: connect: connection refused
(x7) kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-envoy-pzhcc Unhealthy Startup probe failed: Get "http://localhost:9878/healthz": dial tcp [::1]:9878: connect: connection refused
flux-system deployment-controller notification-controller ScalingReplicaSet Scaled up replica set notification-controller-ddf44665d to 1
flux-system replicaset-controller notification-controller-ddf44665d SuccessfulCreate Created pod: notification-controller-ddf44665d-wdlvf
flux-system replicaset-controller source-controller-56ccbf8db8 SuccessfulCreate Created pod: source-controller-56ccbf8db8-vvzzv
flux-system replicaset-controller helm-controller-57d8957947 SuccessfulCreate Created pod: helm-controller-57d8957947-6b2cj
flux-system default-scheduler kustomize-controller-858996fc8d-n969h FailedScheduling 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
flux-system default-scheduler source-controller-56ccbf8db8-vvzzv FailedScheduling 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
flux-system default-scheduler helm-controller-57d8957947-6b2cj FailedScheduling 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
flux-system deployment-controller source-controller ScalingReplicaSet Scaled up replica set source-controller-56ccbf8db8 to 1
flux-system deployment-controller helm-controller ScalingReplicaSet Scaled up replica set helm-controller-57d8957947 to 1
flux-system deployment-controller kustomize-controller ScalingReplicaSet Scaled up replica set kustomize-controller-858996fc8d to 1
flux-system replicaset-controller kustomize-controller-858996fc8d SuccessfulCreate Created pod: kustomize-controller-858996fc8d-n969h
flux-system default-scheduler notification-controller-ddf44665d-wdlvf FailedScheduling 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
(x2) kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal cilium-zpjnm Unhealthy Startup probe failed: Get "http://127.0.0.1:9879/healthz": dial tcp 127.0.0.1:9879: connect: connection refused
(x3) kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Unhealthy Startup probe failed: Get "http://127.0.0.1:9879/healthz": dial tcp 127.0.0.1:9879: connect: connection refused
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-x7qpw Killing Stopping container coredns
kube-system default-scheduler coredns-7f9bc84c58-8vw55 Scheduled Successfully assigned kube-system/coredns-7f9bc84c58-8vw55 to ip-10-0-3-24.eu-west-3.compute.internal
kube-system replicaset-controller coredns-7f9bc84c58 SuccessfulCreate Created pod: coredns-7f9bc84c58-8vw55
(x3) kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-ws8z4 Unhealthy Readiness probe failed: Get "http://10.0.6.74:8181/ready": dial tcp 10.0.6.74:8181: connect: connection refused
(x3) kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-x7qpw Unhealthy Readiness probe failed: Get "http://10.0.5.8:8181/ready": dial tcp 10.0.5.8:8181: connect: connection refused
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-8vw55 FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d0ac448054ccc36e6b3fa6d926d64d9001eec430545d9caf91fbd5c4511bf63f": plugin type="cilium-cni" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http://localhost/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Is the agent running?
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-nldsh FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "bb9512d64e5ad607f25ed6d614e2de5856168e0e32d68422e54c93c41d83c37a": plugin type="cilium-cni" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http://localhost/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Is the agent running?
flux-system default-scheduler notification-controller-ddf44665d-wdlvf Scheduled Successfully assigned flux-system/notification-controller-ddf44665d-wdlvf to ip-10-0-2-105.eu-west-3.compute.internal
flux-system default-scheduler helm-controller-57d8957947-6b2cj Scheduled Successfully assigned flux-system/helm-controller-57d8957947-6b2cj to ip-10-0-2-105.eu-west-3.compute.internal
karpenter default-scheduler karpenter-85dcd86d7c-59jrk Scheduled Successfully assigned karpenter/karpenter-85dcd86d7c-59jrk to ip-10-0-2-105.eu-west-3.compute.internal
karpenter default-scheduler karpenter-85dcd86d7c-5dd2d FailedScheduling 0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
flux-system default-scheduler source-controller-56ccbf8db8-vvzzv Scheduled Successfully assigned flux-system/source-controller-56ccbf8db8-vvzzv to ip-10-0-2-105.eu-west-3.compute.internal
flux-system default-scheduler kustomize-controller-858996fc8d-n969h Scheduled Successfully assigned flux-system/kustomize-controller-858996fc8d-n969h to ip-10-0-2-105.eu-west-3.compute.internal
karpenter kubelet ip-10-0-2-105.eu-west-3.compute.internal karpenter-85dcd86d7c-59jrk FailedMount MountVolume.SetUp failed for volume "kube-api-access-s6md7" : failed to sync configmap cache: timed out waiting for the condition
karpenter default-scheduler karpenter-85dcd86d7c-5dd2d Scheduled Successfully assigned karpenter/karpenter-85dcd86d7c-5dd2d to ip-10-0-3-24.eu-west-3.compute.internal
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal notification-controller-ddf44665d-wdlvf Pulling Pulling image "ghcr.io/fluxcd/notification-controller:v1.1.0"
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal source-controller-56ccbf8db8-vvzzv Pulling Pulling image "ghcr.io/fluxcd/source-controller:v1.1.0"
karpenter kubelet ip-10-0-2-105.eu-west-3.compute.internal karpenter-85dcd86d7c-59jrk Pulling Pulling image "public.ecr.aws/karpenter/controller:v0.30.0@sha256:3d436ece23d17263edbaa2314281f3ac1c2b0d3fb9dfa531cb30509659d8a7c3"
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal helm-controller-57d8957947-6b2cj Pulling Pulling image "ghcr.io/fluxcd/helm-controller:v0.36.0"
karpenter kubelet ip-10-0-3-24.eu-west-3.compute.internal karpenter-85dcd86d7c-5dd2d Pulling Pulling image "public.ecr.aws/karpenter/controller:v0.30.0@sha256:3d436ece23d17263edbaa2314281f3ac1c2b0d3fb9dfa531cb30509659d8a7c3"
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kustomize-controller-858996fc8d-n969h Pulling Pulling image "ghcr.io/fluxcd/kustomize-controller:v1.1.0"
karpenter kubelet ip-10-0-3-24.eu-west-3.compute.internal karpenter-85dcd86d7c-5dd2d Created Created container controller
karpenter kubelet ip-10-0-3-24.eu-west-3.compute.internal karpenter-85dcd86d7c-5dd2d Started Started container controller
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal source-controller-56ccbf8db8-vvzzv Unhealthy Readiness probe failed: Get "http://10.0.12.128:9090/": dial tcp 10.0.12.128:9090: connect: connection refused
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal source-controller-56ccbf8db8-vvzzv Started Started container manager
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal source-controller-56ccbf8db8-vvzzv Created Created container manager
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal source-controller-56ccbf8db8-vvzzv Pulled Successfully pulled image "ghcr.io/fluxcd/source-controller:v1.1.0" in 2.540976624s (2.540985454s including waiting)
flux-system source-controller-56ccbf8db8-vvzzv_483e3c38-5f71-4908-b17d-16dd36f399e7 source-controller-leader-election LeaderElection source-controller-56ccbf8db8-vvzzv_483e3c38-5f71-4908-b17d-16dd36f399e7 became leader
karpenter kubelet ip-10-0-3-24.eu-west-3.compute.internal karpenter-85dcd86d7c-5dd2d Pulled Successfully pulled image "public.ecr.aws/karpenter/controller:v0.30.0@sha256:3d436ece23d17263edbaa2314281f3ac1c2b0d3fb9dfa531cb30509659d8a7c3" in 2.591949413s (2.59196209s including waiting)
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-8vw55 Created Created container coredns
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-8vw55 Pulled Container image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/coredns:v1.10.1-eksbuild.3" already present on machine
karpenter kubelet ip-10-0-2-105.eu-west-3.compute.internal karpenter-85dcd86d7c-59jrk Created Created container controller
karpenter kubelet ip-10-0-2-105.eu-west-3.compute.internal karpenter-85dcd86d7c-59jrk Pulled Successfully pulled image "public.ecr.aws/karpenter/controller:v0.30.0@sha256:3d436ece23d17263edbaa2314281f3ac1c2b0d3fb9dfa531cb30509659d8a7c3" in 3.42892512s (3.428935987s including waiting)
flux-system notification-controller-ddf44665d-wdlvf_f3c14e27-3a4b-43c6-9640-54ef5d98e7e3 notification-controller-leader-election LeaderElection notification-controller-ddf44665d-wdlvf_f3c14e27-3a4b-43c6-9640-54ef5d98e7e3 became leader
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal coredns-7f9bc84c58-8vw55 Started Started container coredns
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal notification-controller-ddf44665d-wdlvf Pulled Successfully pulled image "ghcr.io/fluxcd/notification-controller:v1.1.0" in 3.284387127s (3.284415037s including waiting)
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal notification-controller-ddf44665d-wdlvf Created Created container manager
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal notification-controller-ddf44665d-wdlvf Started Started container manager
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kustomize-controller-858996fc8d-n969h Pulled Successfully pulled image "ghcr.io/fluxcd/kustomize-controller:v1.1.0" in 3.198010741s (3.1980186s including waiting)
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal helm-controller-57d8957947-6b2cj Created Created container manager
flux-system kustomize-controller-858996fc8d-n969h_5573d5f4-1ca2-404b-b3f1-9509bc823269 kustomize-controller-leader-election LeaderElection kustomize-controller-858996fc8d-n969h_5573d5f4-1ca2-404b-b3f1-9509bc823269 became leader
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal helm-controller-57d8957947-6b2cj Pulled Successfully pulled image "ghcr.io/fluxcd/helm-controller:v0.36.0" in 3.075550723s (3.075558237s including waiting)
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-nldsh Started Started container coredns
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-nldsh Created Created container coredns
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal coredns-7f9bc84c58-nldsh Pulled Container image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/coredns:v1.10.1-eksbuild.3" already present on machine
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kustomize-controller-858996fc8d-n969h Started Started container manager
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal helm-controller-57d8957947-6b2cj Started Started container manager
karpenter kubelet ip-10-0-2-105.eu-west-3.compute.internal karpenter-85dcd86d7c-59jrk Started Started container controller
flux-system kubelet ip-10-0-2-105.eu-west-3.compute.internal kustomize-controller-858996fc8d-n969h Created Created container manager
flux-system helm-controller-57d8957947-6b2cj_7aa127ad-a7ff-4aca-917b-1021154aeb9b helm-controller-leader-election LeaderElection helm-controller-57d8957947-6b2cj_7aa127ad-a7ff-4aca-917b-1021154aeb9b became leader
flux-system source-controller flux-system NewArtifact stored artifact for commit 'Add Flux sync manifests'
karpenter karpenter-85dcd86d7c-5dd2d_a2d0ad6c-3c58-4417-bba4-983f716b96b5 karpenter-leader-election LeaderElection karpenter-85dcd86d7c-5dd2d_a2d0ad6c-3c58-4417-bba4-983f716b96b5 became leader
flux-system kustomize-controller flux-system ReconciliationSucceeded Reconciliation finished in 2.457260074s, next run in 10m0s
flux-system kustomize-controller flux-system Progressing CustomResourceDefinition/alerts.notification.toolkit.fluxcd.io configured CustomResourceDefinition/buckets.source.toolkit.fluxcd.io configured CustomResourceDefinition/gitrepositories.source.toolkit.fluxcd.io configured CustomResourceDefinition/helmcharts.source.toolkit.fluxcd.io configured CustomResourceDefinition/helmreleases.helm.toolkit.fluxcd.io configured CustomResourceDefinition/helmrepositories.source.toolkit.fluxcd.io configured CustomResourceDefinition/kustomizations.kustomize.toolkit.fluxcd.io configured CustomResourceDefinition/ocirepositories.source.toolkit.fluxcd.io configured CustomResourceDefinition/providers.notification.toolkit.fluxcd.io configured CustomResourceDefinition/receivers.notification.toolkit.fluxcd.io configured Namespace/flux-system configured ResourceQuota/flux-system/critical-pods-flux-system configured ServiceAccount/flux-system/helm-controller configured ServiceAccount/flux-system/kustomize-controller configured ServiceAccount/flux-system/notification-controller configured ServiceAccount/flux-system/source-controller configured ClusterRole/crd-controller-flux-system configured ClusterRole/flux-edit-flux-system configured ClusterRole/flux-view-flux-system configured ClusterRoleBinding/cluster-reconciler-flux-system configured ClusterRoleBinding/crd-controller-flux-system configured Service/flux-system/notification-controller configured Service/flux-system/source-controller configured Service/flux-system/webhook-receiver configured Deployment/flux-system/helm-controller configured Deployment/flux-system/kustomize-controller configured Deployment/flux-system/notification-controller configured Deployment/flux-system/source-controller configured Kustomization/flux-system/apps created Kustomization/flux-system/crds created Kustomization/flux-system/crossplane-configuration created Kustomization/flux-system/crossplane-controller created Kustomization/flux-system/crossplane-providers created Kustomization/flux-system/flux-config created Kustomization/flux-system/flux-system configured Kustomization/flux-system/infrastructure created Kustomization/flux-system/namespaces created Kustomization/flux-system/observability created Kustomization/flux-system/security created NetworkPolicy/flux-system/allow-egress configured NetworkPolicy/flux-system/allow-scraping configured NetworkPolicy/flux-system/allow-webhooks configured GitRepository/flux-system/flux-system configured
flux-system kustomize-controller crds DependencyNotReady Dependencies do not meet ready condition, retrying in 30s
flux-system kustomize-controller namespaces ReconciliationSucceeded Reconciliation finished in 237.884135ms, next run in 4m0s
flux-system kustomize-controller namespaces Progressing Namespace/crossplane-system created Namespace/echo created Namespace/infrastructure created Namespace/observability created Namespace/security created
(x2) flux-system kustomize-controller observability DependencyNotReady Dependencies do not meet ready condition, retrying in 30s
(x2) flux-system kustomize-controller security DependencyNotReady Dependencies do not meet ready condition, retrying in 30s
(x2) flux-system kustomize-controller crossplane-controller DependencyNotReady Dependencies do not meet ready condition, retrying in 30s
(x2) flux-system kustomize-controller flux-config DependencyNotReady Dependencies do not meet ready condition, retrying in 30s
flux-system kustomize-controller crds Progressing CustomResourceDefinition/certificaterequests.cert-manager.io created CustomResourceDefinition/certificates.cert-manager.io created CustomResourceDefinition/challenges.acme.cert-manager.io created CustomResourceDefinition/clusterissuers.cert-manager.io created CustomResourceDefinition/issuers.cert-manager.io created CustomResourceDefinition/orders.acme.cert-manager.io created HelmRelease/observability/crds-prometheus-operator created Kustomization/kube-system/crds-gateway-api created Kustomization/security/crds-external-secrets created Kustomization/security/crds-kyverno created GitRepository/kube-system/gateway-api created GitRepository/security/external-secrets created GitRepository/security/kyverno created HelmRepository/flux-system/prometheus-community created
observability helm-controller crds-prometheus-operator info HelmChart 'flux-system/observability-crds-prometheus-operator' is not ready
flux-system kustomize-controller crds ReconciliationSucceeded Reconciliation finished in 3.395809673s, next run in 4m0s
flux-system source-controller observability-crds-prometheus-operator NoSourceArtifact no artifact available for HelmRepository source 'prometheus-community'
flux-system source-controller prometheus-community NewArtifact stored fetched index of size 3.905MB from 'https://prometheus-community.github.io/helm-charts'
flux-system source-controller observability-crds-prometheus-operator ChartPullSucceeded pulled 'prometheus-operator-crds' chart with version '5.1.0'
observability helm-controller crds-prometheus-operator info Helm install has started
kube-system source-controller gateway-api NewArtifact stored artifact for commit 'Merge pull request #2360 from robscott/changelog-v...'
security source-controller external-secrets NewArtifact stored artifact for commit 'fixing label limits (#2645)'
security source-controller kyverno NewArtifact stored artifact for commit 'release 1.10.3 (#8006)'
kube-system kustomize-controller crds-gateway-api Progressing CustomResourceDefinition/gatewayclasses.gateway.networking.k8s.io configured CustomResourceDefinition/gateways.gateway.networking.k8s.io configured CustomResourceDefinition/grpcroutes.gateway.networking.k8s.io created CustomResourceDefinition/httproutes.gateway.networking.k8s.io configured CustomResourceDefinition/referencegrants.gateway.networking.k8s.io configured CustomResourceDefinition/tcproutes.gateway.networking.k8s.io configured CustomResourceDefinition/tlsroutes.gateway.networking.k8s.io configured CustomResourceDefinition/udproutes.gateway.networking.k8s.io configured
kube-system kustomize-controller crds-gateway-api ReconciliationSucceeded Reconciliation finished in 3.832695598s, next run in 10m0s
security kustomize-controller crds-external-secrets ReconciliationSucceeded Reconciliation finished in 5.449787256s, next run in 10m0s
security kustomize-controller crds-external-secrets Progressing CustomResourceDefinition/acraccesstokens.generators.external-secrets.io created CustomResourceDefinition/clusterexternalsecrets.external-secrets.io created CustomResourceDefinition/clustersecretstores.external-secrets.io created CustomResourceDefinition/ecrauthorizationtokens.generators.external-secrets.io created CustomResourceDefinition/externalsecrets.external-secrets.io created CustomResourceDefinition/fakes.generators.external-secrets.io created CustomResourceDefinition/gcraccesstokens.generators.external-secrets.io created CustomResourceDefinition/passwords.generators.external-secrets.io created CustomResourceDefinition/pushsecrets.external-secrets.io created CustomResourceDefinition/secretstores.external-secrets.io created CustomResourceDefinition/vaultdynamicsecrets.generators.external-secrets.io created
observability helm-controller crds-prometheus-operator info Helm install succeeded
security kustomize-controller crds-kyverno Progressing CustomResourceDefinition/admissionreports.kyverno.io created CustomResourceDefinition/backgroundscanreports.kyverno.io created CustomResourceDefinition/cleanuppolicies.kyverno.io created CustomResourceDefinition/clusteradmissionreports.kyverno.io created CustomResourceDefinition/clusterbackgroundscanreports.kyverno.io created CustomResourceDefinition/clustercleanuppolicies.kyverno.io created CustomResourceDefinition/clusterpolicies.kyverno.io created CustomResourceDefinition/clusterpolicyreports.wgpolicyk8s.io created CustomResourceDefinition/policies.kyverno.io created CustomResourceDefinition/policyexceptions.kyverno.io created CustomResourceDefinition/policyreports.wgpolicyk8s.io created CustomResourceDefinition/updaterequests.kyverno.io created
security kustomize-controller crds-kyverno ReconciliationSucceeded Reconciliation finished in 11.998017275s, next run in 10m0s
flux-system helm-controller weave-gitops info HelmChart 'flux-system/flux-system-weave-gitops' is not ready
crossplane-system helm-controller crossplane info HelmChart 'crossplane-system/crossplane-system-crossplane' is not ready
(x3) flux-system kustomize-controller crossplane-providers DependencyNotReady Dependencies do not meet ready condition, retrying in 30s
flux-system kustomize-controller flux-config Progressing HTTPRoute/flux-system/weave-gitops created HelmRelease/flux-system/weave-gitops created PodMonitor/flux-system/flux-system created HelmRepository/flux-system/ww-gitops created
flux-system kustomize-controller crossplane-controller Progressing HelmRelease/crossplane-system/crossplane created HelmRepository/crossplane-system/crossplane created
flux-system kustomize-controller observability Progressing ExternalSecret/observability/kube-prometheus-stack-grafana-admin created HTTPRoute/observability/grafana created HelmRelease/observability/kube-prometheus-stack created
flux-system kustomize-controller security ReconciliationFailed IRSA/security/xplane-cert-manager-mycluster-0 dry-run failed: failed to get API group resources: unable to retrieve the complete list of server APIs: aws.platformref.upbound.io/v1alpha1: the server could not find the requested resource
observability helm-controller kube-prometheus-stack info HelmChart 'flux-system/observability-kube-prometheus-stack' is not ready
flux-system source-controller ww-gitops Succeeded Helm repository is ready
crossplane-system source-controller crossplane-system-crossplane NoSourceArtifact no artifact available for HelmRepository source 'crossplane'
flux-system helm-controller weave-gitops info Helm install has started
flux-system source-controller flux-system-weave-gitops ChartPullSucceeded pulled 'weave-gitops' chart with version '4.0.29'
flux-system source-controller observability-kube-prometheus-stack ChartPullSucceeded pulled 'kube-prometheus-stack' chart with version '50.3.1'
crossplane-system source-controller crossplane-system-crossplane ChartPullSucceeded pulled 'crossplane' chart with version '1.13.2'
crossplane-system helm-controller crossplane info Helm install has started
observability helm-controller kube-prometheus-stack info Helm install has started
crossplane-system source-controller crossplane NewArtifact stored fetched index of size 85.81kB from 'https://charts.crossplane.io/stable'
flux-system default-scheduler weave-gitops-66f9ddc754-b4n2g Scheduled Successfully assigned flux-system/weave-gitops-66f9ddc754-b4n2g to ip-10-0-3-24.eu-west-3.compute.internal
flux-system deployment-controller weave-gitops ScalingReplicaSet Scaled up replica set weave-gitops-66f9ddc754 to 1
flux-system replicaset-controller weave-gitops-66f9ddc754 SuccessfulCreate Created pod: weave-gitops-66f9ddc754-b4n2g
flux-system kubelet ip-10-0-3-24.eu-west-3.compute.internal weave-gitops-66f9ddc754-b4n2g Pulling Pulling image "ghcr.io/weaveworks/wego-app:v0.31.2"
crossplane-system replicaset-controller crossplane-rbac-manager-6dc5679868 SuccessfulCreate Created pod: crossplane-rbac-manager-6dc5679868-fzcrw
crossplane-system deployment-controller crossplane-rbac-manager ScalingReplicaSet Scaled up replica set crossplane-rbac-manager-6dc5679868 to 1
crossplane-system default-scheduler crossplane-6b9848b7bd-994cl Scheduled Successfully assigned crossplane-system/crossplane-6b9848b7bd-994cl to ip-10-0-3-24.eu-west-3.compute.internal
crossplane-system deployment-controller crossplane ScalingReplicaSet Scaled up replica set crossplane-6b9848b7bd to 1
crossplane-system default-scheduler crossplane-rbac-manager-6dc5679868-fzcrw Scheduled Successfully assigned crossplane-system/crossplane-rbac-manager-6dc5679868-fzcrw to ip-10-0-3-24.eu-west-3.compute.internal
crossplane-system replicaset-controller crossplane-6b9848b7bd SuccessfulCreate Created pod: crossplane-6b9848b7bd-994cl
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-rbac-manager-6dc5679868-fzcrw Pulling Pulling image "crossplane/crossplane:v1.13.2"
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-6b9848b7bd-994cl Pulling Pulling image "crossplane/crossplane:v1.13.2"
observability job-controller kube-prometheus-stack-admission-create SuccessfulCreate Created pod: kube-prometheus-stack-admission-create-7f8wb
observability default-scheduler kube-prometheus-stack-admission-create-7f8wb Scheduled Successfully assigned observability/kube-prometheus-stack-admission-create-7f8wb to ip-10-0-3-24.eu-west-3.compute.internal
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-rbac-manager-6dc5679868-fzcrw Pulled Successfully pulled image "crossplane/crossplane:v1.13.2" in 2.66378274s (2.663794455s including waiting)
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-6b9848b7bd-994cl Pulled Successfully pulled image "crossplane/crossplane:v1.13.2" in 2.561674128s (2.561688366s including waiting)
flux-system kubelet ip-10-0-3-24.eu-west-3.compute.internal weave-gitops-66f9ddc754-b4n2g Pulled Successfully pulled image "ghcr.io/weaveworks/wego-app:v0.31.2" in 4.139885585s (4.139899962s including waiting)
flux-system kubelet ip-10-0-3-24.eu-west-3.compute.internal weave-gitops-66f9ddc754-b4n2g Created Created container weave-gitops
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-6b9848b7bd-994cl Started Started container crossplane-init
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-rbac-manager-6dc5679868-fzcrw Created Created container crossplane-init
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-rbac-manager-6dc5679868-fzcrw Started Started container crossplane-init
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-6b9848b7bd-994cl Created Created container crossplane-init
flux-system kubelet ip-10-0-3-24.eu-west-3.compute.internal weave-gitops-66f9ddc754-b4n2g Started Started container weave-gitops
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-admission-create-7f8wb Pulling Pulling image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6"
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-admission-create-7f8wb Created Created container create
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-admission-create-7f8wb Started Started container create
flux-system helm-controller weave-gitops info Helm install succeeded
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-admission-create-7f8wb Pulled Successfully pulled image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6" in 1.280504128s (1.280512245s including waiting)
crossplane-system crossplane-rbac-manager-6dc5679868-fzcrw_f45fb928-ded6-4163-a798-58578e00670c crossplane-leader-election-rbac LeaderElection crossplane-rbac-manager-6dc5679868-fzcrw_f45fb928-ded6-4163-a798-58578e00670c became leader
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-rbac-manager-6dc5679868-fzcrw Started Started container crossplane
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-rbac-manager-6dc5679868-fzcrw Created Created container crossplane
flux-system kustomize-controller flux-config Progressing Health check passed in 15.185183904s
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-rbac-manager-6dc5679868-fzcrw Pulled Container image "crossplane/crossplane:v1.13.2" already present on machine
flux-system kustomize-controller flux-config ReconciliationSucceeded Reconciliation finished in 15.611695777s, next run in 4m0s
crossplane-system crossplane-6b9848b7bd-994cl_5c7fbfd5-6b87-46a3-a67b-ea24ad9f4010 crossplane-leader-election-core LeaderElection crossplane-6b9848b7bd-994cl_5c7fbfd5-6b87-46a3-a67b-ea24ad9f4010 became leader
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-6b9848b7bd-994cl Started Started container crossplane
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-6b9848b7bd-994cl Created Created container crossplane
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal crossplane-6b9848b7bd-994cl Pulled Container image "crossplane/crossplane:v1.13.2" already present on machine
observability job-controller kube-prometheus-stack-admission-create Completed Job completed
observability default-scheduler kube-prometheus-stack-grafana-685df6bf8f-vczw5 Scheduled Successfully assigned observability/kube-prometheus-stack-grafana-685df6bf8f-vczw5 to ip-10-0-2-105.eu-west-3.compute.internal
observability daemonset-controller kube-prometheus-stack-prometheus-node-exporter SuccessfulCreate Created pod: kube-prometheus-stack-prometheus-node-exporter-bq2dc
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-prometheus-node-exporter-km8ht Pulling Pulling image "quay.io/prometheus/node-exporter:v1.6.1"
observability replicaset-controller kube-prometheus-stack-operator-764c84db8b SuccessfulCreate Created pod: kube-prometheus-stack-operator-764c84db8b-t7md5
observability replicaset-controller kube-prometheus-stack-grafana-685df6bf8f SuccessfulCreate Created pod: kube-prometheus-stack-grafana-685df6bf8f-vczw5
observability default-scheduler kube-prometheus-stack-prometheus-node-exporter-bq2dc Scheduled Successfully assigned observability/kube-prometheus-stack-prometheus-node-exporter-bq2dc to ip-10-0-2-105.eu-west-3.compute.internal
observability default-scheduler kube-prometheus-stack-operator-764c84db8b-t7md5 Scheduled Successfully assigned observability/kube-prometheus-stack-operator-764c84db8b-t7md5 to ip-10-0-3-24.eu-west-3.compute.internal
observability default-scheduler kube-prometheus-stack-prometheus-node-exporter-km8ht Scheduled Successfully assigned observability/kube-prometheus-stack-prometheus-node-exporter-km8ht to ip-10-0-3-24.eu-west-3.compute.internal
observability endpoint-controller kube-prometheus-stack-prometheus-node-exporter FailedToUpdateEndpoint Failed to update endpoint observability/kube-prometheus-stack-prometheus-node-exporter: Operation cannot be fulfilled on endpoints "kube-prometheus-stack-prometheus-node-exporter": the object has been modified; please apply your changes to the latest version and try again
observability deployment-controller kube-prometheus-stack-operator ScalingReplicaSet Scaled up replica set kube-prometheus-stack-operator-764c84db8b to 1
observability deployment-controller kube-prometheus-stack-kube-state-metrics ScalingReplicaSet Scaled up replica set kube-prometheus-stack-kube-state-metrics-8667b58b4d to 1
observability replicaset-controller kube-prometheus-stack-kube-state-metrics-8667b58b4d SuccessfulCreate Created pod: kube-prometheus-stack-kube-state-metrics-8667b58b4d-r4md4
observability daemonset-controller kube-prometheus-stack-prometheus-node-exporter SuccessfulCreate Created pod: kube-prometheus-stack-prometheus-node-exporter-km8ht
observability deployment-controller kube-prometheus-stack-grafana ScalingReplicaSet Scaled up replica set kube-prometheus-stack-grafana-685df6bf8f to 1
observability default-scheduler kube-prometheus-stack-kube-state-metrics-8667b58b4d-r4md4 Scheduled Successfully assigned observability/kube-prometheus-stack-kube-state-metrics-8667b58b4d-r4md4 to ip-10-0-2-105.eu-west-3.compute.internal
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 FailedMount MountVolume.SetUp failed for volume "sc-dashboard-provider" : failed to sync configmap cache: timed out waiting for the condition
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-kube-state-metrics-8667b58b4d-r4md4 FailedMount MountVolume.SetUp failed for volume "kube-api-access-lddn7" : failed to sync configmap cache: timed out waiting for the condition
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-prometheus-node-exporter-bq2dc Pulling Pulling image "quay.io/prometheus/node-exporter:v1.6.1"
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-operator-764c84db8b-t7md5 Pulling Pulling image "quay.io/prometheus-operator/prometheus-operator:v0.67.1"
crossplane-system helm-controller crossplane info Helm install succeeded
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-prometheus-node-exporter-bq2dc Created Created container node-exporter
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-prometheus-node-exporter-km8ht Created Created container node-exporter
flux-system kustomize-controller crossplane-controller Progressing Health check passed in 20.096604172s
flux-system kustomize-controller crossplane-controller ReconciliationSucceeded Reconciliation finished in 20.514529406s, next run in 4m0s
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-prometheus-node-exporter-bq2dc Pulled Successfully pulled image "quay.io/prometheus/node-exporter:v1.6.1" in 1.639182356s (1.63919348s including waiting)
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-prometheus-node-exporter-km8ht Pulled Successfully pulled image "quay.io/prometheus/node-exporter:v1.6.1" in 1.596761362s (1.596777613s including waiting)
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-prometheus-node-exporter-km8ht Started Started container node-exporter
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-prometheus-node-exporter-bq2dc Started Started container node-exporter
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-operator-764c84db8b-t7md5 Started Started container kube-prometheus-stack
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-operator-764c84db8b-t7md5 Created Created container kube-prometheus-stack
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Pulling Pulling image "docker.io/curlimages/curl:7.85.0"
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-kube-state-metrics-8667b58b4d-r4md4 Pulling Pulling image "registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.0"
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal kube-prometheus-stack-operator-764c84db8b-t7md5 Pulled Successfully pulled image "quay.io/prometheus-operator/prometheus-operator:v0.67.1" in 2.030800556s (2.030812057s including waiting)
observability default-scheduler prometheus-kube-prometheus-stack-prometheus-0 Scheduled Successfully assigned observability/prometheus-kube-prometheus-stack-prometheus-0 to ip-10-0-3-24.eu-west-3.compute.internal
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-kube-state-metrics-8667b58b4d-r4md4 Created Created container kube-state-metrics
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-kube-state-metrics-8667b58b4d-r4md4 Pulled Successfully pulled image "registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.0" in 1.345767089s (1.345780269s including waiting)
observability statefulset-controller prometheus-kube-prometheus-stack-prometheus SuccessfulCreate create Pod prometheus-kube-prometheus-stack-prometheus-0 in StatefulSet prometheus-kube-prometheus-stack-prometheus successful
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-kube-state-metrics-8667b58b4d-r4md4 Started Started container kube-state-metrics
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Pulling Pulling image "quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1"
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Started Started container download-dashboards
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Created Created container download-dashboards
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Pulled Successfully pulled image "docker.io/curlimages/curl:7.85.0" in 2.481732034s (2.481743548s including waiting)
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Started Started container init-config-reloader
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Pulled Successfully pulled image "quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1" in 1.475533099s (1.475546065s including waiting)
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Pulling Pulling image "quay.io/kiwigrid/k8s-sidecar:1.24.6"
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Created Created container init-config-reloader
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Pulling Pulling image "quay.io/prometheus/prometheus:v2.46.0"
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Pulled Successfully pulled image "quay.io/kiwigrid/k8s-sidecar:1.24.6" in 2.180642936s (2.180656943s including waiting)
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Pulling Pulling image "docker.io/grafana/grafana:10.1.1"
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Pulled Successfully pulled image "quay.io/prometheus/prometheus:v2.46.0" in 3.557235913s (3.557249353s including waiting)
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Started Started container config-reloader
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Created Created container prometheus
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Started Started container prometheus
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Pulled Container image "quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1" already present on machine
observability kubelet ip-10-0-3-24.eu-west-3.compute.internal prometheus-kube-prometheus-stack-prometheus-0 Created Created container config-reloader
flux-system kustomize-controller crossplane-providers ReconciliationSucceeded Reconciliation finished in 312.821325ms, next run in 2m0s
(x4) flux-system kustomize-controller crossplane-configuration DependencyNotReady Dependencies do not meet ready condition, retrying in 30s
flux-system kustomize-controller crossplane-providers Progressing ControllerConfig/aws-config created Provider/provider-aws-iam created
default packages/provider.pkg.crossplane.io provider-aws-iam InstallPackageRevision cannot apply package revision: cannot patch object: Operation cannot be fulfilled on providerrevisions.pkg.crossplane.io "provider-aws-iam-62ccd0ca21a2": the object has been modified; please apply your changes to the latest version and try again
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Pulled Successfully pulled image "docker.io/grafana/grafana:10.1.1" in 5.043088675s (5.043117576s including waiting)
(x6) default packages/provider.pkg.crossplane.io provider-aws-iam InstallPackageRevision current package revision health is unknown
default packages/providerrevision.pkg.crossplane.io provider-aws-iam-62ccd0ca21a2 SyncPackage cannot update package revision object metadata: Operation cannot be fulfilled on providerrevisions.pkg.crossplane.io "provider-aws-iam-62ccd0ca21a2": the object has been modified; please apply your changes to the latest version and try again
(x3) default packages/providerrevision.pkg.crossplane.io provider-aws-iam-62ccd0ca21a2 ResolveDependencies cannot resolve package dependencies: missing dependencies: [xpkg.upbound.io/upbound/provider-family-aws]
crossplane-system default-scheduler upbound-provider-family-aws-710d8cfe9f53-8664d497bd-hkp9m Scheduled Successfully assigned crossplane-system/upbound-provider-family-aws-710d8cfe9f53-8664d497bd-hkp9m to ip-10-0-2-105.eu-west-3.compute.internal
crossplane-system deployment-controller upbound-provider-family-aws-710d8cfe9f53 ScalingReplicaSet Scaled up replica set upbound-provider-family-aws-710d8cfe9f53-8664d497bd to 1
(x5) default packages/provider.pkg.crossplane.io upbound-provider-family-aws InstallPackageRevision current package revision health is unknown
(x2) default packages/provider.pkg.crossplane.io upbound-provider-family-aws InstallPackageRevision current package revision is unhealthy
crossplane-system replicaset-controller upbound-provider-family-aws-710d8cfe9f53-8664d497bd SuccessfulCreate Created pod: upbound-provider-family-aws-710d8cfe9f53-8664d497bd-hkp9m
crossplane-system kubelet ip-10-0-2-105.eu-west-3.compute.internal upbound-provider-family-aws-710d8cfe9f53-8664d497bd-hkp9m Pulling Pulling image "xpkg.upbound.io/upbound/provider-family-aws:v0.40.0"
crossplane-system kubelet ip-10-0-2-105.eu-west-3.compute.internal upbound-provider-family-aws-710d8cfe9f53-8664d497bd-hkp9m Created Created container provider-family-aws
crossplane-system kubelet ip-10-0-2-105.eu-west-3.compute.internal upbound-provider-family-aws-710d8cfe9f53-8664d497bd-hkp9m Started Started container provider-family-aws
(x4) default packages/providerrevision.pkg.crossplane.io upbound-provider-family-aws-710d8cfe9f53 SyncPackage cannot run post establish hook for package: provider package deployment is unavailable: Deployment does not have minimum availability.
crossplane-system kubelet ip-10-0-2-105.eu-west-3.compute.internal upbound-provider-family-aws-710d8cfe9f53-8664d497bd-hkp9m Pulled Successfully pulled image "xpkg.upbound.io/upbound/provider-family-aws:v0.40.0" in 5.340946431s (5.340961813s including waiting)
crossplane-system deployment-controller provider-aws-iam-62ccd0ca21a2 ScalingReplicaSet Scaled up replica set provider-aws-iam-62ccd0ca21a2-69c4d59d65 to 1
crossplane-system default-scheduler provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w Scheduled Successfully assigned crossplane-system/provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w to ip-10-0-3-24.eu-west-3.compute.internal
crossplane-system replicaset-controller provider-aws-iam-62ccd0ca21a2-69c4d59d65 SuccessfulCreate Created pod: provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w
(x3) observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Pulled Container image "quay.io/kiwigrid/k8s-sidecar:1.24.6" already present on machine
default packages/providerrevision.pkg.crossplane.io provider-aws-iam-62ccd0ca21a2 SyncPackage cannot establish control of object: Operation cannot be fulfilled on customresourcedefinitions.apiextensions.k8s.io "servercertificates.iam.aws.upbound.io": the object has been modified; please apply your changes to the latest version and try again
(x2) observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Pulled Container image "docker.io/grafana/grafana:10.1.1" already present on machine
(x3) observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Failed Error: secret "kube-prometheus-stack-grafana-admin" not found
(x8) default rbac/providerrevision.pkg.crossplane.io upbound-provider-family-aws-710d8cfe9f53 BindClusterRole Bound system ClusterRole to provider ServiceAccount(s)
(x2) observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-grafana-685df6bf8f-vczw5 Pulled Container image "quay.io/kiwigrid/k8s-sidecar:1.24.6" already present on machine
(x4) default packages/provider.pkg.crossplane.io upbound-provider-family-aws InstallPackageRevision Successfully installed package revision
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w Pulling Pulling image "xpkg.upbound.io/upbound/provider-aws-iam:v0.38.0"
default rbac/providerrevision.pkg.crossplane.io provider-aws-iam-62ccd0ca21a2 ApplyClusterRoles cannot apply ClusterRole: cannot update object: Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "crossplane:provider:provider-aws-iam-62ccd0ca21a2:system": the object has been modified; please apply your changes to the latest version and try again
(x10) default rbac/providerrevision.pkg.crossplane.io provider-aws-iam-62ccd0ca21a2 BindClusterRole Bound system ClusterRole to provider ServiceAccount(s)
(x3) default packages/provider.pkg.crossplane.io provider-aws-iam InstallPackageRevision current package revision is unhealthy
(x14) default rbac/providerrevision.pkg.crossplane.io provider-aws-iam-62ccd0ca21a2 ApplyClusterRoles Applied RBAC ClusterRoles
(x3) default packages/providerrevision.pkg.crossplane.io provider-aws-iam-62ccd0ca21a2 SyncPackage cannot run post establish hook for package: provider package deployment is unavailable: Deployment does not have minimum availability.
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w Created Created container provider-aws-iam
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w Pulled Successfully pulled image "xpkg.upbound.io/upbound/provider-aws-iam:v0.38.0" in 5.180092319s (5.180102377s including waiting)
(x4) default packages/providerrevision.pkg.crossplane.io upbound-provider-family-aws-710d8cfe9f53 SyncPackage Successfully configured package revision
crossplane-system kubelet ip-10-0-3-24.eu-west-3.compute.internal provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w Started Started container provider-aws-iam
(x4) default packages/provider.pkg.crossplane.io provider-aws-iam InstallPackageRevision Successfully installed package revision
(x12) default rbac/providerrevision.pkg.crossplane.io upbound-provider-family-aws-710d8cfe9f53 ApplyClusterRoles Applied RBAC ClusterRoles
(x4) default packages/providerrevision.pkg.crossplane.io provider-aws-iam-62ccd0ca21a2 SyncPackage Successfully configured package revision
default offered/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io OfferClaim waiting for composite resource claim CustomResourceDefinition to be established
default revisions/composition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io CreateRevision Created new revision
default offered/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io OfferClaim cannot apply rendered composite resource claim CustomResourceDefinition: cannot update object: Operation cannot be fulfilled on customresourcedefinitions.apiextensions.k8s.io "irsas.aws.platformref.upbound.io": the object has been modified; please apply your changes to the latest version and try again
flux-system kustomize-controller crossplane-configuration ReconciliationSucceeded Reconciliation finished in 383.815356ms, next run in 2m0s
default defined/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io EstablishComposite cannot apply rendered composite resource CustomResourceDefinition: cannot create object: customresourcedefinitions.apiextensions.k8s.io "xirsas.aws.platformref.upbound.io" already exists
(x2) default defined/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io EstablishComposite cannot apply rendered composite resource CustomResourceDefinition: cannot update object: Operation cannot be fulfilled on customresourcedefinitions.apiextensions.k8s.io "xirsas.aws.platformref.upbound.io": the object has been modified; please apply your changes to the latest version and try again
(x4) default rbac/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io ApplyClusterRoles Applied RBAC ClusterRoles
(x5) flux-system kustomize-controller infrastructure DependencyNotReady Dependencies do not meet ready condition, retrying in 30s
default offered/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io OfferClaim cannot add composite resource claim finalizer: cannot update object: Operation cannot be fulfilled on compositeresourcedefinitions.apiextensions.crossplane.io "xirsas.aws.platformref.upbound.io": the object has been modified; please apply your changes to the latest version and try again
flux-system kustomize-controller crossplane-configuration Progressing CompositeResourceDefinition/xirsas.aws.platformref.upbound.io created Composition/xirsas.aws.platformref.upbound.io created EnvironmentConfig/irsa-environment created ProviderConfig/default created
default defined/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io EstablishComposite waiting for composite resource CustomResourceDefinition to be established
(x3) default offered/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io OfferClaim (Re)started composite resource claim controller
(x8) default defined/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io RenderCRD Rendered composite resource CustomResourceDefinition
(x4) default offered/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io OfferClaim Applied composite resource claim CustomResourceDefinition
(x6) default offered/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io RenderCRD Rendered composite resource claim CustomResourceDefinition
(x4) default defined/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io EstablishComposite (Re)started composite resource controller
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xirsas.aws.platformref.upbound.io EstablishComposite Applied composite resource CustomResourceDefinition
kube-system helm-controller external-dns info HelmChart 'kube-system/kube-system-external-dns' is not ready
default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-dns-mycluster-0-8sfwb CompositionUpdatePolicy Default composition update policy has been selected
flux-system kustomize-controller infrastructure Progressing IRSA/kube-system/xplane-external-dns-mycluster-0 created IRSA/kube-system/xplane-loadbalancer-controller-mycluster-0 created Gateway/infrastructure/platform created HelmRelease/kube-system/aws-load-balancer-controller created HelmRelease/kube-system/external-dns created HelmRepository/kube-system/eks created HelmRepository/kube-system/external-dns created
kube-system offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0 CompositeDeletePolicy Default composite delete policy has been selected
kube-system offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-dns-mycluster-0 CompositeDeletePolicy Default composite delete policy has been selected
kube-system helm-controller aws-load-balancer-controller info HelmChart 'kube-system/kube-system-aws-load-balancer-controller' is not ready
infrastructure service-controller cilium-gateway-platform EnsuringLoadBalancer Ensuring load balancer
default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0-6xwkv CompositionUpdatePolicy Default composition update policy has been selected
kube-system helm-controller aws-load-balancer-controller info Helm install has started
kube-system source-controller eks NewArtifact stored fetched index of size 319.6kB from 'https://aws.github.io/eks-charts'
(x2) kube-system offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-dns-mycluster-0 ConfigureCompositeResource cannot apply composite resource: cannot patch object: Operation cannot be fulfilled on xirsas.aws.platformref.upbound.io "xplane-external-dns-mycluster-0-8sfwb": the object has been modified; please apply your changes to the latest version and try again
flux-system kustomize-controller infrastructure ReconciliationSucceeded Reconciliation finished in 531.382717ms, next run in 4m0s
kube-system source-controller kube-system-external-dns NoSourceArtifact no artifact available for HelmRepository source 'external-dns'
kube-system source-controller kube-system-aws-load-balancer-controller ChartPullSucceeded pulled 'aws-load-balancer-controller' chart with version '1.6.0'
kube-system source-controller kube-system-aws-load-balancer-controller NoSourceArtifact no artifact available for HelmRepository source 'eks'
(x2) kube-system offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0 ConfigureCompositeResource cannot apply composite resource: cannot patch object: Operation cannot be fulfilled on xirsas.aws.platformref.upbound.io "xplane-loadbalancer-controller-mycluster-0-6xwkv": the object has been modified; please apply your changes to the latest version and try again
kube-system source-controller external-dns NewArtifact stored fetched index of size 28.89kB from 'https://kubernetes-sigs.github.io/external-dns/'
kube-system helm-controller external-dns info Helm install has started
(x4) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0-6xwkv ComposeResources Composed resource "irsa-attachment" is not yet ready
(x4) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-dns-mycluster-0-8sfwb ComposeResources Composed resource "irsa-attachment" is not yet ready
kube-system source-controller kube-system-external-dns ChartPullSucceeded pulled 'external-dns' chart with version '1.13.0'
infrastructure service-controller cilium-gateway-platform EnsuredLoadBalancer Ensured load balancer
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-loadbalancer-controller-mycluster-0-6xwkv-pln6p CannotInitializeManagedResource Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-loadbalancer-controller-mycluster-0-6xwkv-pln6p": the object has been modified; please apply your changes to the latest version and try again
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-loadbalancer-controller-mycluster-0-6xwkv-pln6p CreatedExternalResource Successfully requested creation of external resource
default managed/iam.aws.upbound.io/v1beta1, kind=role xplane-loadbalancer-controller-mycluster-0-6xwkv-cqslf CreatedExternalResource Successfully requested creation of external resource
default managed/iam.aws.upbound.io/v1beta1, kind=role xplane-external-dns-mycluster-0-8sfwb-f5hml CreatedExternalResource Successfully requested creation of external resource
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-external-dns-mycluster-0-8sfwb-cctxq CreatedExternalResource Successfully requested creation of external resource
kube-system replicaset-controller external-dns-7f57d96597 SuccessfulCreate Created pod: external-dns-7f57d96597-g6cq4
kube-system default-scheduler external-dns-7f57d96597-g6cq4 Scheduled Successfully assigned kube-system/external-dns-7f57d96597-g6cq4 to ip-10-0-3-24.eu-west-3.compute.internal
kube-system deployment-controller external-dns ScalingReplicaSet Scaled up replica set external-dns-7f57d96597 to 1
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal external-dns-7f57d96597-g6cq4 Pulling Pulling image "registry.k8s.io/external-dns/external-dns:v0.13.5"
kube-system replicaset-controller aws-load-balancer-controller-7c44557bd9 SuccessfulCreate Created pod: aws-load-balancer-controller-7c44557bd9-4mfrc
kube-system default-scheduler aws-load-balancer-controller-7c44557bd9-4mfrc Scheduled Successfully assigned kube-system/aws-load-balancer-controller-7c44557bd9-4mfrc to ip-10-0-2-105.eu-west-3.compute.internal
(x4) default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-external-dns-mycluster-0-8sfwb-9mcg7 CannotResolveResourceReferences cannot resolve references: mg.Spec.ForProvider.PolicyArn: referenced field was empty (referenced resource may not yet be ready)
kube-system deployment-controller aws-load-balancer-controller ScalingReplicaSet Scaled up replica set aws-load-balancer-controller-7c44557bd9 to 1
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-loadbalancer-controller-mycluster-0-6xwkv-pln6p CannotUpdateManagedResource Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-loadbalancer-controller-mycluster-0-6xwkv-pln6p": the object has been modified; please apply your changes to the latest version and try again
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-external-dns-mycluster-0-8sfwb-cctxq CannotUpdateManagedResource Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-external-dns-mycluster-0-8sfwb-cctxq": the object has been modified; please apply your changes to the latest version and try again
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-load-balancer-controller-7c44557bd9-4mfrc Pulling Pulling image "public.ecr.aws/eks/aws-load-balancer-controller:v2.6.0"
kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal external-dns-7f57d96597-g6cq4 Pulled Successfully pulled image "registry.k8s.io/external-dns/external-dns:v0.13.5" in 2.430998415s (2.431009186s including waiting)
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-dns-mycluster-0-8sfwb ComposeResources Composed resource "irsa-policy" is not yet ready
default managed/iam.aws.upbound.io/v1beta1, kind=role xplane-loadbalancer-controller-mycluster-0-6xwkv-cqslf CannotUpdateManagedResource Operation cannot be fulfilled on roles.iam.aws.upbound.io "xplane-loadbalancer-controller-mycluster-0-6xwkv-cqslf": the object has been modified; please apply your changes to the latest version and try again
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-dns-mycluster-0-8sfwb ComposeResources Successfully composed resources
default managed/iam.aws.upbound.io/v1beta1, kind=role xplane-external-dns-mycluster-0-8sfwb-f5hml CannotUpdateManagedResource Operation cannot be fulfilled on roles.iam.aws.upbound.io "xplane-external-dns-mycluster-0-8sfwb-f5hml": the object has been modified; please apply your changes to the latest version and try again
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-dns-mycluster-0-8sfwb ComposeResources Composed resource "irsa-role" is not yet ready
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0-6xwkv ComposeResources Composed resource "irsa-policy" is not yet ready
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0-6xwkv ComposeResources Composed resource "irsa-role" is not yet ready
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0-6xwkv ComposeResources Successfully composed resources
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-load-balancer-controller-7c44557bd9-4mfrc Started Started container aws-load-balancer-controller
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-load-balancer-controller-7c44557bd9-4mfrc Created Created container aws-load-balancer-controller
kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal aws-load-balancer-controller-7c44557bd9-4mfrc Pulled Successfully pulled image "public.ecr.aws/eks/aws-load-balancer-controller:v2.6.0" in 2.172576149s (2.172590581s including waiting)
kube-system helm-controller aws-load-balancer-controller info Helm install succeeded
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-external-dns-mycluster-0-8sfwb-cctxq CannotInitializeManagedResource Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-external-dns-mycluster-0-8sfwb-cctxq": the object has been modified; please apply your changes to the latest version and try again
kube-system aws-load-balancer-controller-7c44557bd9-4mfrc_453bae07-8675-450e-a5b2-04d8a55d9267 aws-load-balancer-controller-leader LeaderElection aws-load-balancer-controller-7c44557bd9-4mfrc_453bae07-8675-450e-a5b2-04d8a55d9267 became leader
(x4) kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal external-dns-7f57d96597-g6cq4 BackOff Back-off restarting failed container external-dns in pod external-dns-7f57d96597-g6cq4_kube-system(af876645-4f3f-4631-8c83-25f2e9a9da33)
(x5) default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-loadbalancer-controller-mycluster-0-6xwkv-nhfkn CannotResolveResourceReferences cannot resolve references: mg.Spec.ForProvider.PolicyArn: referenced field was empty (referenced resource may not yet be ready)
default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-external-dns-mycluster-0-8sfwb-9mcg7 CreatedExternalResource Successfully requested creation of external resource
default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-external-dns-mycluster-0-8sfwb-9mcg7 CannotUpdateManagedResource Operation cannot be fulfilled on rolepolicyattachments.iam.aws.upbound.io "xplane-external-dns-mycluster-0-8sfwb-9mcg7": the object has been modified; please apply your changes to the latest version and try again
(x11) kube-system offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-dns-mycluster-0 BindCompositeResource Composite resource is not yet ready
(x11) kube-system offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-dns-mycluster-0 ConfigureCompositeResource Successfully applied composite resource
(x10) kube-system offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0 BindCompositeResource Composite resource is not yet ready
(x3) kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal external-dns-7f57d96597-g6cq4 Created Created container external-dns
(x2) kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal external-dns-7f57d96597-g6cq4 Pulled Container image "registry.k8s.io/external-dns/external-dns:v0.13.5" already present on machine
(x3) kube-system kubelet ip-10-0-3-24.eu-west-3.compute.internal external-dns-7f57d96597-g6cq4 Started Started container external-dns
default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-loadbalancer-controller-mycluster-0-6xwkv-nhfkn CreatedExternalResource Successfully requested creation of external resource
default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-loadbalancer-controller-mycluster-0-6xwkv-nhfkn CannotUpdateManagedResource Operation cannot be fulfilled on rolepolicyattachments.iam.aws.upbound.io "xplane-loadbalancer-controller-mycluster-0-6xwkv-nhfkn": the object has been modified; please apply your changes to the latest version and try again
kube-system helm-controller external-dns info Helm install succeeded
flux-system kustomize-controller crossplane-providers ReconciliationSucceeded Reconciliation finished in 172.272501ms, next run in 2m0s
flux-system kustomize-controller namespaces ReconciliationSucceeded Reconciliation finished in 157.521998ms, next run in 4m0s
flux-system kustomize-controller crossplane-configuration ReconciliationSucceeded Reconciliation finished in 378.389904ms, next run in 2m0s
kube-system offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0 BindCompositeResource Successfully bound composite resource
(x11) kube-system offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0 ConfigureCompositeResource Successfully applied composite resource
flux-system kustomize-controller crds ReconciliationSucceeded Reconciliation finished in 1.498833707s, next run in 4m0s
security helm-controller kyverno info HelmChart 'flux-system/security-kyverno' is not ready
security helm-controller external-secrets info HelmChart 'flux-system/security-external-secrets' is not ready
flux-system kustomize-controller security Progressing IRSA/security/xplane-cert-manager-mycluster-0 created IRSA/security/xplane-external-secrets-mycluster-0 created ClusterIssuer/letsencrypt-prod created ClusterSecretStore/clustersecretstore created HelmRelease/security/cert-manager created HelmRelease/security/external-secrets created HelmRelease/security/kyverno created HelmRelease/security/kyverno-policies created ClusterPolicy/mutate-cilium-echo-gateway created ClusterPolicy/mutate-cilium-echo-tls-gateway created ClusterPolicy/mutate-cilium-platform-gateway created HelmRepository/flux-system/external-secrets created HelmRepository/flux-system/jetstack created HelmRepository/flux-system/kyverno created
(x2) security offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0 ConfigureCompositeResource cannot apply composite resource: cannot patch object: Operation cannot be fulfilled on xirsas.aws.platformref.upbound.io "xplane-cert-manager-mycluster-0-5m9n4": the object has been modified; please apply your changes to the latest version and try again
default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-secrets-mycluster-0-d6vfd CompositionUpdatePolicy Default composition update policy has been selected
security offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0 CompositeDeletePolicy Default composite delete policy has been selected
security offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-secrets-mycluster-0 CompositeDeletePolicy Default composite delete policy has been selected
security helm-controller cert-manager info HelmChart 'flux-system/security-cert-manager' is not ready
security helm-controller kyverno-policies info HelmChart 'flux-system/security-kyverno-policies' is not ready
security offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-secrets-mycluster-0 ConfigureCompositeResource cannot apply composite resource: cannot patch object: Operation cannot be fulfilled on xirsas.aws.platformref.upbound.io "xplane-external-secrets-mycluster-0-d6vfd": the object has been modified; please apply your changes to the latest version and try again
default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0-5m9n4 CompositionUpdatePolicy Default composition update policy has been selected
flux-system source-controller security-cert-manager ChartPullSucceeded pulled 'cert-manager' chart with version 'v1.12.4'
flux-system source-controller security-kyverno NoSourceArtifact no artifact available for HelmRepository source 'kyverno'
flux-system source-controller jetstack NewArtifact stored fetched index of size 220.2kB from 'https://charts.jetstack.io'
flux-system source-controller security-cert-manager NoSourceArtifact no artifact available for HelmRepository source 'jetstack'
flux-system source-controller kyverno NewArtifact stored fetched index of size 315.4kB from 'https://kyverno.github.io/kyverno/'
flux-system source-controller security-external-secrets NoSourceArtifact no artifact available for HelmRepository source 'external-secrets'
flux-system source-controller security-kyverno-policies NoSourceArtifact no artifact available for HelmRepository source 'kyverno'
flux-system source-controller external-secrets NewArtifact stored fetched index of size 53.82kB from 'https://charts.external-secrets.io'
flux-system source-controller security-kyverno ChartPullSucceeded pulled 'kyverno' chart with version '3.0.5'
(x4) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-secrets-mycluster-0-d6vfd ComposeResources Composed resource "irsa-attachment" is not yet ready
flux-system source-controller security-kyverno-policies ChartPullSucceeded pulled 'kyverno-policies' chart with version '3.0.3'
(x4) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0-5m9n4 ComposeResources Composed resource "irsa-attachment" is not yet ready
flux-system source-controller security-external-secrets ChartPullSucceeded pulled 'external-secrets' chart with version '0.9.4'
security helm-controller cert-manager info Helm install has started
security helm-controller kyverno info Helm install has started
security helm-controller external-secrets info Helm install has started
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-external-secrets-mycluster-0-d6vfd-lkc5w CannotInitializeManagedResource Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-external-secrets-mycluster-0-d6vfd-lkc5w": the object has been modified; please apply your changes to the latest version and try again
default managed/iam.aws.upbound.io/v1beta1, kind=role xplane-external-secrets-mycluster-0-d6vfd-tttrx CreatedExternalResource Successfully requested creation of external resource
default managed/iam.aws.upbound.io/v1beta1, kind=role xplane-external-secrets-mycluster-0-d6vfd-tttrx CannotInitializeManagedResource Operation cannot be fulfilled on roles.iam.aws.upbound.io "xplane-external-secrets-mycluster-0-d6vfd-tttrx": the object has been modified; please apply your changes to the latest version and try again
default managed/iam.aws.upbound.io/v1beta1, kind=role xplane-cert-manager-mycluster-0-5m9n4-glqmd CreatedExternalResource Successfully requested creation of external resource
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-cert-manager-mycluster-0-5m9n4-49nzb CreatedExternalResource Successfully requested creation of external resource
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-external-secrets-mycluster-0-d6vfd-lkc5w CreatedExternalResource Successfully requested creation of external resource
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-cert-manager-mycluster-0-5m9n4-49nzb CannotInitializeManagedResource Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-cert-manager-mycluster-0-5m9n4-49nzb": the object has been modified; please apply your changes to the latest version and try again
security deployment-controller external-secrets-cert-controller ScalingReplicaSet Scaled up replica set external-secrets-cert-controller-8665fc68 to 1
security deployment-controller external-secrets-webhook ScalingReplicaSet Scaled up replica set external-secrets-webhook-589765875 to 1
security deployment-controller external-secrets ScalingReplicaSet Scaled up replica set external-secrets-6b85658cd8 to 1
(x2) security controllermanager external-secrets-pdb NoPods No matching pods found
security replicaset-controller cert-manager-cainjector-57b9db9cd SuccessfulCreate Created pod: cert-manager-cainjector-57b9db9cd-4mpm8
security default-scheduler cert-manager-bc8c566cf-xmxb4 Scheduled Successfully assigned security/cert-manager-bc8c566cf-xmxb4 to ip-10-0-3-24.eu-west-3.compute.internal
security default-scheduler external-secrets-6b85658cd8-s4d24 Scheduled Successfully assigned security/external-secrets-6b85658cd8-s4d24 to ip-10-0-2-105.eu-west-3.compute.internal
security deployment-controller cert-manager ScalingReplicaSet Scaled up replica set cert-manager-bc8c566cf to 1
security deployment-controller cert-manager-webhook ScalingReplicaSet Scaled up replica set cert-manager-webhook-7ffdd9664d to 1
security replicaset-controller external-secrets-6b85658cd8 SuccessfulCreate Created pod: external-secrets-6b85658cd8-s4d24
security default-scheduler external-secrets-cert-controller-8665fc68-fs2rh Scheduled Successfully assigned security/external-secrets-cert-controller-8665fc68-fs2rh to ip-10-0-2-105.eu-west-3.compute.internal
security deployment-controller cert-manager-cainjector ScalingReplicaSet Scaled up replica set cert-manager-cainjector-57b9db9cd to 1
security replicaset-controller cert-manager-webhook-7ffdd9664d SuccessfulCreate Created pod: cert-manager-webhook-7ffdd9664d-ng9hz
security replicaset-controller external-secrets-webhook-589765875 SuccessfulCreate Created pod: external-secrets-webhook-589765875-6s69h
security default-scheduler external-secrets-webhook-589765875-6s69h Scheduled Successfully assigned security/external-secrets-webhook-589765875-6s69h to ip-10-0-3-24.eu-west-3.compute.internal
security default-scheduler cert-manager-webhook-7ffdd9664d-ng9hz Scheduled Successfully assigned security/cert-manager-webhook-7ffdd9664d-ng9hz to ip-10-0-2-105.eu-west-3.compute.internal
security replicaset-controller cert-manager-bc8c566cf SuccessfulCreate Created pod: cert-manager-bc8c566cf-xmxb4
security default-scheduler cert-manager-cainjector-57b9db9cd-4mpm8 Scheduled Successfully assigned security/cert-manager-cainjector-57b9db9cd-4mpm8 to ip-10-0-2-105.eu-west-3.compute.internal
security replicaset-controller external-secrets-cert-controller-8665fc68 SuccessfulCreate Created pod: external-secrets-cert-controller-8665fc68-fs2rh
security kubelet ip-10-0-3-24.eu-west-3.compute.internal cert-manager-bc8c566cf-xmxb4 Pulling Pulling image "quay.io/jetstack/cert-manager-controller:v1.12.4"
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-cainjector-57b9db9cd-4mpm8 Pulling Pulling image "quay.io/jetstack/cert-manager-cainjector:v1.12.4"
security kubelet ip-10-0-2-105.eu-west-3.compute.internal external-secrets-6b85658cd8-s4d24 Pulling Pulling image "ghcr.io/external-secrets/external-secrets:v0.9.4"
security kubelet ip-10-0-3-24.eu-west-3.compute.internal external-secrets-webhook-589765875-6s69h Pulling Pulling image "ghcr.io/external-secrets/external-secrets:v0.9.4"
security kubelet ip-10-0-2-105.eu-west-3.compute.internal external-secrets-cert-controller-8665fc68-fs2rh Pulling Pulling image "ghcr.io/external-secrets/external-secrets:v0.9.4"
security replicaset-controller kyverno-cleanup-controller-566f7bc8c SuccessfulCreate Created pod: kyverno-cleanup-controller-566f7bc8c-88xlj
security default-scheduler kyverno-background-controller-67f4b647d7-26gr7 Scheduled Successfully assigned security/kyverno-background-controller-67f4b647d7-26gr7 to ip-10-0-3-24.eu-west-3.compute.internal
default managed/iam.aws.upbound.io/v1beta1, kind=role xplane-cert-manager-mycluster-0-5m9n4-glqmd CannotUpdateManagedResource Operation cannot be fulfilled on roles.iam.aws.upbound.io "xplane-cert-manager-mycluster-0-5m9n4-glqmd": the object has been modified; please apply your changes to the latest version and try again
security replicaset-controller kyverno-reports-controller-6f96648477 SuccessfulCreate Created pod: kyverno-reports-controller-6f96648477-jmfwg
security default-scheduler kyverno-reports-controller-6f96648477-jmfwg Scheduled Successfully assigned security/kyverno-reports-controller-6f96648477-jmfwg to ip-10-0-3-24.eu-west-3.compute.internal
security replicaset-controller kyverno-background-controller-67f4b647d7 SuccessfulCreate Created pod: kyverno-background-controller-67f4b647d7-26gr7
security deployment-controller kyverno-background-controller ScalingReplicaSet Scaled up replica set kyverno-background-controller-67f4b647d7 to 1
security deployment-controller kyverno-cleanup-controller ScalingReplicaSet Scaled up replica set kyverno-cleanup-controller-566f7bc8c to 1
security default-scheduler kyverno-admission-controller-75748bcb9c-jdsbk Scheduled Successfully assigned security/kyverno-admission-controller-75748bcb9c-jdsbk to ip-10-0-2-105.eu-west-3.compute.internal
security deployment-controller kyverno-reports-controller ScalingReplicaSet Scaled up replica set kyverno-reports-controller-6f96648477 to 1
security default-scheduler kyverno-cleanup-controller-566f7bc8c-88xlj Scheduled Successfully assigned security/kyverno-cleanup-controller-566f7bc8c-88xlj to ip-10-0-3-24.eu-west-3.compute.internal
security deployment-controller kyverno-admission-controller ScalingReplicaSet Scaled up replica set kyverno-admission-controller-75748bcb9c to 1
security replicaset-controller kyverno-admission-controller-75748bcb9c SuccessfulCreate Created pod: kyverno-admission-controller-75748bcb9c-jdsbk
flux-system kustomize-controller crossplane-controller ReconciliationSucceeded Reconciliation finished in 246.43364ms, next run in 4m0s
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-external-secrets-mycluster-0-d6vfd-lkc5w CannotUpdateManagedResource Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-external-secrets-mycluster-0-d6vfd-lkc5w": the object has been modified; please apply your changes to the latest version and try again
default managed/iam.aws.upbound.io/v1beta1, kind=role xplane-external-secrets-mycluster-0-d6vfd-tttrx CannotUpdateManagedResource Operation cannot be fulfilled on roles.iam.aws.upbound.io "xplane-external-secrets-mycluster-0-d6vfd-tttrx": the object has been modified; please apply your changes to the latest version and try again
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-background-controller-67f4b647d7-26gr7 Pulling Pulling image "ghcr.io/kyverno/background-controller:v1.10.3"
default managed/iam.aws.upbound.io/v1beta1, kind=policy xplane-cert-manager-mycluster-0-5m9n4-49nzb CannotUpdateManagedResource Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-cert-manager-mycluster-0-5m9n4-49nzb": the object has been modified; please apply your changes to the latest version and try again
security kubelet ip-10-0-3-24.eu-west-3.compute.internal cert-manager-bc8c566cf-xmxb4 Created Created container cert-manager-controller
security kubelet ip-10-0-3-24.eu-west-3.compute.internal external-secrets-webhook-589765875-6s69h Created Created container webhook
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-reports-controller-6f96648477-jmfwg Pulling Pulling image "ghcr.io/kyverno/reports-controller:v1.10.3"
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-secrets-mycluster-0-d6vfd ComposeResources Composed resource "irsa-policy" is not yet ready
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-secrets-mycluster-0-d6vfd ComposeResources Composed resource "irsa-role" is not yet ready
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0-5m9n4 ComposeResources Successfully composed resources
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0-5m9n4 ComposeResources Composed resource "irsa-role" is not yet ready
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0-5m9n4 ComposeResources Composed resource "irsa-policy" is not yet ready
(x5) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-secrets-mycluster-0-d6vfd ComposeResources Successfully composed resources
security cert-manager-leader-election cert-manager-controller LeaderElection cert-manager-bc8c566cf-xmxb4-external-cert-manager-controller became leader
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-webhook-7ffdd9664d-ng9hz Pulling Pulling image "quay.io/jetstack/cert-manager-webhook:v1.12.4"
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-controller-566f7bc8c-88xlj Pulling Pulling image "ghcr.io/kyverno/cleanup-controller:v1.10.3"
security kubelet ip-10-0-3-24.eu-west-3.compute.internal cert-manager-bc8c566cf-xmxb4 Pulled Successfully pulled image "quay.io/jetstack/cert-manager-controller:v1.12.4" in 2.24782631s (2.247842434s including waiting)
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-cainjector-57b9db9cd-4mpm8 Created Created container cert-manager-cainjector
security kubelet ip-10-0-3-24.eu-west-3.compute.internal cert-manager-bc8c566cf-xmxb4 Started Started container cert-manager-controller
security kubelet ip-10-0-3-24.eu-west-3.compute.internal external-secrets-webhook-589765875-6s69h Pulled Successfully pulled image "ghcr.io/external-secrets/external-secrets:v0.9.4" in 3.361529743s (3.361567238s including waiting)
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-cainjector-57b9db9cd-4mpm8 Started Started container cert-manager-cainjector
security kubelet ip-10-0-2-105.eu-west-3.compute.internal external-secrets-cert-controller-8665fc68-fs2rh Created Created container cert-controller
security kubelet ip-10-0-2-105.eu-west-3.compute.internal external-secrets-cert-controller-8665fc68-fs2rh Pulled Successfully pulled image "ghcr.io/external-secrets/external-secrets:v0.9.4" in 3.716076079s (3.716088813s including waiting)
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-cainjector-57b9db9cd-4mpm8 Pulled Successfully pulled image "quay.io/jetstack/cert-manager-cainjector:v1.12.4" in 2.275776425s (2.275786241s including waiting)
security kubelet ip-10-0-2-105.eu-west-3.compute.internal external-secrets-6b85658cd8-s4d24 Created Created container external-secrets
security kubelet ip-10-0-2-105.eu-west-3.compute.internal external-secrets-6b85658cd8-s4d24 Pulled Successfully pulled image "ghcr.io/external-secrets/external-secrets:v0.9.4" in 3.238951461s (3.23899559s including waiting)
security kubelet ip-10-0-2-105.eu-west-3.compute.internal external-secrets-6b85658cd8-s4d24 Started Started container external-secrets
security kubelet ip-10-0-2-105.eu-west-3.compute.internal external-secrets-cert-controller-8665fc68-fs2rh Started Started container cert-controller
security cert-manager-cainjector-57b9db9cd-4mpm8_5612c6ed-59e7-4e94-9f57-5c9630f2dc52 cert-manager-cainjector-leader-election LeaderElection cert-manager-cainjector-57b9db9cd-4mpm8_5612c6ed-59e7-4e94-9f57-5c9630f2dc52 became leader
security kubelet ip-10-0-3-24.eu-west-3.compute.internal external-secrets-webhook-589765875-6s69h Started Started container webhook
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-admission-controller-75748bcb9c-jdsbk Pulling Pulling image "ghcr.io/kyverno/kyvernopre:v1.10.3"
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-controller-566f7bc8c-88xlj Created Created container controller
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-webhook-7ffdd9664d-ng9hz Created Created container cert-manager-webhook
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-webhook-7ffdd9664d-ng9hz Pulled Successfully pulled image "quay.io/jetstack/cert-manager-webhook:v1.12.4" in 1.851266239s (1.851284714s including waiting)
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-controller-566f7bc8c-88xlj Pulled Successfully pulled image "ghcr.io/kyverno/cleanup-controller:v1.10.3" in 2.909918374s (2.909927713s including waiting)
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-webhook-7ffdd9664d-ng9hz Started Started container cert-manager-webhook
(x8) default cluster-secret-store clustersecretstore ValidationFailed WebIdentityErr: failed to retrieve credentials caused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity status code: 403,
(x7) observability external-secrets kube-prometheus-stack-grafana-admin UpdateFailed the desired SecretStore clustersecretstore is not ready
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-reports-controller-6f96648477-jmfwg Created Created container controller
observability external-secrets kube-prometheus-stack-grafana-admin UpdateFailed UnrecognizedClientException: The security token included in the request is invalid. status code: 400,
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-background-controller-67f4b647d7-26gr7 Pulled Successfully pulled image "ghcr.io/kyverno/background-controller:v1.10.3" in 3.447695584s (3.447708952s including waiting)
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-background-controller-67f4b647d7-26gr7 Created Created container controller
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-background-controller-67f4b647d7-26gr7 Started Started container controller
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-controller-566f7bc8c-88xlj Started Started container controller
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-reports-controller-6f96648477-jmfwg Pulled Successfully pulled image "ghcr.io/kyverno/reports-controller:v1.10.3" in 3.334002432s (3.334013076s including waiting)
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-reports-controller-6f96648477-jmfwg Started Started container controller
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-admission-controller-75748bcb9c-jdsbk Started Started container kyverno-pre
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-admission-controller-75748bcb9c-jdsbk Created Created container kyverno-pre
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-admission-controller-75748bcb9c-jdsbk Pulled Successfully pulled image "ghcr.io/kyverno/kyvernopre:v1.10.3" in 2.580256381s (2.580267322s including waiting)
(x9) default validating-webhook-configuration externalsecret-validate UpdateFailed ca cert not yet ready
(x5) default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-external-secrets-mycluster-0-d6vfd-7dt2m CannotResolveResourceReferences cannot resolve references: mg.Spec.ForProvider.PolicyArn: referenced field was empty (referenced resource may not yet be ready)
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-admission-controller-75748bcb9c-jdsbk Pulling Pulling image "ghcr.io/kyverno/kyverno:v1.10.3"
(x9) default validating-webhook-configuration secretstore-validate UpdateFailed ca cert not yet ready
(x5) default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-cert-manager-mycluster-0-5m9n4-zhbfz CannotResolveResourceReferences cannot resolve references: mg.Spec.ForProvider.PolicyArn: referenced field was empty (referenced resource may not yet be ready)
flux-system kustomize-controller flux-config ReconciliationSucceeded Reconciliation finished in 350.052872ms, next run in 4m0s
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-admission-controller-75748bcb9c-jdsbk Pulled Successfully pulled image "ghcr.io/kyverno/kyverno:v1.10.3" in 3.036411204s (3.036440784s including waiting)
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-admission-controller-75748bcb9c-jdsbk Created Created container kyverno
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-admission-controller-75748bcb9c-jdsbk Started Started container kyverno
security default-scheduler cert-manager-startupapicheck-jpttb Scheduled Successfully assigned security/cert-manager-startupapicheck-jpttb to ip-10-0-2-105.eu-west-3.compute.internal
security job-controller cert-manager-startupapicheck SuccessfulCreate Created pod: cert-manager-startupapicheck-jpttb
(x5) observability external-secrets kube-prometheus-stack-grafana-admin UpdateFailed AccessDeniedException: User: arn:aws:sts::396740644681:assumed-role/xplane-external-secrets-mycluster-0/external-secrets-provider-aws is not authorized to perform: secretsmanager:GetSecretValue on resource: observability/kube-prometheus-stack/grafana-admin because no identity-based policy allows the secretsmanager:GetSecretValue action status code: 400,
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-startupapicheck-jpttb Pulling Pulling image "quay.io/jetstack/cert-manager-ctl:v1.12.4"
flux-system kustomize-controller crossplane-providers ReconciliationSucceeded Reconciliation finished in 212.445977ms, next run in 2m0s
infrastructure cert-manager-certificates-trigger platform-tls Issuing Issuing certificate as Secret does not exist
(x11) security offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-secrets-mycluster-0 BindCompositeResource Composite resource is not yet ready
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-startupapicheck-jpttb Created Created container cert-manager-startupapicheck
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-startupapicheck-jpttb Pulled Successfully pulled image "quay.io/jetstack/cert-manager-ctl:v1.12.4" in 2.010299761s (2.010310344s including waiting)
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-admission-controller-75748bcb9c-jdsbk Unhealthy Startup probe failed: Get "https://10.0.5.128:9443/health/liveness": remote error: tls: internal error
infrastructure cert-manager-gateway-shim platform CreateCertificate Successfully created Certificate "platform-tls"
security kubelet ip-10-0-2-105.eu-west-3.compute.internal cert-manager-startupapicheck-jpttb Started Started container cert-manager-startupapicheck
(x10) security offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0 BindCompositeResource Composite resource is not yet ready
infrastructure cert-manager-certificaterequests-approver platform-tls-zdc77 cert-manager.io Certificate request has been approved by cert-manager.io
infrastructure cert-manager-certificates-request-manager platform-tls Requested Created new CertificateRequest resource "platform-tls-zdc77"
infrastructure cert-manager-certificaterequests-issuer-vault platform-tls-zdc77 WaitingForApproval Not signing CertificateRequest until it is Approved
infrastructure cert-manager-certificaterequests-issuer-ca platform-tls-zdc77 WaitingForApproval Not signing CertificateRequest until it is Approved
infrastructure cert-manager-certificaterequests-issuer-venafi platform-tls-zdc77 WaitingForApproval Not signing CertificateRequest until it is Approved
infrastructure cert-manager-certificaterequests-issuer-selfsigned platform-tls-zdc77 WaitingForApproval Not signing CertificateRequest until it is Approved
infrastructure cert-manager-certificaterequests-issuer-acme platform-tls-zdc77 WaitingForApproval Not signing CertificateRequest until it is Approved
infrastructure cert-manager-certificaterequests-issuer-acme platform-tls-zdc77 IssuerNotReady Referenced issuer does not have a Ready status condition
infrastructure cert-manager-certificates-key-manager platform-tls Generated Stored new private key in temporary Secret resource "platform-tls-5d7l2"
default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-external-secrets-mycluster-0-d6vfd-7dt2m CreatedExternalResource Successfully requested creation of external resource
default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-cert-manager-mycluster-0-5m9n4-zhbfz CreatedExternalResource Successfully requested creation of external resource
security helm-controller kyverno-policies info Helm install has started
security helm-controller kyverno info Helm install succeeded
security job-controller cert-manager-startupapicheck Completed Job completed
security helm-controller cert-manager info Helm install succeeded
default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-external-secrets-mycluster-0-d6vfd-7dt2m CannotUpdateManagedResource Operation cannot be fulfilled on rolepolicyattachments.iam.aws.upbound.io "xplane-external-secrets-mycluster-0-d6vfd-7dt2m": the object has been modified; please apply your changes to the latest version and try again
default managed/iam.aws.upbound.io/v1beta1, kind=rolepolicyattachment xplane-cert-manager-mycluster-0-5m9n4-zhbfz CannotUpdateManagedResource Operation cannot be fulfilled on rolepolicyattachments.iam.aws.upbound.io "xplane-cert-manager-mycluster-0-5m9n4-zhbfz": the object has been modified; please apply your changes to the latest version and try again
infrastructure cert-manager-orders platform-tls-zdc77-3297273686 Created Created Challenge resource "platform-tls-zdc77-3297273686-1589334387" for domain "cloud.ogenki.io"
infrastructure cert-manager-challenges platform-tls-zdc77-3297273686-1589334387 Started Challenge scheduled for processing
infrastructure cert-manager-challenges platform-tls-zdc77-3297273686-1589334387 PresentError Error presenting challenge: failed to determine Route 53 hosted zone ID: AccessDenied: User: arn:aws:sts::396740644681:assumed-role/xplane-cert-manager-mycluster-0/1694332762438111595 is not authorized to perform: route53:ListHostedZonesByName because no identity-based policy allows the route53:ListHostedZonesByName action
infrastructure cert-manager-challenges platform-tls-zdc77-3297273686-1589334387 PresentError Error presenting challenge: failed to determine Route 53 hosted zone ID: AccessDenied: User: arn:aws:sts::396740644681:assumed-role/xplane-cert-manager-mycluster-0/1694332762865741699 is not authorized to perform: route53:ListHostedZonesByName because no identity-based policy allows the route53:ListHostedZonesByName action
security helm-controller kyverno-policies info Helm install succeeded
kube-system kyverno-scan cilium-envoy PolicyViolation policy restrict-apparmor-profiles/autogen-app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule autogen-app-armor failed at path /spec/template/metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
default kyverno-scan disallow-privileged-containers PolicyViolation DaemonSet kube-system/kube-proxy: [autogen-privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule autogen-privileged-containers failed at path /spec/template/spec/containers/0/securityContext/privileged/
default kyverno-scan disallow-capabilities PolicyViolation DaemonSet kube-system/cilium: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-scan disallow-capabilities PolicyViolation DaemonSet kube-system/cilium-envoy: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
kube-system kyverno-scan cilium-envoy PolicyViolation policy disallow-host-path/autogen-host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
kube-system kyverno-scan cilium-envoy PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
kube-system kyverno-scan cilium-envoy PolicyViolation policy disallow-selinux/autogen-selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule autogen-selinux-type failed at path /spec/template/spec/containers/0/securityContext/seLinuxOptions/type/
kube-system kyverno-scan cilium-envoy PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
kube-system kyverno-scan cilium-envoy PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-scan disallow-host-namespaces PolicyViolation DaemonSet kube-system/cilium: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default kyverno-scan disallow-host-namespaces PolicyViolation DaemonSet kube-system/cilium-envoy: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default kyverno-scan disallow-host-namespaces PolicyViolation DaemonSet kube-system/kube-proxy: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default kyverno-scan disallow-host-namespaces PolicyViolation DaemonSet observability/kube-prometheus-stack-prometheus-node-exporter: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default kyverno-scan disallow-host-path PolicyViolation DaemonSet kube-system/cilium: [autogen-host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/1/hostPath/
kube-system kyverno-scan cilium PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
kube-system kyverno-scan cilium PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
kube-system kyverno-scan cilium PolicyViolation policy disallow-host-path/autogen-host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/1/hostPath/
kube-system kyverno-scan cilium PolicyViolation policy restrict-apparmor-profiles/autogen-app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule autogen-app-armor failed at path /spec/template/metadata/annotations/container.apparmor.security.beta.kubernetes.io/clean-cilium-state/
default kyverno-scan disallow-host-path PolicyViolation DaemonSet kube-system/cilium-envoy: [autogen-host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
default kyverno-scan disallow-host-path PolicyViolation DaemonSet kube-system/kube-proxy: [autogen-host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
default kyverno-scan disallow-host-path PolicyViolation DaemonSet observability/kube-prometheus-stack-prometheus-node-exporter: [autogen-host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
kube-system kyverno-scan cilium PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
kube-system kyverno-scan cilium PolicyViolation policy disallow-privileged-containers/autogen-privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule autogen-privileged-containers failed at path /spec/template/spec/initContainers/3/securityContext/privileged/
kube-system kyverno-scan cilium PolicyViolation policy disallow-selinux/autogen-selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule autogen-selinux-type failed at path /spec/template/spec/initContainers/1/securityContext/seLinuxOptions/type/
kube-system kyverno-scan kube-proxy PolicyViolation policy disallow-host-path/autogen-host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
kube-system kyverno-scan kube-proxy PolicyViolation policy disallow-privileged-containers/autogen-privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule autogen-privileged-containers failed at path /spec/template/spec/containers/0/securityContext/privileged/
kube-system kyverno-scan kube-proxy PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default kyverno-scan disallow-host-ports PolicyViolation DaemonSet kube-system/cilium-envoy: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default kyverno-scan disallow-host-ports PolicyViolation DaemonSet kube-system/cilium: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default kyverno-scan disallow-host-ports PolicyViolation DaemonSet observability/kube-prometheus-stack-prometheus-node-exporter: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default kyverno-scan restrict-apparmor-profiles PolicyViolation DaemonSet kube-system/cilium: [autogen-app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule autogen-app-armor failed at path /spec/template/metadata/annotations/container.apparmor.security.beta.kubernetes.io/clean-cilium-state/
default kyverno-scan restrict-apparmor-profiles PolicyViolation DaemonSet kube-system/cilium-envoy: [autogen-app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule autogen-app-armor failed at path /spec/template/metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
default kyverno-scan disallow-selinux PolicyViolation DaemonSet kube-system/cilium: [autogen-selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule autogen-selinux-type failed at path /spec/template/spec/initContainers/1/securityContext/seLinuxOptions/type/
default kyverno-scan disallow-selinux PolicyViolation DaemonSet kube-system/cilium-envoy: [autogen-selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule autogen-selinux-type failed at path /spec/template/spec/containers/0/securityContext/seLinuxOptions/type/
observability kyverno-scan kube-prometheus-stack-prometheus-node-exporter PolicyViolation policy disallow-host-path/autogen-host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
observability kyverno-scan kube-prometheus-stack-prometheus-node-exporter PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default kyverno-scan disallow-privileged-containers PolicyViolation DaemonSet kube-system/cilium: [autogen-privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule autogen-privileged-containers failed at path /spec/template/spec/initContainers/3/securityContext/privileged/
(x2) observability external-secrets kube-prometheus-stack-grafana-admin Updated Updated Secret
observability kyverno-scan kube-prometheus-stack-prometheus-node-exporter PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
kube-system kyverno-scan cilium-operator PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default kyverno-scan disallow-host-namespaces PolicyViolation Deployment kube-system/cilium-operator: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default kyverno-scan disallow-privileged-containers PolicyViolation Pod kube-system/kube-proxy-vjrhj: [privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/containers/0/securityContext/privileged/
kube-system kyverno-scan kube-proxy-vjrhj PolicyViolation policy disallow-privileged-containers/privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/containers/0/securityContext/privileged/
default kyverno-scan disallow-host-namespaces PolicyViolation Pod observability/kube-prometheus-stack-prometheus-node-exporter-km8ht: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
observability kyverno-scan kube-prometheus-stack-prometheus-node-exporter-km8ht PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
observability kyverno-scan kube-prometheus-stack-prometheus-node-exporter-km8ht PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
observability kyverno-scan kube-prometheus-stack-prometheus-node-exporter-km8ht PolicyViolation policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system kyverno-scan cilium-operator-779bf49976-qznq9 PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system kyverno-scan kube-proxy-vjrhj PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default kyverno-scan disallow-host-ports PolicyViolation Pod observability/kube-prometheus-stack-prometheus-node-exporter-km8ht: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default kyverno-scan disallow-host-namespaces PolicyViolation Pod kube-system/kube-proxy-vjrhj: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default kyverno-scan disallow-host-namespaces PolicyViolation Pod kube-system/cilium-operator-779bf49976-qznq9: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system kyverno-scan kube-proxy-vjrhj PolicyViolation policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
default kyverno-scan disallow-host-path PolicyViolation Pod kube-system/kube-proxy-vjrhj: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
default kyverno-scan disallow-host-path PolicyViolation Pod observability/kube-prometheus-stack-prometheus-node-exporter-km8ht: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system kyverno-scan cilium-operator-779bf49976-lgq5h PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
observability kyverno-scan kube-prometheus-stack-prometheus-node-exporter-bq2dc PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default kyverno-scan disallow-host-path PolicyViolation Pod observability/kube-prometheus-stack-prometheus-node-exporter-bq2dc: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
default kyverno-scan disallow-host-ports PolicyViolation Pod observability/kube-prometheus-stack-prometheus-node-exporter-bq2dc: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
observability kyverno-scan kube-prometheus-stack-prometheus-node-exporter-bq2dc PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
observability kyverno-scan kube-prometheus-stack-prometheus-node-exporter-bq2dc PolicyViolation policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system kyverno-scan cilium-g94mr PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default kyverno-scan disallow-privileged-containers PolicyViolation Pod kube-system/cilium-zpjnm: [privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/initContainers/3/securityContext/privileged/
default kyverno-scan disallow-capabilities PolicyViolation Pod kube-system/cilium-g94mr: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-scan disallow-capabilities PolicyViolation Pod kube-system/cilium-zpjnm: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-scan disallow-host-path PolicyViolation Pod kube-system/cilium-g94mr: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/1/hostPath/
default kyverno-scan disallow-host-path PolicyViolation Pod kube-system/cilium-zpjnm: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/1/hostPath/
default kyverno-scan restrict-apparmor-profiles PolicyViolation Pod kube-system/cilium-g94mr: [app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-agent/
default kyverno-scan restrict-apparmor-profiles PolicyViolation Pod kube-system/cilium-zpjnm: [app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites/
kube-system kyverno-scan cilium-g94mr PolicyViolation policy disallow-privileged-containers/privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/initContainers/3/securityContext/privileged/
kube-system kyverno-scan cilium-g94mr PolicyViolation policy disallow-selinux/selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/initContainers/1/securityContext/seLinuxOptions/type/
default kyverno-scan disallow-privileged-containers PolicyViolation Pod kube-system/cilium-g94mr: [privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/initContainers/3/securityContext/privileged/
kube-system kyverno-scan cilium-g94mr PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
kube-system kyverno-scan cilium-g94mr PolicyViolation policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/1/hostPath/
kube-system kyverno-scan cilium-g94mr PolicyViolation policy restrict-apparmor-profiles/app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-agent/
default kyverno-scan disallow-selinux PolicyViolation Pod kube-system/cilium-zpjnm: [selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/initContainers/1/securityContext/seLinuxOptions/type/
default kyverno-scan disallow-selinux PolicyViolation Pod kube-system/cilium-g94mr: [selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/initContainers/1/securityContext/seLinuxOptions/type/
default kyverno-scan disallow-host-ports PolicyViolation Pod kube-system/cilium-zpjnm: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default kyverno-scan disallow-host-ports PolicyViolation Pod kube-system/cilium-g94mr: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
kube-system kyverno-scan cilium-g94mr PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
kube-system kyverno-scan cilium-zpjnm PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system kyverno-scan cilium-zpjnm PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
kube-system kyverno-scan cilium-zpjnm PolicyViolation policy disallow-privileged-containers/privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/initContainers/3/securityContext/privileged/
kube-system kyverno-scan cilium-zpjnm PolicyViolation policy disallow-selinux/selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/initContainers/1/securityContext/seLinuxOptions/type/
kube-system kyverno-scan cilium-zpjnm PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
kube-system kyverno-scan cilium-zpjnm PolicyViolation policy restrict-apparmor-profiles/app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites/
kube-system kyverno-scan cilium-zpjnm PolicyViolation policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/1/hostPath/
default kyverno-scan disallow-capabilities PolicyViolation Pod kube-system/cilium-envoy-pzhcc: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-scan restrict-apparmor-profiles PolicyViolation Pod kube-system/cilium-envoy-pzhcc: [app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
default kyverno-scan disallow-privileged-containers PolicyViolation Pod kube-system/kube-proxy-7fcmm: [privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/containers/0/securityContext/privileged/
kube-system kyverno-scan cilium-envoy-pzhcc PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-scan disallow-host-ports PolicyViolation Pod kube-system/cilium-envoy-pzhcc: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default kyverno-scan disallow-selinux PolicyViolation Pod kube-system/cilium-envoy-pzhcc: [selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/containers/0/securityContext/seLinuxOptions/type/
kube-system kyverno-scan cilium-envoy-pzhcc PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system kyverno-scan kube-proxy-7fcmm PolicyViolation policy disallow-privileged-containers/privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/containers/0/securityContext/privileged/
kube-system kyverno-scan kube-proxy-7fcmm PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system kyverno-scan kube-proxy-7fcmm PolicyViolation policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system kyverno-scan cilium-envoy-pzhcc PolicyViolation policy disallow-selinux/selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/containers/0/securityContext/seLinuxOptions/type/
kube-system kyverno-scan cilium-envoy-pzhcc PolicyViolation policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system kyverno-scan cilium-envoy-pzhcc PolicyViolation policy restrict-apparmor-profiles/app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
kube-system kyverno-scan cilium-envoy-pzhcc PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
kube-system kyverno-scan cilium-envoy-2fzwf PolicyViolation policy restrict-apparmor-profiles/app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
kube-system kyverno-scan cilium-envoy-2fzwf PolicyViolation policy disallow-selinux/selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/containers/0/securityContext/seLinuxOptions/type/
default kyverno-scan restrict-apparmor-profiles PolicyViolation Pod kube-system/cilium-envoy-2fzwf: [app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
default kyverno-scan disallow-host-ports PolicyViolation Pod kube-system/cilium-envoy-2fzwf: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
kube-system kyverno-scan cilium-envoy-2fzwf PolicyViolation policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
default kyverno-scan disallow-selinux PolicyViolation Pod kube-system/cilium-envoy-2fzwf: [selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/containers/0/securityContext/seLinuxOptions/type/
kube-system kyverno-scan cilium-envoy-2fzwf PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
(x3) default kyverno-scan disallow-host-path PolicyViolation (combined from similar events): Pod kube-system/cilium-envoy-2fzwf: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system kyverno-scan cilium-envoy-2fzwf PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system kyverno-scan cilium-envoy-2fzwf PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-scan disallow-capabilities PolicyViolation Pod kube-system/cilium-envoy-2fzwf: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
(x2) default kyverno-scan disallow-host-namespaces PolicyViolation ReplicaSet kube-system/cilium-operator-779bf49976: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
(x2) kube-system kyverno-scan cilium-operator-779bf49976 PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
flux-system kustomize-controller crossplane-configuration ReconciliationSucceeded Reconciliation finished in 257.640705ms, next run in 2m0s
default kyverno-admission disallow-host-ports PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
default kyverno-admission disallow-capabilities PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
default kyverno-admission restrict-apparmor-profiles PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
observability job-controller kube-prometheus-stack-admission-patch SuccessfulCreate Created pod: kube-prometheus-stack-admission-patch-4xf2q
default kyverno-admission disallow-host-process PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
default kyverno-admission disallow-host-path PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
default kyverno-admission disallow-selinux PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
default kyverno-admission restrict-seccomp PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
default kyverno-admission disallow-proc-mount PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
default kyverno-admission disallow-host-namespaces PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
observability default-scheduler kube-prometheus-stack-admission-patch-4xf2q Scheduled Successfully assigned observability/kube-prometheus-stack-admission-patch-4xf2q to ip-10-0-2-105.eu-west-3.compute.internal
default kyverno-admission disallow-privileged-containers PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
default kyverno-admission restrict-sysctls PolicyApplied Job observability/kube-prometheus-stack-admission-patch: pass
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-admission-patch-4xf2q Pulling Pulling image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6"
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-admission-patch-4xf2q Created Created container patch
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-admission-patch-4xf2q Pulled Successfully pulled image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6" in 1.057504559s (1.057526539s including waiting)
observability kubelet ip-10-0-2-105.eu-west-3.compute.internal kube-prometheus-stack-admission-patch-4xf2q Started Started container patch
(x2) default kyverno-admission disallow-selinux PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2) default kyverno-admission restrict-seccomp PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2) default kyverno-admission restrict-apparmor-profiles PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2) default kyverno-admission disallow-privileged-containers PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
observability job-controller kube-prometheus-stack-admission-patch Completed Job completed
(x2) default kyverno-admission disallow-host-path PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2) default kyverno-admission disallow-capabilities PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2) default kyverno-admission disallow-proc-mount PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2) default kyverno-admission disallow-host-namespaces PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2) default kyverno-admission restrict-sysctls PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2) default kyverno-admission disallow-host-process PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2) default kyverno-admission disallow-host-ports PolicyApplied Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
observability helm-controller kube-prometheus-stack info Helm install succeeded
flux-system kustomize-controller observability Progressing Health check passed in 5m10.067257319s
flux-system kustomize-controller observability ReconciliationSucceeded Reconciliation finished in 5m10.64161679s, next run in 3m0s
security job-controller kyverno-cleanup-cluster-admission-reports-28238880 SuccessfulCreate Created pod: kyverno-cleanup-cluster-admission-reports-28238880-njl88
security job-controller kyverno-cleanup-admission-reports-28238880 SuccessfulCreate Created pod: kyverno-cleanup-admission-reports-28238880-48knv
security default-scheduler kyverno-cleanup-admission-reports-28238880-48knv Scheduled Successfully assigned security/kyverno-cleanup-admission-reports-28238880-48knv to ip-10-0-2-105.eu-west-3.compute.internal
security cronjob-controller kyverno-cleanup-admission-reports SuccessfulCreate Created job kyverno-cleanup-admission-reports-28238880
security default-scheduler kyverno-cleanup-cluster-admission-reports-28238880-njl88 Scheduled Successfully assigned security/kyverno-cleanup-cluster-admission-reports-28238880-njl88 to ip-10-0-3-24.eu-west-3.compute.internal
security cronjob-controller kyverno-cleanup-cluster-admission-reports SuccessfulCreate Created job kyverno-cleanup-cluster-admission-reports-28238880
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-cleanup-admission-reports-28238880-48knv Pulling Pulling image "bitnami/kubectl:1.26.4"
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-cluster-admission-reports-28238880-njl88 Pulling Pulling image "bitnami/kubectl:1.26.4"
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-cleanup-admission-reports-28238880-48knv Created Created container cleanup
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-cluster-admission-reports-28238880-njl88 Started Started container cleanup
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-cluster-admission-reports-28238880-njl88 Created Created container cleanup
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-cleanup-admission-reports-28238880-48knv Started Started container cleanup
security kubelet ip-10-0-2-105.eu-west-3.compute.internal kyverno-cleanup-admission-reports-28238880-48knv Pulled Successfully pulled image "bitnami/kubectl:1.26.4" in 4.974566608s (4.974589188s including waiting)
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-cluster-admission-reports-28238880-njl88 Pulled Successfully pulled image "bitnami/kubectl:1.26.4" in 5.644036261s (5.644050703s including waiting)
security offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0 BindCompositeResource Successfully bound composite resource
(x11) security offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0 ConfigureCompositeResource Successfully applied composite resource
(x12) security offered/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-secrets-mycluster-0 ConfigureCompositeResource Successfully applied composite resource
flux-system kustomize-controller infrastructure ReconciliationSucceeded Reconciliation finished in 323.634019ms, next run in 4m0s
security cronjob-controller kyverno-cleanup-admission-reports SawCompletedJob Saw completed job: kyverno-cleanup-admission-reports-28238880, status: Complete
security job-controller kyverno-cleanup-admission-reports-28238880 Completed Job completed
security job-controller kyverno-cleanup-cluster-admission-reports-28238880 Completed Job completed
security cronjob-controller kyverno-cleanup-cluster-admission-reports SawCompletedJob Saw completed job: kyverno-cleanup-cluster-admission-reports-28238880, status: Complete
(x14) flux-system kustomize-controller apps DependencyNotReady Dependencies do not meet ready condition, retrying in 30s
(x14) security kubelet ip-10-0-3-24.eu-west-3.compute.internal external-secrets-webhook-589765875-6s69h Unhealthy Readiness probe failed: Get "http://10.0.4.80:8081/readyz": dial tcp 10.0.4.80:8081: connect: connection refused
(x15) security kubelet ip-10-0-2-105.eu-west-3.compute.internal external-secrets-cert-controller-8665fc68-fs2rh Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500
security helm-controller external-secrets info Helm install succeeded
flux-system kustomize-controller security ReconciliationSucceeded Reconciliation finished in 1m45.705803719s, next run in 4m0s
flux-system kustomize-controller security Progressing Health check passed in 1m45.048630304s
echo cert-manager-certificaterequests-issuer-vault echo-tls-74w58 WaitingForApproval Not signing CertificateRequest until it is Approved
echo cert-manager-certificaterequests-issuer-venafi echo-tls-74w58 WaitingForApproval Not signing CertificateRequest until it is Approved
echo helm-controller echo-2 info HelmChart 'flux-system/echo-echo-2' is not ready
echo helm-controller echo-1 info HelmChart 'flux-system/echo-echo-1' is not ready
echo cert-manager-certificaterequests-issuer-selfsigned echo-tls-74w58 WaitingForApproval Not signing CertificateRequest until it is Approved
echo cert-manager-certificaterequests-issuer-acme echo-tls-74w58 WaitingForApproval Not signing CertificateRequest until it is Approved
echo cert-manager-certificaterequests-issuer-ca echo-tls-74w58 WaitingForApproval Not signing CertificateRequest until it is Approved
flux-system kustomize-controller apps Progressing Gateway/echo/echo created Gateway/echo/echo-tls created HTTPRoute/echo/echo-1 created HTTPRoute/echo/split-echo created HTTPRoute/echo/tls-echo-1 created HelmRelease/echo/echo-1 created HelmRelease/echo/echo-2 created HelmRepository/flux-system/echo created
echo cert-manager-certificates-request-manager echo-tls Requested Created new CertificateRequest resource "echo-tls-74w58"
echo cert-manager-certificates-key-manager echo-tls Generated Stored new private key in temporary Secret resource "echo-tls-856ll"
echo cert-manager-certificates-trigger echo-tls Issuing Issuing certificate as Secret does not exist
echo cert-manager-gateway-shim echo-tls CreateCertificate Successfully created Certificate "echo-tls"
flux-system source-controller echo-echo-2 NoSourceArtifact no artifact available for HelmRepository source 'echo'
echo cert-manager-certificaterequests-issuer-acme echo-tls-74w58 OrderCreated Created Order resource echo/echo-tls-74w58-3333634511
echo cert-manager-certificaterequests-approver echo-tls-74w58 cert-manager.io Certificate request has been approved by cert-manager.io
flux-system source-controller echo NewArtifact stored fetched index of size 7.648kB from 'https://ealenn.github.io/charts'
flux-system source-controller echo-echo-1 NoSourceArtifact no artifact available for HelmRepository source 'echo'
echo cert-manager-orders echo-tls-74w58-3333634511 Created Created Challenge resource "echo-tls-74w58-3333634511-2939583069" for domain "tls-echo-1.cloud.ogenki.io"
flux-system source-controller echo-echo-2 ChartPullSucceeded pulled 'echo-server' chart with version '0.5.0'
flux-system source-controller echo-echo-1 ChartPullSucceeded pulled 'echo-server' chart with version '0.5.0'
echo helm-controller echo-1 info Helm install has started
echo cert-manager-challenges echo-tls-74w58-3333634511-2939583069 Started Challenge scheduled for processing
echo helm-controller echo-2 info Helm install has started
default kyverno-admission disallow-host-namespaces PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission disallow-host-path PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission disallow-host-process PolicyApplied Deployment echo/echo-1-echo-server: pass
default kyverno-admission disallow-host-process PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission disallow-host-process PolicyApplied Deployment echo/echo-2-echo-server: pass
echo deployment-controller echo-2-echo-server ScalingReplicaSet Scaled up replica set echo-2-echo-server-b4cfd8458 to 2
echo replicaset-controller echo-2-echo-server-b4cfd8458 SuccessfulCreate Created pod: echo-2-echo-server-b4cfd8458-cn9fv
echo replicaset-controller echo-2-echo-server-b4cfd8458 SuccessfulCreate Created pod: echo-2-echo-server-b4cfd8458-zwz77
echo default-scheduler echo-1-echo-server-fd88497d-cbvnz Scheduled Successfully assigned echo/echo-1-echo-server-fd88497d-cbvnz to ip-10-0-2-105.eu-west-3.compute.internal
default kyverno-admission restrict-sysctls PolicyApplied Deployment echo/echo-1-echo-server: pass
default kyverno-admission restrict-sysctls PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission restrict-sysctls PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission disallow-host-ports PolicyApplied Deployment echo/echo-1-echo-server: pass
default kyverno-admission disallow-privileged-containers PolicyApplied Deployment echo/echo-2-echo-server: pass
default kyverno-admission disallow-privileged-containers PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission disallow-privileged-containers PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission disallow-privileged-containers PolicyApplied Deployment echo/echo-1-echo-server: pass
default kyverno-admission restrict-sysctls PolicyApplied Deployment echo/echo-2-echo-server: pass
default kyverno-admission disallow-capabilities PolicyApplied Deployment echo/echo-2-echo-server: pass
default kyverno-admission disallow-host-ports PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission disallow-host-ports PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission disallow-proc-mount PolicyApplied Deployment echo/echo-2-echo-server: pass
default kyverno-admission disallow-proc-mount PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission disallow-proc-mount PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission disallow-proc-mount PolicyApplied Deployment echo/echo-1-echo-server: pass
default kyverno-admission disallow-capabilities PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission restrict-seccomp PolicyApplied Deployment echo/echo-1-echo-server: pass
default kyverno-admission disallow-host-ports PolicyApplied Deployment echo/echo-2-echo-server: pass
echo replicaset-controller echo-1-echo-server-fd88497d SuccessfulCreate Created pod: echo-1-echo-server-fd88497d-cbvnz
default kyverno-admission disallow-selinux PolicyApplied Deployment echo/echo-2-echo-server: pass
default kyverno-admission disallow-selinux PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission restrict-seccomp PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission disallow-selinux PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission disallow-selinux PolicyApplied Deployment echo/echo-1-echo-server: pass
default kyverno-admission restrict-seccomp PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission restrict-seccomp PolicyApplied Deployment echo/echo-2-echo-server: pass
default kyverno-admission disallow-host-path PolicyApplied Deployment echo/echo-1-echo-server: pass
default kyverno-admission disallow-host-process PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission restrict-apparmor-profiles PolicyApplied Deployment echo/echo-2-echo-server: pass
echo default-scheduler echo-2-echo-server-b4cfd8458-zwz77 Scheduled Successfully assigned echo/echo-2-echo-server-b4cfd8458-zwz77 to ip-10-0-2-105.eu-west-3.compute.internal
default kyverno-admission disallow-host-path PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission disallow-host-path PolicyApplied Deployment echo/echo-2-echo-server: pass
default kyverno-admission disallow-capabilities PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission disallow-capabilities PolicyApplied Deployment echo/echo-1-echo-server: pass
echo default-scheduler echo-2-echo-server-b4cfd8458-cn9fv Scheduled Successfully assigned echo/echo-2-echo-server-b4cfd8458-cn9fv to ip-10-0-3-24.eu-west-3.compute.internal
default kyverno-admission disallow-host-namespaces PolicyApplied Deployment echo/echo-1-echo-server: pass
echo default-scheduler echo-1-echo-server-fd88497d-xkvng Scheduled Successfully assigned echo/echo-1-echo-server-fd88497d-xkvng to ip-10-0-3-24.eu-west-3.compute.internal
default kyverno-admission disallow-host-namespaces PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission disallow-host-namespaces PolicyApplied Deployment echo/echo-2-echo-server: pass
echo replicaset-controller echo-1-echo-server-fd88497d SuccessfulCreate Created pod: echo-1-echo-server-fd88497d-xkvng
echo deployment-controller echo-1-echo-server ScalingReplicaSet Scaled up replica set echo-1-echo-server-fd88497d to 2
default kyverno-admission restrict-apparmor-profiles PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default kyverno-admission restrict-apparmor-profiles PolicyApplied Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default kyverno-admission restrict-apparmor-profiles PolicyApplied Deployment echo/echo-1-echo-server: pass
default kyverno-admission restrict-apparmor-profiles PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default kyverno-admission disallow-capabilities PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default kyverno-admission disallow-capabilities PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission restrict-apparmor-profiles PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission disallow-host-namespaces PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default kyverno-admission disallow-host-namespaces PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission disallow-selinux PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission disallow-selinux PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default kyverno-admission disallow-host-path PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default kyverno-admission disallow-host-path PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission disallow-proc-mount PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission restrict-seccomp PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default kyverno-admission restrict-seccomp PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission disallow-proc-mount PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default kyverno-admission disallow-privileged-containers PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission disallow-privileged-containers PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default kyverno-admission disallow-host-ports PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default kyverno-admission disallow-host-ports PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission disallow-host-process PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission restrict-sysctls PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default kyverno-admission restrict-sysctls PolicyApplied Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default kyverno-admission disallow-host-process PolicyApplied Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
echo kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-2-echo-server-b4cfd8458-cn9fv Pulling Pulling image "ealen/echo-server:0.6.0"
echo kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-1-echo-server-fd88497d-cbvnz Pulling Pulling image "ealen/echo-server:0.6.0"
(x2) echo targetGroupBinding k8s-echo-ciliumga-9f53e27422 SuccessfullyReconciled Successfully reconciled
(x2) echo targetGroupBinding k8s-echo-ciliumga-ccd104ecd3 SuccessfullyReconciled Successfully reconciled
echo kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-2-echo-server-b4cfd8458-zwz77 Pulling Pulling image "ealen/echo-server:0.6.0"
(x2) echo service cilium-gateway-echo-tls SuccessfullyReconciled Successfully reconciled
(x2) echo service cilium-gateway-echo SuccessfullyReconciled Successfully reconciled
echo kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-1-echo-server-fd88497d-xkvng Pulling Pulling image "ealen/echo-server:0.6.0"
echo kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-2-echo-server-b4cfd8458-cn9fv Created Created container echo-server
echo kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-2-echo-server-b4cfd8458-cn9fv Pulled Successfully pulled image "ealen/echo-server:0.6.0" in 3.478879623s (3.478887048s including waiting)
echo kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-1-echo-server-fd88497d-xkvng Pulled Successfully pulled image "ealen/echo-server:0.6.0" in 3.577084668s (3.577097812s including waiting)
echo kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-1-echo-server-fd88497d-xkvng Created Created container echo-server
echo kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-2-echo-server-b4cfd8458-cn9fv Started Started container echo-server
echo kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-2-echo-server-b4cfd8458-zwz77 Created Created container echo-server
echo kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-1-echo-server-fd88497d-cbvnz Started Started container echo-server
echo kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-1-echo-server-fd88497d-cbvnz Pulled Successfully pulled image "ealen/echo-server:0.6.0" in 3.621449016s (3.621457258s including waiting)
echo kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-1-echo-server-fd88497d-cbvnz Created Created container echo-server
echo kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-2-echo-server-b4cfd8458-zwz77 Pulled Successfully pulled image "ealen/echo-server:0.6.0" in 3.727546182s (3.727564998s including waiting)
echo kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-1-echo-server-fd88497d-xkvng Started Started container echo-server
echo kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-2-echo-server-b4cfd8458-zwz77 Started Started container echo-server
echo helm-controller echo-1 info Helm install succeeded
echo helm-controller echo-2 info Helm install succeeded
(x16) default kyverno-admission mutate-cilium-echo-gateway PolicyApplied Service echo/cilium-gateway-echo is successfully mutated
flux-system kustomize-controller apps Progressing Health check passed in 15.050906862s
flux-system kustomize-controller apps ReconciliationSucceeded Reconciliation finished in 15.518982249s, next run in 4m0s
infrastructure cert-manager-certificates-issuing platform-tls Issuing The certificate has been successfully issued
infrastructure cert-manager-certificaterequests-issuer-acme platform-tls-zdc77 CertificateIssued Certificate fetched from issuer successfully
infrastructure cert-manager-orders platform-tls-zdc77-3297273686 Complete Order completed successfully
flux-system kustomize-controller crossplane-providers ReconciliationSucceeded Reconciliation finished in 168.021577ms, next run in 2m0s
echo cert-manager-challenges echo-tls-74w58-3333634511-2939583069 Presented Presented challenge using DNS-01 challenge mechanism
(x2) infrastructure cert-manager-challenges platform-tls-zdc77-3297273686-1589334387 Presented Presented challenge using DNS-01 challenge mechanism
flux-system kustomize-controller crossplane-configuration ReconciliationSucceeded Reconciliation finished in 238.057823ms, next run in 2m0s
flux-system kustomize-controller namespaces ReconciliationSucceeded Reconciliation finished in 131.429198ms, next run in 4m0s
echo cert-manager-challenges echo-tls-74w58-3333634511-2939583069 DomainVerified Domain "tls-echo-1.cloud.ogenki.io" verified with "DNS-01" validation
echo cert-manager-certificaterequests-issuer-acme echo-tls-74w58 CertificateIssued Certificate fetched from issuer successfully
echo cert-manager-orders echo-tls-74w58-3333634511 Complete Order completed successfully
echo cert-manager-certificates-issuing echo-tls Issuing The certificate has been successfully issued
(x16) default kyverno-admission mutate-cilium-echo-tls-gateway PolicyApplied Service echo/cilium-gateway-echo-tls is successfully mutated
(x2) infrastructure cert-manager-challenges platform-tls-zdc77-3297273686-1589334387 DomainVerified Domain "cloud.ogenki.io" verified with "DNS-01" validation
flux-system kustomize-controller crds ReconciliationSucceeded Reconciliation finished in 1.921357168s, next run in 4m0s
flux-system kustomize-controller crossplane-controller ReconciliationSucceeded Reconciliation finished in 237.133177ms, next run in 4m0s
flux-system kustomize-controller observability ReconciliationSucceeded Reconciliation finished in 287.424694ms, next run in 3m0s
flux-system kustomize-controller flux-config ReconciliationSucceeded Reconciliation finished in 319.9312ms, next run in 4m0s
flux-system kustomize-controller crossplane-providers ReconciliationSucceeded Reconciliation finished in 135.756131ms, next run in 2m0s
default kyverno-admission disallow-selinux PolicyApplied Deployment flux-system/helm-controller: pass
default kyverno-admission restrict-sysctls PolicyApplied Deployment flux-system/helm-controller: pass
default kyverno-admission restrict-apparmor-profiles PolicyApplied Deployment flux-system/helm-controller: pass
default kyverno-admission disallow-host-path PolicyApplied Deployment flux-system/helm-controller: pass
(x3) default kyverno-admission disallow-capabilities PolicyApplied (combined from similar events): Deployment flux-system/source-controller: pass
default kyverno-admission disallow-host-ports PolicyApplied Deployment flux-system/helm-controller: pass
flux-system kustomize-controller flux-system ReconciliationSucceeded Reconciliation finished in 2.094730559s, next run in 10m0s
flux-system kustomize-controller crossplane-configuration ReconciliationSucceeded Reconciliation finished in 239.392331ms, next run in 2m0s
flux-system kustomize-controller infrastructure ReconciliationSucceeded Reconciliation finished in 295.868744ms, next run in 4m0s
kube-system kustomize-controller crds-gateway-api ReconciliationSucceeded Reconciliation finished in 3.820978193s, next run in 10m0s
security kustomize-controller crds-external-secrets ReconciliationSucceeded Reconciliation finished in 5.384155929s, next run in 10m0s
flux-system kustomize-controller security Progressing ClusterIssuer/letsencrypt-prod configured
flux-system kustomize-controller security ReconciliationSucceeded Reconciliation finished in 3.415854919s, next run in 4m0s
security kustomize-controller crds-kyverno ReconciliationSucceeded Reconciliation finished in 6.982474732s, next run in 10m0s
default kyverno-admission disallow-host-ports PolicyViolation Deployment cilium-test/echo-same-node: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test replicaset-controller client2-646b88fb9b SuccessfulCreate Created pod: client2-646b88fb9b-xsb7z
cilium-test default-scheduler client-6b4b857d98-b2mts Scheduled Successfully assigned cilium-test/client-6b4b857d98-b2mts to ip-10-0-2-105.eu-west-3.compute.internal
default kyverno-admission disallow-capabilities PolicyViolation Deployment cilium-test/echo-same-node: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-admission echo-same-node PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-admission echo-same-node-775456cfcf-bqk4q PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
cilium-test kyverno-admission echo-same-node PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test kyverno-admission client-6b4b857d98-b2mts PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test deployment-controller echo-same-node ScalingReplicaSet Scaled up replica set echo-same-node-775456cfcf to 1
default kyverno-admission disallow-capabilities PolicyViolation Deployment cilium-test/client: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-admission disallow-capabilities PolicyViolation Pod cilium-test/client-6b4b857d98-b2mts: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-admission disallow-capabilities PolicyViolation Pod cilium-test/echo-same-node-775456cfcf-bqk4q: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test replicaset-controller client-6b4b857d98 SuccessfulCreate Created pod: client-6b4b857d98-b2mts
cilium-test kyverno-admission echo-same-node-775456cfcf-bqk4q PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test default-scheduler echo-same-node-775456cfcf-bqk4q FailedScheduling 0/2 nodes are available: 2 node(s) didn't match pod affinity rules. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
default kyverno-admission disallow-capabilities PolicyViolation Deployment cilium-test/client2: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test deployment-controller echo-other-node ScalingReplicaSet Scaled up replica set echo-other-node-8b4df78df to 1
default kyverno-admission disallow-host-ports PolicyViolation Pod cilium-test/echo-same-node-775456cfcf-bqk4q: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default kyverno-admission disallow-capabilities PolicyViolation Pod cilium-test/client2-646b88fb9b-xsb7z: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test deployment-controller client ScalingReplicaSet Scaled up replica set client-6b4b857d98 to 1
cilium-test default-scheduler echo-other-node-8b4df78df-bf5sk Scheduled Successfully assigned cilium-test/echo-other-node-8b4df78df-bf5sk to ip-10-0-3-24.eu-west-3.compute.internal
cilium-test replicaset-controller echo-other-node-8b4df78df SuccessfulCreate Created pod: echo-other-node-8b4df78df-bf5sk
cilium-test kyverno-admission client2 PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test deployment-controller client2 ScalingReplicaSet Scaled up replica set client2-646b88fb9b to 1
cilium-test replicaset-controller echo-same-node-775456cfcf SuccessfulCreate Created pod: echo-same-node-775456cfcf-bqk4q
cilium-test kyverno-admission client PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test default-scheduler client2-646b88fb9b-xsb7z Scheduled Successfully assigned cilium-test/client2-646b88fb9b-xsb7z to ip-10-0-2-105.eu-west-3.compute.internal
cilium-test kyverno-admission client2-646b88fb9b-xsb7z PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-admission echo-other-node-8b4df78df-bf5sk PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-admission echo-external-node PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test replicaset-controller echo-external-node-545d98c9b4 SuccessfulCreate Created pod: echo-external-node-545d98c9b4-br427
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal client-6b4b857d98-b2mts Pulling Pulling image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751"
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal host-netns-8wmb8 Pulling Pulling image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751"
cilium-test kyverno-admission host-netns-6fhsl PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
cilium-test kyverno-admission host-netns-6fhsl PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal host-netns-6fhsl Pulling Pulling image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751"
cilium-test kyverno-admission echo-external-node PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default kyverno-admission disallow-host-namespaces PolicyViolation DaemonSet cilium-test/host-netns: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test daemonset-controller host-netns SuccessfulCreate Created pod: host-netns-6fhsl
cilium-test kyverno-admission host-netns PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test kyverno-admission echo-external-node-545d98c9b4-br427 PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
cilium-test kyverno-admission echo-external-node-545d98c9b4-br427 PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-admission echo-other-node-8b4df78df-bf5sk PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
cilium-test kyverno-admission host-netns PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-admission echo-external-node-545d98c9b4-br427 PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default kyverno-admission disallow-capabilities PolicyViolation DaemonSet cilium-test/host-netns-non-cilium: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-admission host-netns-non-cilium PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test kyverno-admission host-netns-non-cilium PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-admission disallow-capabilities PolicyViolation Deployment cilium-test/echo-other-node: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test default-scheduler host-netns-6fhsl Scheduled Successfully assigned cilium-test/host-netns-6fhsl to ip-10-0-2-105.eu-west-3.compute.internal
default kyverno-admission disallow-capabilities PolicyViolation DaemonSet cilium-test/host-netns: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test default-scheduler host-netns-8wmb8 Scheduled Successfully assigned cilium-test/host-netns-8wmb8 to ip-10-0-3-24.eu-west-3.compute.internal
cilium-test kyverno-admission host-netns-8wmb8 PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default kyverno-admission disallow-host-ports PolicyViolation Deployment cilium-test/echo-other-node: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default kyverno-admission disallow-host-ports PolicyViolation Pod cilium-test/echo-other-node-8b4df78df-bf5sk: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
cilium-test kyverno-admission echo-external-node PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default kyverno-admission disallow-host-ports PolicyViolation Deployment cilium-test/echo-external-node: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test daemonset-controller host-netns SuccessfulCreate Created pod: host-netns-8wmb8
cilium-test kyverno-admission host-netns-8wmb8 PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-admission echo-other-node PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-admission echo-other-node PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test deployment-controller echo-external-node ScalingReplicaSet Scaled up replica set echo-external-node-545d98c9b4 to 1
default kyverno-admission disallow-host-namespaces PolicyViolation Deployment cilium-test/echo-external-node: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default kyverno-admission disallow-host-namespaces PolicyViolation Pod cilium-test/host-netns-6fhsl: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default kyverno-admission disallow-host-namespaces PolicyViolation Pod cilium-test/host-netns-8wmb8: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default kyverno-admission disallow-host-namespaces PolicyViolation DaemonSet cilium-test/host-netns-non-cilium: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-other-node-8b4df78df-bf5sk Pulling Pulling image "quay.io/cilium/json-mock:v1.3.5@sha256:d5dfd0044540cbe01ad6a1932cfb1913587f93cac4f145471ca04777f26342a4"
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal client2-646b88fb9b-xsb7z Pulling Pulling image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751"
cilium-test default-scheduler echo-same-node-775456cfcf-bqk4q Scheduled Successfully assigned cilium-test/echo-same-node-775456cfcf-bqk4q to ip-10-0-2-105.eu-west-3.compute.internal
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-same-node-775456cfcf-bqk4q Pulling Pulling image "quay.io/cilium/json-mock:v1.3.5@sha256:d5dfd0044540cbe01ad6a1932cfb1913587f93cac4f145471ca04777f26342a4"
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal host-netns-6fhsl Pulled Successfully pulled image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751" in 1.819723699s (1.819741929s including waiting)
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal host-netns-6fhsl Started Started container host-netns
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal client-6b4b857d98-b2mts Pulled Successfully pulled image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751" in 1.850844644s (1.850852541s including waiting)
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal host-netns-8wmb8 Created Created container host-netns
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal host-netns-8wmb8 Pulled Successfully pulled image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751" in 1.535634685s (1.535645346s including waiting)
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal host-netns-8wmb8 Started Started container host-netns
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal client-6b4b857d98-b2mts Started Started container client
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal host-netns-6fhsl Created Created container host-netns
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal client-6b4b857d98-b2mts Created Created container client
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal client2-646b88fb9b-xsb7z Pulled Successfully pulled image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751" in 1.455473117s (1.455484689s including waiting)
flux-system kustomize-controller apps ReconciliationSucceeded Reconciliation finished in 759.52659ms, next run in 4m0s
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal client2-646b88fb9b-xsb7z Started Started container client2
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal client2-646b88fb9b-xsb7z Created Created container client2
(x4) default kyverno-admission disallow-capabilities PolicyViolation (combined from similar events): Deployment cilium-test/echo-external-node: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-other-node-8b4df78df-bf5sk Created Created container echo-other-node
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-other-node-8b4df78df-bf5sk Pulled Successfully pulled image "quay.io/cilium/json-mock:v1.3.5@sha256:d5dfd0044540cbe01ad6a1932cfb1913587f93cac4f145471ca04777f26342a4" in 5.793163766s (5.793176877s including waiting)
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-other-node-8b4df78df-bf5sk Started Started container echo-other-node
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-other-node-8b4df78df-bf5sk Pulling Pulling image "docker.io/coredns/coredns:1.11.1@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-same-node-775456cfcf-bqk4q Created Created container echo-same-node
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-same-node-775456cfcf-bqk4q Pulled Successfully pulled image "quay.io/cilium/json-mock:v1.3.5@sha256:d5dfd0044540cbe01ad6a1932cfb1913587f93cac4f145471ca04777f26342a4" in 6.425719286s (6.425734139s including waiting)
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-same-node-775456cfcf-bqk4q Started Started container echo-same-node
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-same-node-775456cfcf-bqk4q Pulling Pulling image "docker.io/coredns/coredns:1.11.1@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-other-node-8b4df78df-bf5sk Pulled Successfully pulled image "docker.io/coredns/coredns:1.11.1@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1" in 2.509680641s (2.509691453s including waiting)
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-other-node-8b4df78df-bf5sk Created Created container dns-test-server
cilium-test kubelet ip-10-0-3-24.eu-west-3.compute.internal echo-other-node-8b4df78df-bf5sk Started Started container dns-test-server
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-same-node-775456cfcf-bqk4q Pulled Successfully pulled image "docker.io/coredns/coredns:1.11.1@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1" in 2.278273475s (2.278292404s including waiting)
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-same-node-775456cfcf-bqk4q Created Created container dns-test-server
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-same-node-775456cfcf-bqk4q Started Started container dns-test-server
flux-system kustomize-controller crossplane-providers ReconciliationSucceeded Reconciliation finished in 268.292463ms, next run in 2m0s
cilium-test kyverno-scan client PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-same-node-775456cfcf PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test kyverno-scan echo-same-node PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-same-node-775456cfcf-bqk4q PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default kyverno-scan disallow-capabilities PolicyViolation ReplicaSet cilium-test/echo-same-node-775456cfcf: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-scan disallow-capabilities PolicyViolation Deployment cilium-test/echo-same-node: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default kyverno-scan disallow-capabilities PolicyViolation Deployment cilium-test/client: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-same-node-775456cfcf PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-same-node PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test kyverno-scan echo-other-node PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test kyverno-scan client-6b4b857d98 PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan client2-646b88fb9b-xsb7z PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan client-6b4b857d98-b2mts PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-same-node-775456cfcf-bqk4q PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan client2-646b88fb9b PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-other-node-8b4df78df PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test kyverno-scan client2 PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-other-node PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan host-netns-6fhsl PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan host-netns PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-external-node PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan host-netns-non-cilium PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test kyverno-scan echo-external-node PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test kyverno-scan echo-external-node-545d98c9b4 PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan host-netns PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test kyverno-scan host-netns-6fhsl PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
cilium-test kyverno-scan echo-external-node PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test kyverno-scan host-netns-non-cilium PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan host-netns-8wmb8 PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-other-node-8b4df78df PolicyViolation policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan host-netns-8wmb8 PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
cilium-test kyverno-scan echo-other-node-8b4df78df-bf5sk PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
cilium-test kyverno-scan echo-external-node-545d98c9b4 PolicyViolation policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test kyverno-scan echo-other-node-8b4df78df-bf5sk PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-external-node-545d98c9b4-br427 PolicyViolation policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
cilium-test kyverno-scan echo-external-node-545d98c9b4 PolicyViolation policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test kyverno-scan echo-external-node-545d98c9b4-br427 PolicyViolation policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test kyverno-scan echo-external-node-545d98c9b4-br427 PolicyViolation policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
(x16) default kyverno-scan disallow-capabilities PolicyViolation (combined from similar events): Pod cilium-test/echo-external-node-545d98c9b4-br427: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
(x14) default kyverno-scan disallow-host-namespaces PolicyViolation (combined from similar events): Pod cilium-test/echo-external-node-545d98c9b4-br427: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
(x9) default kyverno-scan disallow-host-ports PolicyViolation (combined from similar events): Pod cilium-test/echo-external-node-545d98c9b4-br427: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
(x2) kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Created Created container cilium-agent
(x2) kube-system kubelet ip-10-0-2-105.eu-west-3.compute.internal cilium-g94mr Pulled Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
flux-system kustomize-controller crossplane-configuration ReconciliationSucceeded Reconciliation finished in 249.766089ms, next run in 2m0s
flux-system kustomize-controller namespaces ReconciliationSucceeded Reconciliation finished in 221.235539ms, next run in 4m0s
flux-system kustomize-controller observability ReconciliationSucceeded Reconciliation finished in 301.032977ms, next run in 3m0s
flux-system kustomize-controller crds ReconciliationSucceeded Reconciliation finished in 1.428396109s, next run in 4m0s
flux-system kustomize-controller crossplane-controller ReconciliationSucceeded Reconciliation finished in 232.227181ms, next run in 4m0s
flux-system kustomize-controller flux-config ReconciliationSucceeded Reconciliation finished in 268.25912ms, next run in 4m0s
flux-system kustomize-controller crossplane-providers ReconciliationSucceeded Reconciliation finished in 160.499019ms, next run in 2m0s
flux-system kustomize-controller crossplane-configuration ReconciliationSucceeded Reconciliation finished in 213.997982ms, next run in 2m0s
flux-system kustomize-controller infrastructure ReconciliationSucceeded Reconciliation finished in 311.989865ms, next run in 4m0s
flux-system kustomize-controller security ReconciliationSucceeded Reconciliation finished in 2.266441722s, next run in 4m0s
flux-system kustomize-controller observability ReconciliationSucceeded Reconciliation finished in 311.149143ms, next run in 3m0s
flux-system kustomize-controller apps ReconciliationSucceeded Reconciliation finished in 355.582079ms, next run in 4m0s
flux-system kustomize-controller crossplane-providers ReconciliationSucceeded Reconciliation finished in 179.476402ms, next run in 2m0s
flux-system kustomize-controller crossplane-configuration ReconciliationSucceeded Reconciliation finished in 207.187229ms, next run in 2m0s
flux-system kustomize-controller namespaces ReconciliationSucceeded Reconciliation finished in 147.523049ms, next run in 4m0s
security default-scheduler kyverno-cleanup-cluster-admission-reports-28238890-rz7rx Scheduled Successfully assigned security/kyverno-cleanup-cluster-admission-reports-28238890-rz7rx to ip-10-0-3-24.eu-west-3.compute.internal
security default-scheduler kyverno-cleanup-admission-reports-28238890-mz95z Scheduled Successfully assigned security/kyverno-cleanup-admission-reports-28238890-mz95z to ip-10-0-3-24.eu-west-3.compute.internal
security job-controller kyverno-cleanup-admission-reports-28238890 SuccessfulCreate Created pod: kyverno-cleanup-admission-reports-28238890-mz95z
security cronjob-controller kyverno-cleanup-admission-reports SuccessfulCreate Created job kyverno-cleanup-admission-reports-28238890
security job-controller kyverno-cleanup-cluster-admission-reports-28238890 SuccessfulCreate Created pod: kyverno-cleanup-cluster-admission-reports-28238890-rz7rx
security cronjob-controller kyverno-cleanup-cluster-admission-reports SuccessfulCreate Created job kyverno-cleanup-cluster-admission-reports-28238890
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-admission-reports-28238890-mz95z Created Created container cleanup
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-cluster-admission-reports-28238890-rz7rx Started Started container cleanup
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-admission-reports-28238890-mz95z Pulled Container image "bitnami/kubectl:1.26.4" already present on machine
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-cluster-admission-reports-28238890-rz7rx Created Created container cleanup
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-cluster-admission-reports-28238890-rz7rx Pulled Container image "bitnami/kubectl:1.26.4" already present on machine
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-admission-reports-28238890-mz95z Started Started container cleanup
security job-controller kyverno-cleanup-cluster-admission-reports-28238890 Completed Job completed
security job-controller kyverno-cleanup-admission-reports-28238890 Completed Job completed
(x2) security cronjob-controller kyverno-cleanup-admission-reports SawCompletedJob Saw completed job: kyverno-cleanup-admission-reports-28238890, status: Complete
security cronjob-controller kyverno-cleanup-admission-reports SuccessfulDelete Deleted job kyverno-cleanup-admission-reports-28238880
(x2) security cronjob-controller kyverno-cleanup-cluster-admission-reports SawCompletedJob Saw completed job: kyverno-cleanup-cluster-admission-reports-28238890, status: Complete
security cronjob-controller kyverno-cleanup-cluster-admission-reports SuccessfulDelete Deleted job kyverno-cleanup-cluster-admission-reports-28238880
flux-system kustomize-controller crds ReconciliationSucceeded Reconciliation finished in 1.351032653s, next run in 4m0s
flux-system kustomize-controller crossplane-controller ReconciliationSucceeded Reconciliation finished in 377.407158ms, next run in 4m0s
flux-system kustomize-controller flux-config ReconciliationSucceeded Reconciliation finished in 359.734091ms, next run in 4m0s
flux-system kustomize-controller crossplane-providers ReconciliationSucceeded Reconciliation finished in 233.310207ms, next run in 2m0s
flux-system kustomize-controller crossplane-configuration ReconciliationSucceeded Reconciliation finished in 280.803745ms, next run in 2m0s
flux-system kustomize-controller observability ReconciliationSucceeded Reconciliation finished in 248.369161ms, next run in 3m0s
flux-system kustomize-controller infrastructure ReconciliationSucceeded Reconciliation finished in 261.338771ms, next run in 4m0s
flux-system kustomize-controller security ReconciliationSucceeded Reconciliation finished in 1.818967671s, next run in 4m0s
flux-system kustomize-controller apps ReconciliationSucceeded Reconciliation finished in 420.328651ms, next run in 4m0s
(x18) default kyverno-admission disallow-proc-mount PolicyApplied (combined from similar events): Deployment flux-system/helm-controller: pass
(x12) default kyverno-admission disallow-host-namespaces PolicyApplied (combined from similar events): Deployment flux-system/helm-controller: pass
(x18) default kyverno-admission restrict-seccomp PolicyApplied (combined from similar events): Deployment flux-system/helm-controller: pass
flux-system kustomize-controller flux-system ReconciliationSucceeded Reconciliation finished in 1.792849164s, next run in 10m0s
(x18) default kyverno-admission disallow-host-process PolicyApplied (combined from similar events): Deployment flux-system/helm-controller: pass
flux-system kustomize-controller namespaces ReconciliationSucceeded Reconciliation finished in 409.721272ms, next run in 4m0s
kube-system kustomize-controller crds-gateway-api ReconciliationSucceeded Reconciliation finished in 2.112818078s, next run in 10m0s
(x3) flux-system source-controller observability-crds-prometheus-operator ArtifactUpToDate artifact up-to-date with remote revision: '5.1.0'
security kustomize-controller crds-external-secrets ReconciliationSucceeded Reconciliation finished in 2.893830422s, next run in 10m0s
(x2) security kustomize-controller crds-external-secrets Progressing CustomResourceDefinition/clustersecretstores.external-secrets.io configured CustomResourceDefinition/externalsecrets.external-secrets.io configured CustomResourceDefinition/secretstores.external-secrets.io configured
flux-system kustomize-controller crds ReconciliationSucceeded Reconciliation finished in 1.9698213s, next run in 4m0s
(x3) crossplane-system source-controller crossplane-system-crossplane ArtifactUpToDate artifact up-to-date with remote revision: '1.13.2'
security kustomize-controller crds-kyverno ReconciliationSucceeded Reconciliation finished in 4.229999949s, next run in 10m0s
infrastructure service-controller cilium-gateway-platform DeletingLoadBalancer Deleting load balancer
flux-system kustomize-controller observability ReconciliationSucceeded Reconciliation finished in 240.743758ms, next run in 3m0s
(x2) flux-system source-controller observability-kube-prometheus-stack ArtifactUpToDate artifact up-to-date with remote revision: '50.3.1'
flux-system kustomize-controller crossplane-controller ReconciliationSucceeded Reconciliation finished in 222.967085ms, next run in 4m0s
infrastructure service-controller cilium-gateway-platform DeletedLoadBalancer Deleted load balancer
flux-system kustomize-controller flux-config ReconciliationSucceeded Reconciliation finished in 289.102733ms, next run in 4m0s
(x3) kube-system source-controller kube-system-external-dns ArtifactUpToDate artifact up-to-date with remote revision: '1.13.0'
infrastructure cert-manager-gateway-shim platform CreateCertificate Successfully created Certificate "platform-tls"
flux-system kustomize-controller infrastructure ReconciliationSucceeded Reconciliation finished in 316.126549ms, next run in 4m0s
flux-system kustomize-controller infrastructure Progressing Gateway/infrastructure/platform created
(x28) default kyverno-admission mutate-cilium-platform-gateway PolicyApplied Service infrastructure/cilium-gateway-platform is successfully mutated
(x2) infrastructure targetGroupBinding k8s-infrastr-ciliumga-604d05f720 SuccessfullyReconciled Successfully reconciled
(x3) infrastructure service cilium-gateway-platform SuccessfullyReconciled Successfully reconciled
(x3) kube-system source-controller kube-system-aws-load-balancer-controller ArtifactUpToDate artifact up-to-date with remote revision: '1.6.0'
flux-system kustomize-controller infrastructure ReconciliationSucceeded Reconciliation finished in 301.65964ms, next run in 4m0s
flux-system kustomize-controller security ReconciliationSucceeded Reconciliation finished in 2.095321122s, next run in 4m0s
flux-system kustomize-controller apps ReconciliationSucceeded Reconciliation finished in 410.860138ms, next run in 4m0s
flux-system kustomize-controller observability ReconciliationSucceeded Reconciliation finished in 296.083377ms, next run in 3m0s
flux-system kustomize-controller namespaces ReconciliationSucceeded Reconciliation finished in 150.45685ms, next run in 4m0s
flux-system kustomize-controller crds ReconciliationSucceeded Reconciliation finished in 1.364322017s, next run in 4m0s
(x3) flux-system source-controller security-external-secrets ArtifactUpToDate artifact up-to-date with remote revision: '0.9.4'
(x3) flux-system source-controller security-kyverno ArtifactUpToDate artifact up-to-date with remote revision: '3.0.5'
flux-system kustomize-controller crossplane-controller ReconciliationSucceeded Reconciliation finished in 233.334792ms, next run in 4m0s
(x3) flux-system source-controller security-cert-manager ArtifactUpToDate artifact up-to-date with remote revision: 'v1.12.4'
flux-system kustomize-controller flux-config ReconciliationSucceeded Reconciliation finished in 249.604516ms, next run in 4m0s
(x29) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-cert-manager-mycluster-0-5m9n4 SelectComposition Successfully selected composition
(x29) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-secrets-mycluster-0-d6vfd SelectComposition Successfully selected composition
(x3) flux-system source-controller security-kyverno-policies ArtifactUpToDate artifact up-to-date with remote revision: '3.0.3'
(x5) flux-system source-controller prometheus-community ArtifactUpToDate artifact up-to-date with remote revision: 'sha256:fc301cce1f4fa2f1d482b6d9063823b9b509d602ee335108d2393025fdb5b6fc'
(x5) kube-system source-controller gateway-api GitOperationSucceeded no changes since last reconciliation: observed revision 'v0.8.0@sha1:3d22aa5a08413222cb79e6b2e245870360434614'
(x5) security source-controller kyverno GitOperationSucceeded no changes since last reconciliation: observed revision 'v1.10.3@sha1:8137b4b8afd7ab1464a42e717dc83f1cc471a4a1'
security default-scheduler kyverno-cleanup-admission-reports-28238900-q76br Scheduled Successfully assigned security/kyverno-cleanup-admission-reports-28238900-q76br to ip-10-0-3-24.eu-west-3.compute.internal
security cronjob-controller kyverno-cleanup-admission-reports SuccessfulCreate Created job kyverno-cleanup-admission-reports-28238900
security default-scheduler kyverno-cleanup-cluster-admission-reports-28238900-tf6vw Scheduled Successfully assigned security/kyverno-cleanup-cluster-admission-reports-28238900-tf6vw to ip-10-0-3-24.eu-west-3.compute.internal
security job-controller kyverno-cleanup-cluster-admission-reports-28238900 SuccessfulCreate Created pod: kyverno-cleanup-cluster-admission-reports-28238900-tf6vw
security job-controller kyverno-cleanup-admission-reports-28238900 SuccessfulCreate Created pod: kyverno-cleanup-admission-reports-28238900-q76br
(x4) cilium-test karpenter echo-external-node-545d98c9b4-br427 FailedScheduling Failed to schedule pod, incompatible with provisioner "default", daemonset overhead={"cpu":"600m","memory":"556Mi","pods":"5"}, incompatible requirements, label "cilium.io/no-schedule" does not have known values
security cronjob-controller kyverno-cleanup-cluster-admission-reports SuccessfulCreate Created job kyverno-cleanup-cluster-admission-reports-28238900
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-cluster-admission-reports-28238900-tf6vw Pulled Container image "bitnami/kubectl:1.26.4" already present on machine
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-cluster-admission-reports-28238900-tf6vw Created Created container cleanup
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-admission-reports-28238900-q76br Pulled Container image "bitnami/kubectl:1.26.4" already present on machine
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-admission-reports-28238900-q76br Created Created container cleanup
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-admission-reports-28238900-q76br Started Started container cleanup
security kubelet ip-10-0-3-24.eu-west-3.compute.internal kyverno-cleanup-cluster-admission-reports-28238900-tf6vw Started Started container cleanup
security cronjob-controller kyverno-cleanup-admission-reports SuccessfulDelete Deleted job kyverno-cleanup-admission-reports-28238890
security cronjob-controller kyverno-cleanup-admission-reports SawCompletedJob Saw completed job: kyverno-cleanup-admission-reports-28238900, status: Complete
security job-controller kyverno-cleanup-admission-reports-28238900 Completed Job completed
security cronjob-controller kyverno-cleanup-cluster-admission-reports SuccessfulDelete Deleted job kyverno-cleanup-cluster-admission-reports-28238890
security cronjob-controller kyverno-cleanup-cluster-admission-reports SawCompletedJob Saw completed job: kyverno-cleanup-cluster-admission-reports-28238900, status: Complete
security job-controller kyverno-cleanup-cluster-admission-reports-28238900 Completed Job completed
flux-system kustomize-controller infrastructure ReconciliationSucceeded Reconciliation finished in 325.035365ms, next run in 4m0s
(x4) cilium-test default-scheduler echo-external-node-545d98c9b4-br427 FailedScheduling 0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
flux-system kustomize-controller security ReconciliationSucceeded Reconciliation finished in 3.387293705s, next run in 4m0s
(x3) flux-system source-controller echo-echo-1 ArtifactUpToDate artifact up-to-date with remote revision: '0.5.0'
flux-system kustomize-controller observability ReconciliationSucceeded Reconciliation finished in 248.360298ms, next run in 3m0s
(x3) flux-system source-controller echo-echo-2 ArtifactUpToDate artifact up-to-date with remote revision: '0.5.0'
flux-system kustomize-controller apps ReconciliationSucceeded Reconciliation finished in 451.446244ms, next run in 4m0s
cilium-test kubelet ip-10-0-2-105.eu-west-3.compute.internal echo-same-node-775456cfcf-bqk4q Unhealthy Readiness probe failed: Get "http://10.0.5.111:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
(x34) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-external-dns-mycluster-0-8sfwb SelectComposition Successfully selected composition
(x34) default defined/compositeresourcedefinition.apiextensions.crossplane.io xplane-loadbalancer-controller-mycluster-0-6xwkv SelectComposition Successfully selected composition
flux-system kustomize-controller namespaces ReconciliationSucceeded Reconciliation finished in 139.675322ms, next run in 4m0s
flux-system kustomize-controller crds ReconciliationSucceeded Reconciliation finished in 1.526316745s, next run in 4m0s
flux-system kustomize-controller crossplane-controller ReconciliationSucceeded Reconciliation finished in 257.506849ms, next run in 4m0s
crossplane-system source-controller crossplane ArtifactUpToDate artifact up-to-date with remote revision: 'sha256:85c414ec6ef11de298344f250d1da1f7c02a6f0ccc1e5731942238575c1fb2a1'
flux-system kustomize-controller flux-config ReconciliationSucceeded Reconciliation finished in 292.335224ms, next run in 4m0s
(x6) flux-system kustomize-controller crossplane-providers ReconciliationSucceeded (combined from similar events): Reconciliation finished in 147.39441ms, next run in 2m0s
(x2) default kyverno-admission disallow-privileged-containers PolicyApplied Deployment flux-system/helm-controller: pass
(x21) default kyverno-admission restrict-apparmor-profiles PolicyApplied (combined from similar events): Deployment flux-system/helm-controller: pass
default kyverno-admission restrict-sysctls PolicyApplied Deployment flux-system/kustomize-controller: pass
(x21) default kyverno-admission disallow-selinux PolicyApplied (combined from similar events): Deployment flux-system/helm-controller: pass
(x21) default kyverno-admission disallow-host-path PolicyApplied (combined from similar events): Deployment flux-system/helm-controller: pass
default kyverno-admission disallow-proc-mount PolicyApplied Deployment flux-system/kustomize-controller: pass
(x3) default kyverno-admission disallow-capabilities PolicyApplied Deployment flux-system/helm-controller: pass
(x2) default kyverno-admission disallow-host-namespaces PolicyApplied Deployment flux-system/helm-controller: pass
(x21) default kyverno-admission disallow-privileged-containers PolicyApplied (combined from similar events): Deployment flux-system/kustomize-controller: pass
default kyverno-admission disallow-host-path PolicyApplied Deployment flux-system/kustomize-controller: pass
default kyverno-admission disallow-selinux PolicyApplied Deployment flux-system/kustomize-controller: pass
(x21) default kyverno-admission restrict-sysctls PolicyApplied (combined from similar events): Deployment flux-system/helm-controller: pass
(x16) default kyverno-admission disallow-host-ports PolicyApplied (combined from similar events): Deployment flux-system/kustomize-controller: pass
(x2) default kyverno-admission disallow-host-process PolicyApplied Deployment flux-system/helm-controller: pass
default kyverno-admission restrict-seccomp PolicyApplied Deployment flux-system/kustomize-controller: pass
default kyverno-admission disallow-host-namespaces PolicyApplied Deployment flux-system/kustomize-controller: pass
(x2) default kyverno-admission restrict-seccomp PolicyApplied Deployment flux-system/helm-controller: pass
(x2) default kyverno-admission disallow-proc-mount PolicyApplied Deployment flux-system/helm-controller: pass
flux-system kustomize-controller flux-system ReconciliationSucceeded Reconciliation finished in 1.417455967s, next run in 10m0s
default kyverno-admission disallow-host-process PolicyApplied Deployment flux-system/kustomize-controller: pass
(x2) default kyverno-admission disallow-capabilities PolicyApplied Deployment flux-system/kustomize-controller: pass
(x2) default kyverno-admission restrict-apparmor-profiles PolicyApplied Deployment flux-system/kustomize-controller: pass
(x6) security source-controller external-secrets GitOperationSucceeded no changes since last reconciliation: observed revision 'v0.9.4@sha1:77a70d08fa1bdd698e3ebdfe98a13d74cfd76477'
flux-system kustomize-controller observability ReconciliationSucceeded Reconciliation finished in 275.181006ms, next run in 3m0s
(x6) flux-system kustomize-controller crossplane-configuration ReconciliationSucceeded (combined from similar events): Reconciliation finished in 221.669784ms, next run in 2m0s
(x12) default cluster-secret-store clustersecretstore Valid store validated
(x30) flux-system source-controller flux-system GitOperationSucceeded no changes since last reconciliation: observed revision 'main@sha1:d2f93919fc00f84397ab71b118b40cbcafb840c2'