COUNT | NAMESPACE | SOURCE | OBJECT | REASON | MESSAGE
- | - | - | - | - | ip-172-16-52-153.eu-west-3.compute.internal_9db06657-e36b-4e45-a2aa-6a6f2bb7ef86 became leader
(x2) | default | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | ip-10-0-3-24.eu-west-3.compute.internal | NodeHasNoDiskPressure | Node ip-10-0-3-24.eu-west-3.compute.internal status is now: NodeHasNoDiskPressure
(x2) | default | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | ip-10-0-3-24.eu-west-3.compute.internal | NodeHasSufficientPID | Node ip-10-0-3-24.eu-west-3.compute.internal status is now: NodeHasSufficientPID
(x2) | default | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | ip-10-0-3-24.eu-west-3.compute.internal | NodeHasSufficientMemory | Node ip-10-0-3-24.eu-west-3.compute.internal status is now: NodeHasSufficientMemory
- | default | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | ip-10-0-3-24.eu-west-3.compute.internal | Starting | Starting kubelet.
- | default | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | ip-10-0-3-24.eu-west-3.compute.internal | InvalidDiskCapacity | invalid capacity 0 on image filesystem
- | kube-system | daemonset-controller | aws-node | SuccessfulCreate | Created pod: aws-node-dmbng
- | kube-system | daemonset-controller | kube-proxy | SuccessfulCreate | Created pod: kube-proxy-m6tc2
- | kube-system | default-scheduler | coredns-577fccf48c-5vcbw | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
- | default | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | ip-10-0-3-24.eu-west-3.compute.internal | NodeAllocatableEnforced | Updated Node Allocatable limit across pods
- | kube-system | default-scheduler | aws-node-dmbng | Scheduled | Successfully assigned kube-system/aws-node-dmbng to ip-10-0-3-24.eu-west-3.compute.internal
- | default | cloud-node-controller | ip-10-0-3-24.eu-west-3.compute.internal | Synced | Node synced successfully
- | kube-system | default-scheduler | kube-proxy-m6tc2 | Scheduled | Successfully assigned kube-system/kube-proxy-m6tc2 to ip-10-0-3-24.eu-west-3.compute.internal
- | kube-system | default-scheduler | coredns-577fccf48c-sz5dr | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.1-minimal-eksbuild.1" in 1.877010228s (1.877027173s including waiting)
kube-system
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
aws-node-dmbng
Pulled
Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.12.6-eksbuild.2" in 1.611158184s (1.611167325s including waiting)
kube-system
default-scheduler
aws-node-748vd
Scheduled
Successfully assigned kube-system/aws-node-748vd to ip-10-0-2-105.eu-west-3.compute.internal
(x2)
default
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
ip-10-0-2-105.eu-west-3.compute.internal
NodeHasSufficientPID
Node ip-10-0-2-105.eu-west-3.compute.internal status is now: NodeHasSufficientPID
default
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
ip-10-0-2-105.eu-west-3.compute.internal
NodeAllocatableEnforced
Updated Node Allocatable limit across pods
default
cloud-node-controller
ip-10-0-2-105.eu-west-3.compute.internal
Synced
Node synced successfully
kube-system
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
kube-proxy-m6tc2
Started
Started container kube-proxy
default
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
ip-10-0-2-105.eu-west-3.compute.internal
Starting
Starting kubelet.
kube-system
default-scheduler
kube-proxy-cqn46
Scheduled
Successfully assigned kube-system/kube-proxy-cqn46 to ip-10-0-2-105.eu-west-3.compute.internal
kube-system
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
kube-proxy-m6tc2
Created
Created container kube-proxy
kube-system
daemonset-controller
aws-node
SuccessfulCreate
Created pod: aws-node-748vd
kube-system
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
aws-node-dmbng
Started
Started container aws-vpc-cni-init
default
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
ip-10-0-2-105.eu-west-3.compute.internal
InvalidDiskCapacity
invalid capacity 0 on image filesystem
(x2)
default
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
ip-10-0-2-105.eu-west-3.compute.internal
NodeHasNoDiskPressure
Node ip-10-0-2-105.eu-west-3.compute.internal status is now: NodeHasNoDiskPressure
(x2)
default
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
ip-10-0-2-105.eu-west-3.compute.internal
NodeHasSufficientMemory
Node ip-10-0-2-105.eu-west-3.compute.internal status is now: NodeHasSufficientMemory
- | default | node-controller | ip-10-0-3-24.eu-west-3.compute.internal | RegisteredNode | Node ip-10-0-3-24.eu-west-3.compute.internal event: Registered Node ip-10-0-3-24.eu-west-3.compute.internal in Controller
- | default | node-controller | ip-10-0-2-105.eu-west-3.compute.internal | RegisteredNode | Node ip-10-0-2-105.eu-west-3.compute.internal event: Registered Node ip-10-0-2-105.eu-west-3.compute.internal in Controller
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | aws-node-748vd | Pulled | Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.12.6-eksbuild.2" in 1.456271413s (1.456280886s including waiting)
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | kube-proxy-cqn46 | Pulled | Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.1-minimal-eksbuild.1" in 1.972902658s (1.972914972s including waiting)
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | kube-proxy-cqn46 | Created | Created container kube-proxy
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | kube-proxy-cqn46 | Started | Started container kube-proxy
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-dmbng | Created | Created container aws-node
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-dmbng | Pulled | Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.12.6-eksbuild.2" in 1.18428644s (1.184298778s including waiting)
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-dmbng | Started | Started container aws-node
- | default | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | ip-10-0-3-24.eu-west-3.compute.internal | NodeReady | Node ip-10-0-3-24.eu-west-3.compute.internal status is now: NodeReady
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | aws-node-748vd | Pulled | Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.12.6-eksbuild.2" in 1.1497599s (1.149773225s including waiting)
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | aws-node-748vd | Created | Created container aws-node
- | default | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | ip-10-0-2-105.eu-west-3.compute.internal | NodeReady | Node ip-10-0-2-105.eu-west-3.compute.internal status is now: NodeReady
(x2) | kube-system | default-scheduler | coredns-577fccf48c-sz5dr | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
(x2) | kube-system | default-scheduler | coredns-577fccf48c-5vcbw | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
- | kube-system | default-scheduler | aws-node-2t9s7 | Scheduled | Successfully assigned kube-system/aws-node-2t9s7 to ip-10-0-2-105.eu-west-3.compute.internal
Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.4-minimal-eksbuild.2" in 1.250404067s (1.250415801s including waiting)
kube-system
replicaset-controller
coredns-7f9bc84c58
SuccessfulCreate
Created pod: coredns-7f9bc84c58-ws8z4
kube-system
replicaset-controller
coredns-7f9bc84c58
SuccessfulCreate
Created pod: coredns-7f9bc84c58-x7qpw
kube-system
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
aws-node-2t9s7
Started
Started container aws-vpc-cni-init
kube-system
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
aws-node-2t9s7
Pulled
Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.14.0-eksbuild.3" in 2.101402038s (2.101414937s including waiting)
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ecf9051af393adf4d42bb2b2eec893e276359348c4a5493e9a3f02a6174a0e28": plugin type="aws-cni" name="aws-cni" failed (add): add cmd: Error received from AddNetwork gRPC call: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"
kube-system
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
coredns-7f9bc84c58-x7qpw
Pulled
Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/coredns:v1.10.1-eksbuild.3" in 1.207214364s (1.20722959s including waiting)
kube-system
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
coredns-7f9bc84c58-x7qpw
Started
Started container coredns
kube-system
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
aws-node-2t9s7
Created
Created container aws-node
kube-system
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
coredns-7f9bc84c58-x7qpw
Unhealthy
Readiness probe failed: HTTP probe failed with statuscode: 503
Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.14.0-eksbuild.3" in 1.589409324s (1.589423902s including waiting)
kube-system
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-proxy-cqn46
Killing
Stopping container kube-proxy
kube-system
default-scheduler
kube-proxy-vjrhj
Scheduled
Successfully assigned kube-system/kube-proxy-vjrhj to ip-10-0-2-105.eu-west-3.compute.internal
Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/kube-proxy:v1.27.4-minimal-eksbuild.2" in 930.816518ms (930.829167ms including waiting)
kube-system
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-proxy-vjrhj
Started
Started container kube-proxy
kube-system
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
aws-node-2t9s7
Created
Created container aws-eks-nodeagent
kube-system
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
aws-node-2t9s7
Started
Started container aws-eks-nodeagent
kube-system
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
aws-node-2t9s7
Pulled
Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon/aws-network-policy-agent:v1.0.1-eksbuild.1" in 8.409430321s (8.409446158s including waiting)
Pod sandbox changed, it will be killed and re-created.
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-b5nb9 | Created | Created container aws-vpc-cni-init
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-b5nb9 | Pulled | Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.14.0-eksbuild.3" in 1.64509555s (1.645109095s including waiting)
- | kube-system | kubelet | - | Pulled | Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/coredns:v1.10.1-eksbuild.3" in 1.198078691s (1.198088708s including waiting)
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-b5nb9 | Pulled | Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.14.0-eksbuild.3" in 1.39722713s (1.3972433s including waiting)
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-b5nb9 | Pulled | Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon/aws-network-policy-agent:v1.0.1-eksbuild.1" in 8.337267096s (8.337279309s including waiting)
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-b5nb9 | Created | Created container aws-eks-nodeagent
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-b5nb9 | Started | Started container aws-eks-nodeagent
- | kube-system | default-scheduler | delete-aws-cni-9cbjk | Scheduled | Successfully assigned kube-system/delete-aws-cni-9cbjk to ip-10-0-3-24.eu-west-3.compute.internal
- | kube-system | job-controller | delete-aws-cni | SuccessfulCreate | Created pod: delete-aws-cni-9cbjk
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | delete-aws-cni-9cbjk | Pulling | Pulling image "bitnami/kubectl:1.27.3"
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | delete-aws-cni-9cbjk | Created | Created container kubectl
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-b5nb9 | Killing | Stopping container aws-node
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | delete-aws-cni-9cbjk | Started | Started container kubectl
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | delete-aws-cni-9cbjk | Pulled | Successfully pulled image "bitnami/kubectl:1.27.3" in 3.976764158s (3.976778148s including waiting)
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | aws-node-2t9s7 | Killing | Stopping container aws-node
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | aws-node-b5nb9 | Killing | Stopping container aws-eks-nodeagent
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | aws-node-2t9s7 | Killing | Stopping container aws-eks-nodeagent
- | kube-system | job-controller | delete-aws-cni | Completed | Job completed
- | kube-system | daemonset-controller | cilium-envoy | SuccessfulCreate | Created pod: cilium-envoy-2fzwf
- | kube-system | default-scheduler | cilium-g94mr | Scheduled | Successfully assigned kube-system/cilium-g94mr to ip-10-0-2-105.eu-west-3.compute.internal
- | kube-system | default-scheduler | cilium-zpjnm | Scheduled | Successfully assigned kube-system/cilium-zpjnm to ip-10-0-3-24.eu-west-3.compute.internal
- | kube-system | daemonset-controller | cilium-envoy | SuccessfulCreate | Created pod: cilium-envoy-pzhcc
- | kube-system | daemonset-controller | cilium | SuccessfulCreate | Created pod: cilium-g94mr
- | kube-system | default-scheduler | cilium-operator-779bf49976-qznq9 | Scheduled | Successfully assigned kube-system/cilium-operator-779bf49976-qznq9 to ip-10-0-2-105.eu-west-3.compute.internal
- | kube-system | default-scheduler | cilium-envoy-2fzwf | Scheduled | Successfully assigned kube-system/cilium-envoy-2fzwf to ip-10-0-3-24.eu-west-3.compute.internal
- | kube-system | default-scheduler | cilium-operator-779bf49976-lgq5h | Scheduled | Successfully assigned kube-system/cilium-operator-779bf49976-lgq5h to ip-10-0-3-24.eu-west-3.compute.internal
- | kube-system | deployment-controller | cilium-operator | ScalingReplicaSet | Scaled up replica set cilium-operator-779bf49976 to 2
- | kube-system | replicaset-controller | cilium-operator-779bf49976 | SuccessfulCreate | Created pod: cilium-operator-779bf49976-lgq5h
- | kube-system | default-scheduler | cilium-envoy-pzhcc | Scheduled | Successfully assigned kube-system/cilium-envoy-pzhcc to ip-10-0-2-105.eu-west-3.compute.internal
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-envoy-pzhcc | Pulled | Successfully pulled image "quay.io/cilium/cilium-envoy:v1.25.9-f039e2bd380b7eef2f2feea5750676bb36133699@sha256:023d09eeb8a44ae99b489f4af7ffed8b8b54f19a532e0bc6ab4c1e4b31acaab1" in 3.249080977s (3.249088622s including waiting)
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-envoy-pzhcc | Started | Started container cilium-envoy
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-envoy-pzhcc | Created | Created container cilium-envoy
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-envoy-2fzwf | Started | Started container cilium-envoy
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-envoy-2fzwf | Created | Created container cilium-envoy
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-envoy-2fzwf | Pulled | Successfully pulled image "quay.io/cilium/cilium-envoy:v1.25.9-f039e2bd380b7eef2f2feea5750676bb36133699@sha256:023d09eeb8a44ae99b489f4af7ffed8b8b54f19a532e0bc6ab4c1e4b31acaab1" in 3.470194533s (3.470208939s including waiting)
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-operator-779bf49976-lgq5h | Started | Started container cilium-operator
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-operator-779bf49976-lgq5h | Pulled | Successfully pulled image "quay.io/cilium/operator-aws:v1.14.1@sha256:ff57964aefd903456745e53a4697a4f6a026d8fffdb06f53f624a23d23ade37a" in 3.108024267s (3.108053375s including waiting)
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-operator-779bf49976-lgq5h | Created | Created container cilium-operator
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Pulled | Successfully pulled image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" in 5.524408054s (5.524423547s including waiting)
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Created | Created container config
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Started | Started container config
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Pulled | Successfully pulled image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" in 5.074430028s (5.074445136s including waiting)
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-operator-779bf49976-qznq9 | Pulled | Successfully pulled image "quay.io/cilium/operator-aws:v1.14.1@sha256:ff57964aefd903456745e53a4697a4f6a026d8fffdb06f53f624a23d23ade37a" in 4.120856851s (4.120872155s including waiting)
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Started | Started container config
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Created | Created container config
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-operator-779bf49976-qznq9 | Started | Started container cilium-operator
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Created | Created container mount-cgroup
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Started | Started container mount-cgroup
- | kube-system | replicaset-controller | coredns-7f9bc84c58 | SuccessfulCreate | Created pod: coredns-7f9bc84c58-nldsh
- | kube-system | default-scheduler | coredns-7f9bc84c58-nldsh | Scheduled | Successfully assigned kube-system/coredns-7f9bc84c58-nldsh to ip-10-0-2-105.eu-west-3.compute.internal
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | coredns-7f9bc84c58-ws8z4 | Killing | Stopping container coredns
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Created | Created container apply-sysctl-overwrites
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | coredns-7f9bc84c58-nldsh | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6c278ed382ab1c38c6a7bd0c48c07d23b2ee76dbf6590fa946231bea30b2beae": plugin type="aws-cni" name="aws-cni" failed (add): add cmd: Error received from AddNetwork gRPC call: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Started | Started container apply-sysctl-overwrites
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Created | Created container mount-cgroup
(x2) | karpenter | controllermanager | karpenter | NoPods | No matching pods found
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Started | Started container mount-cgroup
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Created | Created container apply-sysctl-overwrites
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | karpenter | deployment-controller | karpenter | ScalingReplicaSet | Scaled up replica set karpenter-85dcd86d7c to 2
- | karpenter | replicaset-controller | karpenter-85dcd86d7c | SuccessfulCreate | Created pod: karpenter-85dcd86d7c-5dd2d
- | karpenter | replicaset-controller | karpenter-85dcd86d7c | SuccessfulCreate | Created pod: karpenter-85dcd86d7c-59jrk
- | karpenter | default-scheduler | karpenter-85dcd86d7c-5dd2d | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
- | karpenter | default-scheduler | karpenter-85dcd86d7c-59jrk | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Created | Created container mount-bpf-fs
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Started | Started container mount-bpf-fs
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Started | Started container apply-sysctl-overwrites
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Created | Created container clean-cilium-state
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Created | Created container mount-bpf-fs
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Started | Started container clean-cilium-state
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Started | Started container mount-bpf-fs
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Created | Created container clean-cilium-state
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Created | Created container install-cni-binaries
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Started | Started container install-cni-binaries
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Started | Started container clean-cilium-state
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Started | Started container install-cni-binaries
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Created | Created container install-cni-binaries
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | cilium-g94mr | Started | Started container cilium-agent
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Started | Started container cilium-agent
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Created | Created container cilium-agent
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | cilium-zpjnm | Pulled | Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
- | flux-system | deployment-controller | notification-controller | ScalingReplicaSet | Scaled up replica set notification-controller-ddf44665d to 1
- | flux-system | replicaset-controller | notification-controller-ddf44665d | SuccessfulCreate | Created pod: notification-controller-ddf44665d-wdlvf
- | flux-system | replicaset-controller | source-controller-56ccbf8db8 | SuccessfulCreate | Created pod: source-controller-56ccbf8db8-vvzzv
- | flux-system | replicaset-controller | helm-controller-57d8957947 | SuccessfulCreate | Created pod: helm-controller-57d8957947-6b2cj
- | flux-system | default-scheduler | kustomize-controller-858996fc8d-n969h | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
- | flux-system | default-scheduler | source-controller-56ccbf8db8-vvzzv | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
- | flux-system | default-scheduler | helm-controller-57d8957947-6b2cj | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
- | flux-system | deployment-controller | source-controller | ScalingReplicaSet | Scaled up replica set source-controller-56ccbf8db8 to 1
- | flux-system | deployment-controller | helm-controller | ScalingReplicaSet | Scaled up replica set helm-controller-57d8957947 to 1
- | flux-system | deployment-controller | kustomize-controller | ScalingReplicaSet | Scaled up replica set kustomize-controller-858996fc8d to 1
- | flux-system | replicaset-controller | kustomize-controller-858996fc8d | SuccessfulCreate | Created pod: kustomize-controller-858996fc8d-n969h
- | flux-system | default-scheduler | notification-controller-ddf44665d-wdlvf | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
- | - | kubelet | - | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d0ac448054ccc36e6b3fa6d926d64d9001eec430545d9caf91fbd5c4511bf63f": plugin type="cilium-cni" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http://localhost/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory. Is the agent running?
- | kube-system | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | coredns-7f9bc84c58-nldsh | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "bb9512d64e5ad607f25ed6d614e2de5856168e0e32d68422e54c93c41d83c37a": plugin type="cilium-cni" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http://localhost/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory. Is the agent running?
- | flux-system | default-scheduler | notification-controller-ddf44665d-wdlvf | Scheduled | Successfully assigned flux-system/notification-controller-ddf44665d-wdlvf to ip-10-0-2-105.eu-west-3.compute.internal
- | flux-system | default-scheduler | helm-controller-57d8957947-6b2cj | Scheduled | Successfully assigned flux-system/helm-controller-57d8957947-6b2cj to ip-10-0-2-105.eu-west-3.compute.internal
- | karpenter | default-scheduler | karpenter-85dcd86d7c-59jrk | Scheduled | Successfully assigned karpenter/karpenter-85dcd86d7c-59jrk to ip-10-0-2-105.eu-west-3.compute.internal
- | karpenter | default-scheduler | karpenter-85dcd86d7c-5dd2d | FailedScheduling | 0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
- | flux-system | default-scheduler | source-controller-56ccbf8db8-vvzzv | Scheduled | Successfully assigned flux-system/source-controller-56ccbf8db8-vvzzv to ip-10-0-2-105.eu-west-3.compute.internal
- | flux-system | default-scheduler | kustomize-controller-858996fc8d-n969h | Scheduled | Successfully assigned flux-system/kustomize-controller-858996fc8d-n969h to ip-10-0-2-105.eu-west-3.compute.internal
- | karpenter | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | karpenter-85dcd86d7c-59jrk | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-s6md7" : failed to sync configmap cache: timed out waiting for the condition
- | karpenter | default-scheduler | karpenter-85dcd86d7c-5dd2d | Scheduled | Successfully assigned karpenter/karpenter-85dcd86d7c-5dd2d to ip-10-0-3-24.eu-west-3.compute.internal
- | - | - | - | - | source-controller-56ccbf8db8-vvzzv_483e3c38-5f71-4908-b17d-16dd36f399e7 became leader
- | karpenter | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | karpenter-85dcd86d7c-5dd2d | Pulled | Successfully pulled image "public.ecr.aws/karpenter/controller:v0.30.0@sha256:3d436ece23d17263edbaa2314281f3ac1c2b0d3fb9dfa531cb30509659d8a7c3" in 2.591949413s (2.59196209s including waiting)
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | coredns-7f9bc84c58-8vw55 | Created | Created container coredns
- | kube-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | coredns-7f9bc84c58-8vw55 | Pulled | Container image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/coredns:v1.10.1-eksbuild.3" already present on machine
- | karpenter | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | karpenter-85dcd86d7c-59jrk | Created | Created container controller
- | karpenter | kubelet, ip-10-0-2-105.eu-west-3.compute.internal | karpenter-85dcd86d7c-59jrk | Pulled | Successfully pulled image "public.ecr.aws/karpenter/controller:v0.30.0@sha256:3d436ece23d17263edbaa2314281f3ac1c2b0d3fb9dfa531cb30509659d8a7c3" in 3.42892512s (3.428935987s including waiting)
- | - | - | - | - | karpenter-85dcd86d7c-5dd2d_a2d0ad6c-3c58-4417-bba4-983f716b96b5 became leader
- | flux-system | kustomize-controller | flux-system | ReconciliationSucceeded | Reconciliation finished in 2.457260074s, next run in 10m0s
- | flux-system | kustomize-controller | flux-system | Progressing | CustomResourceDefinition/alerts.notification.toolkit.fluxcd.io configured; CustomResourceDefinition/buckets.source.toolkit.fluxcd.io configured; CustomResourceDefinition/gitrepositories.source.toolkit.fluxcd.io configured; CustomResourceDefinition/helmcharts.source.toolkit.fluxcd.io configured; CustomResourceDefinition/helmreleases.helm.toolkit.fluxcd.io configured; CustomResourceDefinition/helmrepositories.source.toolkit.fluxcd.io configured; CustomResourceDefinition/kustomizations.kustomize.toolkit.fluxcd.io configured; CustomResourceDefinition/ocirepositories.source.toolkit.fluxcd.io configured; CustomResourceDefinition/providers.notification.toolkit.fluxcd.io configured; CustomResourceDefinition/receivers.notification.toolkit.fluxcd.io configured; Namespace/flux-system configured; ResourceQuota/flux-system/critical-pods-flux-system configured; ServiceAccount/flux-system/helm-controller configured; ServiceAccount/flux-system/kustomize-controller configured; ServiceAccount/flux-system/notification-controller configured; ServiceAccount/flux-system/source-controller configured; ClusterRole/crd-controller-flux-system configured; ClusterRole/flux-edit-flux-system configured; ClusterRole/flux-view-flux-system configured; ClusterRoleBinding/cluster-reconciler-flux-system configured; ClusterRoleBinding/crd-controller-flux-system configured; Service/flux-system/notification-controller configured; Service/flux-system/source-controller configured; Service/flux-system/webhook-receiver configured; Deployment/flux-system/helm-controller configured; Deployment/flux-system/kustomize-controller configured; Deployment/flux-system/notification-controller configured; Deployment/flux-system/source-controller configured; Kustomization/flux-system/apps created; Kustomization/flux-system/crds created; Kustomization/flux-system/crossplane-configuration created; Kustomization/flux-system/crossplane-controller created; Kustomization/flux-system/crossplane-providers created; Kustomization/flux-system/flux-config created; Kustomization/flux-system/flux-system configured; Kustomization/flux-system/infrastructure created; Kustomization/flux-system/namespaces created; Kustomization/flux-system/observability created; Kustomization/flux-system/security created; NetworkPolicy/flux-system/allow-egress configured; NetworkPolicy/flux-system/allow-scraping configured; NetworkPolicy/flux-system/allow-webhooks configured; GitRepository/flux-system/flux-system configured
- | flux-system | kustomize-controller | crds | DependencyNotReady | Dependencies do not meet ready condition, retrying in 30s
- | flux-system | kustomize-controller | namespaces | ReconciliationSucceeded | Reconciliation finished in 237.884135ms, next run in 4m0s
- | flux-system | kustomize-controller | namespaces | Progressing | Namespace/crossplane-system created; Namespace/echo created; Namespace/infrastructure created; Namespace/observability created; Namespace/security created
(x2) | flux-system | kustomize-controller | observability | DependencyNotReady | Dependencies do not meet ready condition, retrying in 30s
(x2) | flux-system | kustomize-controller | security | DependencyNotReady | Dependencies do not meet ready condition, retrying in 30s
(x2) | flux-system | kustomize-controller | crossplane-controller | DependencyNotReady | Dependencies do not meet ready condition, retrying in 30s
(x2) | flux-system | kustomize-controller | flux-config | DependencyNotReady | Dependencies do not meet ready condition, retrying in 30s
- | flux-system | kustomize-controller | crds | Progressing | CustomResourceDefinition/certificaterequests.cert-manager.io created; CustomResourceDefinition/certificates.cert-manager.io created; CustomResourceDefinition/challenges.acme.cert-manager.io created; CustomResourceDefinition/clusterissuers.cert-manager.io created; CustomResourceDefinition/issuers.cert-manager.io created; CustomResourceDefinition/orders.acme.cert-manager.io created; HelmRelease/observability/crds-prometheus-operator created; Kustomization/kube-system/crds-gateway-api created; Kustomization/security/crds-external-secrets created; Kustomization/security/crds-kyverno created; GitRepository/kube-system/gateway-api created; GitRepository/security/external-secrets created; GitRepository/security/kyverno created; HelmRepository/flux-system/prometheus-community created
- | observability | helm-controller | crds-prometheus-operator | info | HelmChart 'flux-system/observability-crds-prometheus-operator' is not ready
- | flux-system | kustomize-controller | crds | ReconciliationSucceeded | Reconciliation finished in 3.395809673s, next run in 4m0s
- | flux-system | source-controller | observability-crds-prometheus-operator | NoSourceArtifact | no artifact available for HelmRepository source 'prometheus-community'
- | flux-system | source-controller | prometheus-community | NewArtifact | stored fetched index of size 3.905MB from 'https://prometheus-community.github.io/helm-charts'
- | flux-system | source-controller | observability-crds-prometheus-operator | ChartPullSucceeded | pulled 'prometheus-operator-crds' chart with version '5.1.0'
- | observability | helm-controller | crds-prometheus-operator | info | Helm install has started
- | kube-system | source-controller | gateway-api | NewArtifact | stored artifact for commit 'Merge pull request #2360 from robscott/changelog-v...'
- | security | source-controller | external-secrets | NewArtifact | stored artifact for commit 'fixing label limits (#2645)'
- | security | source-controller | kyverno | NewArtifact | stored artifact for commit 'release 1.10.3 (#8006)'
- | - | - | - | - | Reconciliation finished in 3.832695598s, next run in 10m0s
- | security | kustomize-controller | crds-external-secrets | ReconciliationSucceeded | Reconciliation finished in 5.449787256s, next run in 10m0s
- | security | kustomize-controller | crds-external-secrets | Progressing | CustomResourceDefinition/acraccesstokens.generators.external-secrets.io created; CustomResourceDefinition/clusterexternalsecrets.external-secrets.io created; CustomResourceDefinition/clustersecretstores.external-secrets.io created; CustomResourceDefinition/ecrauthorizationtokens.generators.external-secrets.io created; CustomResourceDefinition/externalsecrets.external-secrets.io created; CustomResourceDefinition/fakes.generators.external-secrets.io created; CustomResourceDefinition/gcraccesstokens.generators.external-secrets.io created; CustomResourceDefinition/passwords.generators.external-secrets.io created; CustomResourceDefinition/pushsecrets.external-secrets.io created; CustomResourceDefinition/secretstores.external-secrets.io created; CustomResourceDefinition/vaultdynamicsecrets.generators.external-secrets.io created
- | observability | helm-controller | crds-prometheus-operator | info | Helm install succeeded
- | security | kustomize-controller | crds-kyverno | Progressing | CustomResourceDefinition/admissionreports.kyverno.io created; CustomResourceDefinition/backgroundscanreports.kyverno.io created; CustomResourceDefinition/cleanuppolicies.kyverno.io created; CustomResourceDefinition/clusteradmissionreports.kyverno.io created; CustomResourceDefinition/clusterbackgroundscanreports.kyverno.io created; CustomResourceDefinition/clustercleanuppolicies.kyverno.io created; CustomResourceDefinition/clusterpolicies.kyverno.io created; CustomResourceDefinition/clusterpolicyreports.wgpolicyk8s.io created; CustomResourceDefinition/policies.kyverno.io created; CustomResourceDefinition/policyexceptions.kyverno.io created; CustomResourceDefinition/policyreports.wgpolicyk8s.io created; CustomResourceDefinition/updaterequests.kyverno.io created
- | security | kustomize-controller | crds-kyverno | ReconciliationSucceeded | Reconciliation finished in 11.998017275s, next run in 10m0s
- | flux-system | helm-controller | weave-gitops | info | HelmChart 'flux-system/flux-system-weave-gitops' is not ready
- | crossplane-system | helm-controller | crossplane | info | HelmChart 'crossplane-system/crossplane-system-crossplane' is not ready
(x3) | flux-system | kustomize-controller | crossplane-providers | DependencyNotReady | Dependencies do not meet ready condition, retrying in 30s
- | flux-system | kustomize-controller | flux-config | Progressing | HTTPRoute/flux-system/weave-gitops created; HelmRelease/flux-system/weave-gitops created; PodMonitor/flux-system/flux-system created; HelmRepository/flux-system/ww-gitops created
- | flux-system | kustomize-controller | crossplane-controller | Progressing | HelmRelease/crossplane-system/crossplane created; HelmRepository/crossplane-system/crossplane created
- | flux-system | kustomize-controller | observability | Progressing | ExternalSecret/observability/kube-prometheus-stack-grafana-admin created; HTTPRoute/observability/grafana created; HelmRelease/observability/kube-prometheus-stack created
- | flux-system | kustomize-controller | security | ReconciliationFailed | IRSA/security/xplane-cert-manager-mycluster-0 dry-run failed: failed to get API group resources: unable to retrieve the complete list of server APIs: aws.platformref.upbound.io/v1alpha1: the server could not find the requested resource
- | observability | helm-controller | kube-prometheus-stack | info | HelmChart 'flux-system/observability-kube-prometheus-stack' is not ready
- | flux-system | source-controller | ww-gitops | Succeeded | Helm repository is ready
- | crossplane-system | source-controller | crossplane-system-crossplane | NoSourceArtifact | no artifact available for HelmRepository source 'crossplane'
- | flux-system | helm-controller | weave-gitops | info | Helm install has started
- | flux-system | source-controller | flux-system-weave-gitops | ChartPullSucceeded | pulled 'weave-gitops' chart with version '4.0.29'
- | flux-system | source-controller | observability-kube-prometheus-stack | ChartPullSucceeded | pulled 'kube-prometheus-stack' chart with version '50.3.1'
- | crossplane-system | source-controller | crossplane-system-crossplane | ChartPullSucceeded | pulled 'crossplane' chart with version '1.13.2'
- | crossplane-system | helm-controller | crossplane | info | Helm install has started
- | observability | helm-controller | kube-prometheus-stack | info | Helm install has started
- | crossplane-system | source-controller | crossplane | NewArtifact | stored fetched index of size 85.81kB from 'https://charts.crossplane.io/stable'
- | flux-system | default-scheduler | weave-gitops-66f9ddc754-b4n2g | Scheduled | Successfully assigned flux-system/weave-gitops-66f9ddc754-b4n2g to ip-10-0-3-24.eu-west-3.compute.internal
- | flux-system | deployment-controller | weave-gitops | ScalingReplicaSet | Scaled up replica set weave-gitops-66f9ddc754 to 1
Successfully pulled image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6" in 1.280504128s (1.280512245s including waiting)
Successfully assigned observability/kube-prometheus-stack-prometheus-node-exporter-km8ht to ip-10-0-3-24.eu-west-3.compute.internal
observability
endpoint-controller
kube-prometheus-stack-prometheus-node-exporter
FailedToUpdateEndpoint
Failed to update endpoint observability/kube-prometheus-stack-prometheus-node-exporter: Operation cannot be fulfilled on endpoints "kube-prometheus-stack-prometheus-node-exporter": the object has been modified; please apply your changes to the latest version and try again
observability
deployment-controller
kube-prometheus-stack-operator
ScalingReplicaSet
Scaled up replica set kube-prometheus-stack-operator-764c84db8b to 1
observability
deployment-controller
kube-prometheus-stack-kube-state-metrics
ScalingReplicaSet
Scaled up replica set kube-prometheus-stack-kube-state-metrics-8667b58b4d to 1
Successfully pulled image "quay.io/kiwigrid/k8s-sidecar:1.24.6" in 2.180642936s (2.180656943s including waiting)
observability
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-prometheus-stack-grafana-685df6bf8f-vczw5
Pulling
Pulling image "docker.io/grafana/grafana:10.1.1"
observability
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
prometheus-kube-prometheus-stack-prometheus-0
Pulled
Successfully pulled image "quay.io/prometheus/prometheus:v2.46.0" in 3.557235913s (3.557249353s including waiting)
observability
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
prometheus-kube-prometheus-stack-prometheus-0
Started
Started container config-reloader
observability
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
prometheus-kube-prometheus-stack-prometheus-0
Created
Created container prometheus
observability
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
prometheus-kube-prometheus-stack-prometheus-0
Started
Started container prometheus
observability
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
prometheus-kube-prometheus-stack-prometheus-0
Pulled
Container image "quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1" already present on machine
observability
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
prometheus-kube-prometheus-stack-prometheus-0
Created
Created container config-reloader
flux-system
kustomize-controller
crossplane-providers
ReconciliationSucceeded
Reconciliation finished in 312.821325ms, next run in 2m0s
(x4)
flux-system
kustomize-controller
crossplane-configuration
DependencyNotReady
Dependencies do not meet ready condition, retrying in 30s
flux-system
kustomize-controller
crossplane-providers
Progressing
ControllerConfig/aws-config created
Provider/provider-aws-iam created
default
packages/provider.pkg.crossplane.io
provider-aws-iam
InstallPackageRevision
cannot apply package revision: cannot patch object: Operation cannot be fulfilled on providerrevisions.pkg.crossplane.io "provider-aws-iam-62ccd0ca21a2": the object has been modified; please apply your changes to the latest version and try again
observability
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-prometheus-stack-grafana-685df6bf8f-vczw5
Pulled
Successfully pulled image "docker.io/grafana/grafana:10.1.1" in 5.043088675s (5.043117576s including waiting)
(x6)
default
packages/provider.pkg.crossplane.io
provider-aws-iam
InstallPackageRevision
current package revision health is unknown
default
packages/providerrevision.pkg.crossplane.io
provider-aws-iam-62ccd0ca21a2
SyncPackage
cannot update package revision object metadata: Operation cannot be fulfilled on providerrevisions.pkg.crossplane.io "provider-aws-iam-62ccd0ca21a2": the object has been modified; please apply your changes to the latest version and try again
Successfully pulled image "xpkg.upbound.io/upbound/provider-family-aws:v0.40.0" in 5.340946431s (5.340961813s including waiting)
crossplane-system
deployment-controller
provider-aws-iam-62ccd0ca21a2
ScalingReplicaSet
Scaled up replica set provider-aws-iam-62ccd0ca21a2-69c4d59d65 to 1
crossplane-system
default-scheduler
provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w
Scheduled
Successfully assigned crossplane-system/provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w to ip-10-0-3-24.eu-west-3.compute.internal
crossplane-system
replicaset-controller
provider-aws-iam-62ccd0ca21a2-69c4d59d65
SuccessfulCreate
Created pod: provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w
(x3)
observability
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-prometheus-stack-grafana-685df6bf8f-vczw5
Pulled
Container image "quay.io/kiwigrid/k8s-sidecar:1.24.6" already present on machine
default
packages/providerrevision.pkg.crossplane.io
provider-aws-iam-62ccd0ca21a2
SyncPackage
cannot establish control of object: Operation cannot be fulfilled on customresourcedefinitions.apiextensions.k8s.io "servercertificates.iam.aws.upbound.io": the object has been modified; please apply your changes to the latest version and try again
(x2)
observability
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-prometheus-stack-grafana-685df6bf8f-vczw5
Pulled
Container image "docker.io/grafana/grafana:10.1.1" already present on machine
(x3)
observability
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-prometheus-stack-grafana-685df6bf8f-vczw5
Failed
Error: secret "kube-prometheus-stack-grafana-admin" not found
(x8)
default
rbac/providerrevision.pkg.crossplane.io
upbound-provider-family-aws-710d8cfe9f53
BindClusterRole
Bound system ClusterRole to provider ServiceAccount(s)
(x2)
observability
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-prometheus-stack-grafana-685df6bf8f-vczw5
Pulled
Container image "quay.io/kiwigrid/k8s-sidecar:1.24.6" already present on machine
(x4)
default
packages/provider.pkg.crossplane.io
upbound-provider-family-aws
InstallPackageRevision
Successfully installed package revision
(x3)
observability
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-prometheus-stack-grafana-685df6bf8f-vczw5
Failed
Error: secret "kube-prometheus-stack-grafana-admin" not found
(x3)
observability
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-prometheus-stack-grafana-685df6bf8f-vczw5
Failed
Error: secret "kube-prometheus-stack-grafana-admin" not found
cannot apply ClusterRole: cannot update object: Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "crossplane:provider:provider-aws-iam-62ccd0ca21a2:system": the object has been modified; please apply your changes to the latest version and try again
(x10) | default | rbac/providerrevision.pkg.crossplane.io | provider-aws-iam-62ccd0ca21a2 | BindClusterRole | Bound system ClusterRole to provider ServiceAccount(s)
(x3) | default | packages/provider.pkg.crossplane.io | provider-aws-iam | InstallPackageRevision | current package revision is unhealthy
(x14) | default | rbac/providerrevision.pkg.crossplane.io | provider-aws-iam-62ccd0ca21a2 | ApplyClusterRoles | Applied RBAC ClusterRoles
(x3) | default | packages/providerrevision.pkg.crossplane.io | provider-aws-iam-62ccd0ca21a2 | SyncPackage | cannot run post establish hook for package: provider package deployment is unavailable: Deployment does not have minimum availability.
- | crossplane-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w | Created | Created container provider-aws-iam
- | crossplane-system | kubelet, ip-10-0-3-24.eu-west-3.compute.internal | provider-aws-iam-62ccd0ca21a2-69c4d59d65-fd77w | Pulled | Successfully pulled image "xpkg.upbound.io/upbound/provider-aws-iam:v0.38.0" in 5.180092319s (5.180102377s including waiting)
- | - | - | - | - | cannot apply rendered composite resource claim CustomResourceDefinition: cannot update object: Operation cannot be fulfilled on customresourcedefinitions.apiextensions.k8s.io "irsas.aws.platformref.upbound.io": the object has been modified; please apply your changes to the latest version and try again
- | flux-system | kustomize-controller | crossplane-configuration | ReconciliationSucceeded | Reconciliation finished in 383.815356ms, next run in 2m0s
- | - | - | - | - | cannot apply rendered composite resource CustomResourceDefinition: cannot update object: Operation cannot be fulfilled on customresourcedefinitions.apiextensions.k8s.io "xirsas.aws.platformref.upbound.io": the object has been modified; please apply your changes to the latest version and try again
- | - | - | - | - | cannot add composite resource claim finalizer: cannot update object: Operation cannot be fulfilled on compositeresourcedefinitions.apiextensions.crossplane.io "xirsas.aws.platformref.upbound.io": the object has been modified; please apply your changes to the latest version and try again
- | flux-system | kustomize-controller | crossplane-configuration | Progressing | CompositeResourceDefinition/xirsas.aws.platformref.upbound.io created; Composition/xirsas.aws.platformref.upbound.io created; EnvironmentConfig/irsa-environment created; ProviderConfig/default created
- | - | - | - | - | Default composition update policy has been selected
- | flux-system | kustomize-controller | infrastructure | Progressing | IRSA/kube-system/xplane-external-dns-mycluster-0 created; IRSA/kube-system/xplane-loadbalancer-controller-mycluster-0 created; Gateway/infrastructure/platform created; HelmRelease/kube-system/aws-load-balancer-controller created; HelmRelease/kube-system/external-dns created; HelmRepository/kube-system/eks created; HelmRepository/kube-system/external-dns created
- | - | - | - | - | cannot apply composite resource: cannot patch object: Operation cannot be fulfilled on xirsas.aws.platformref.upbound.io "xplane-external-dns-mycluster-0-8sfwb": the object has been modified; please apply your changes to the latest version and try again
- | flux-system | kustomize-controller | infrastructure | ReconciliationSucceeded | Reconciliation finished in 531.382717ms, next run in 4m0s
- | kube-system | source-controller | kube-system-external-dns | NoSourceArtifact | no artifact available for HelmRepository source 'external-dns'
- | kube-system | source-controller | kube-system-aws-load-balancer-controller | ChartPullSucceeded | pulled 'aws-load-balancer-controller' chart with version '1.6.0'
- | kube-system | source-controller | kube-system-aws-load-balancer-controller | NoSourceArtifact | no artifact available for HelmRepository source 'eks'
- | - | - | - | - | cannot apply composite resource: cannot patch object: Operation cannot be fulfilled on xirsas.aws.platformref.upbound.io "xplane-loadbalancer-controller-mycluster-0-6xwkv": the object has been modified; please apply your changes to the latest version and try again
- | kube-system | source-controller | external-dns | NewArtifact | stored fetched index of size 28.89kB from 'https://kubernetes-sigs.github.io/external-dns/'
- | - | - | - | - | Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-loadbalancer-controller-mycluster-0-6xwkv-pln6p": the object has been modified; please apply your changes to the latest version and try again
- | - | - | - | - | Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-loadbalancer-controller-mycluster-0-6xwkv-pln6p": the object has been modified; please apply your changes to the latest version and try again
- | default | managed/iam.aws.upbound.io/v1beta1, kind=policy | xplane-external-dns-mycluster-0-8sfwb-cctxq | CannotUpdateManagedResource | Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-external-dns-mycluster-0-8sfwb-cctxq": the object has been modified; please apply your changes to the latest version and try again
- | - | - | - | - | Operation cannot be fulfilled on roles.iam.aws.upbound.io "xplane-loadbalancer-controller-mycluster-0-6xwkv-cqslf": the object has been modified; please apply your changes to the latest version and try again
- | - | - | - | - | Operation cannot be fulfilled on roles.iam.aws.upbound.io "xplane-external-dns-mycluster-0-8sfwb-f5hml": the object has been modified; please apply your changes to the latest version and try again
- | - | - | - | - | Successfully pulled image "public.ecr.aws/eks/aws-load-balancer-controller:v2.6.0" in 2.172576149s (2.172590581s including waiting)
- | kube-system | helm-controller | aws-load-balancer-controller | info | Helm install succeeded
- | default | managed/iam.aws.upbound.io/v1beta1, kind=policy | xplane-external-dns-mycluster-0-8sfwb-cctxq | CannotInitializeManagedResource | Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-external-dns-mycluster-0-8sfwb-cctxq": the object has been modified; please apply your changes to the latest version and try again
- | - | - | - | - | Operation cannot be fulfilled on rolepolicyattachments.iam.aws.upbound.io "xplane-external-dns-mycluster-0-8sfwb-9mcg7": the object has been modified; please apply your changes to the latest version and try again
- | - | - | - | - | Operation cannot be fulfilled on rolepolicyattachments.iam.aws.upbound.io "xplane-loadbalancer-controller-mycluster-0-6xwkv-nhfkn": the object has been modified; please apply your changes to the latest version and try again
- | kube-system | helm-controller | external-dns | info | Helm install succeeded
- | flux-system | kustomize-controller | crossplane-providers | ReconciliationSucceeded | Reconciliation finished in 172.272501ms, next run in 2m0s
- | flux-system | kustomize-controller | namespaces | ReconciliationSucceeded | Reconciliation finished in 157.521998ms, next run in 4m0s
- | flux-system | kustomize-controller | crossplane-configuration | ReconciliationSucceeded | Reconciliation finished in 378.389904ms, next run in 2m0s
Reconciliation finished in 1.498833707s, next run in 4m0s
security
helm-controller
kyverno
info
HelmChart 'flux-system/security-kyverno' is not ready
security
helm-controller
external-secrets
info
HelmChart 'flux-system/security-external-secrets' is not ready
flux-system
kustomize-controller
security
Progressing
IRSA/security/xplane-cert-manager-mycluster-0 created
IRSA/security/xplane-external-secrets-mycluster-0 created
ClusterIssuer/letsencrypt-prod created
ClusterSecretStore/clustersecretstore created
HelmRelease/security/cert-manager created
HelmRelease/security/external-secrets created
HelmRelease/security/kyverno created
HelmRelease/security/kyverno-policies created
ClusterPolicy/mutate-cilium-echo-gateway created
ClusterPolicy/mutate-cilium-echo-tls-gateway created
ClusterPolicy/mutate-cilium-platform-gateway created
HelmRepository/flux-system/external-secrets created
HelmRepository/flux-system/jetstack created
HelmRepository/flux-system/kyverno created
cannot apply composite resource: cannot patch object: Operation cannot be fulfilled on xirsas.aws.platformref.upbound.io "xplane-cert-manager-mycluster-0-5m9n4": the object has been modified; please apply your changes to the latest version and try again
cannot apply composite resource: cannot patch object: Operation cannot be fulfilled on xirsas.aws.platformref.upbound.io "xplane-external-secrets-mycluster-0-d6vfd": the object has been modified; please apply your changes to the latest version and try again
Composed resource "irsa-attachment" is not yet ready
flux-system
source-controller
security-external-secrets
ChartPullSucceeded
pulled 'external-secrets' chart with version '0.9.4'
security
helm-controller
cert-manager
info
Helm install has started
security
helm-controller
kyverno
info
Helm install has started
security
helm-controller
external-secrets
info
Helm install has started
default
managed/iam.aws.upbound.io/v1beta1, kind=policy
xplane-external-secrets-mycluster-0-d6vfd-lkc5w
CannotInitializeManagedResource
Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-external-secrets-mycluster-0-d6vfd-lkc5w": the object has been modified; please apply your changes to the latest version and try again
default
managed/iam.aws.upbound.io/v1beta1, kind=role
xplane-external-secrets-mycluster-0-d6vfd-tttrx
CreatedExternalResource
Successfully requested creation of external resource
default
managed/iam.aws.upbound.io/v1beta1, kind=role
xplane-external-secrets-mycluster-0-d6vfd-tttrx
CannotInitializeManagedResource
Operation cannot be fulfilled on roles.iam.aws.upbound.io "xplane-external-secrets-mycluster-0-d6vfd-tttrx": the object has been modified; please apply your changes to the latest version and try again
default
managed/iam.aws.upbound.io/v1beta1, kind=role
xplane-cert-manager-mycluster-0-5m9n4-glqmd
CreatedExternalResource
Successfully requested creation of external resource
default
managed/iam.aws.upbound.io/v1beta1, kind=policy
xplane-cert-manager-mycluster-0-5m9n4-49nzb
CreatedExternalResource
Successfully requested creation of external resource
default
managed/iam.aws.upbound.io/v1beta1, kind=policy
xplane-external-secrets-mycluster-0-d6vfd-lkc5w
CreatedExternalResource
Successfully requested creation of external resource
default
managed/iam.aws.upbound.io/v1beta1, kind=policy
xplane-cert-manager-mycluster-0-5m9n4-49nzb
CannotInitializeManagedResource
Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-cert-manager-mycluster-0-5m9n4-49nzb": the object has been modified; please apply your changes to the latest version and try again
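The CannotInitializeManagedResource events are the same 409 conflict pattern, this time on the iam.aws.upbound.io managed resources themselves; the CreatedExternalResource events in between show the AWS-side creation still goes through. For reference, a hypothetical sketch of the kind of Role being composed here, with an IRSA trust policy (account ID, OIDC provider and service account are placeholders):

```yaml
apiVersion: iam.aws.upbound.io/v1beta1
kind: Role
metadata:
  name: external-secrets-irsa-example        # hypothetical name
spec:
  forProvider:
    assumeRolePolicy: |
      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Principal": {
            "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>"
          },
          "Condition": {
            "StringEquals": {
              "<OIDC_PROVIDER>:sub": "system:serviceaccount:security:external-secrets"
            }
          }
        }]
      }
```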
security
deployment-controller
external-secrets-cert-controller
ScalingReplicaSet
Scaled up replica set external-secrets-cert-controller-8665fc68 to 1
security
deployment-controller
external-secrets-webhook
ScalingReplicaSet
Scaled up replica set external-secrets-webhook-589765875 to 1
security
deployment-controller
external-secrets
ScalingReplicaSet
Scaled up replica set external-secrets-6b85658cd8 to 1
(x2)
security
controllermanager
external-secrets-pdb
NoPods
No matching pods found
security
replicaset-controller
cert-manager-cainjector-57b9db9cd
SuccessfulCreate
Created pod: cert-manager-cainjector-57b9db9cd-4mpm8
security
default-scheduler
cert-manager-bc8c566cf-xmxb4
Scheduled
Successfully assigned security/cert-manager-bc8c566cf-xmxb4 to ip-10-0-3-24.eu-west-3.compute.internal
security
default-scheduler
external-secrets-6b85658cd8-s4d24
Scheduled
Successfully assigned security/external-secrets-6b85658cd8-s4d24 to ip-10-0-2-105.eu-west-3.compute.internal
security
deployment-controller
cert-manager
ScalingReplicaSet
Scaled up replica set cert-manager-bc8c566cf to 1
security
deployment-controller
cert-manager-webhook
ScalingReplicaSet
Scaled up replica set cert-manager-webhook-7ffdd9664d to 1
security
replicaset-controller
external-secrets-6b85658cd8
SuccessfulCreate
Created pod: external-secrets-6b85658cd8-s4d24
security
default-scheduler
external-secrets-cert-controller-8665fc68-fs2rh
Scheduled
Successfully assigned security/external-secrets-cert-controller-8665fc68-fs2rh to ip-10-0-2-105.eu-west-3.compute.internal
security
deployment-controller
cert-manager-cainjector
ScalingReplicaSet
Scaled up replica set cert-manager-cainjector-57b9db9cd to 1
security
replicaset-controller
cert-manager-webhook-7ffdd9664d
SuccessfulCreate
Created pod: cert-manager-webhook-7ffdd9664d-ng9hz
security
replicaset-controller
external-secrets-webhook-589765875
SuccessfulCreate
Created pod: external-secrets-webhook-589765875-6s69h
security
default-scheduler
external-secrets-webhook-589765875-6s69h
Scheduled
Successfully assigned security/external-secrets-webhook-589765875-6s69h to ip-10-0-3-24.eu-west-3.compute.internal
security
default-scheduler
cert-manager-webhook-7ffdd9664d-ng9hz
Scheduled
Successfully assigned security/cert-manager-webhook-7ffdd9664d-ng9hz to ip-10-0-2-105.eu-west-3.compute.internal
security
replicaset-controller
cert-manager-bc8c566cf
SuccessfulCreate
Created pod: cert-manager-bc8c566cf-xmxb4
security
default-scheduler
cert-manager-cainjector-57b9db9cd-4mpm8
Scheduled
Successfully assigned security/cert-manager-cainjector-57b9db9cd-4mpm8 to ip-10-0-2-105.eu-west-3.compute.internal
security
replicaset-controller
external-secrets-cert-controller-8665fc68
SuccessfulCreate
Created pod: external-secrets-cert-controller-8665fc68-fs2rh
Created pod: kyverno-cleanup-controller-566f7bc8c-88xlj
security
default-scheduler
kyverno-background-controller-67f4b647d7-26gr7
Scheduled
Successfully assigned security/kyverno-background-controller-67f4b647d7-26gr7 to ip-10-0-3-24.eu-west-3.compute.internal
default
managed/iam.aws.upbound.io/v1beta1, kind=role
xplane-cert-manager-mycluster-0-5m9n4-glqmd
CannotUpdateManagedResource
Operation cannot be fulfilled on roles.iam.aws.upbound.io "xplane-cert-manager-mycluster-0-5m9n4-glqmd": the object has been modified; please apply your changes to the latest version and try again
security
replicaset-controller
kyverno-reports-controller-6f96648477
SuccessfulCreate
Created pod: kyverno-reports-controller-6f96648477-jmfwg
security
default-scheduler
kyverno-reports-controller-6f96648477-jmfwg
Scheduled
Successfully assigned security/kyverno-reports-controller-6f96648477-jmfwg to ip-10-0-3-24.eu-west-3.compute.internal
security
replicaset-controller
kyverno-background-controller-67f4b647d7
SuccessfulCreate
Created pod: kyverno-background-controller-67f4b647d7-26gr7
security
deployment-controller
kyverno-background-controller
ScalingReplicaSet
Scaled up replica set kyverno-background-controller-67f4b647d7 to 1
security
deployment-controller
kyverno-cleanup-controller
ScalingReplicaSet
Scaled up replica set kyverno-cleanup-controller-566f7bc8c to 1
security
default-scheduler
kyverno-admission-controller-75748bcb9c-jdsbk
Scheduled
Successfully assigned security/kyverno-admission-controller-75748bcb9c-jdsbk to ip-10-0-2-105.eu-west-3.compute.internal
security
deployment-controller
kyverno-reports-controller
ScalingReplicaSet
Scaled up replica set kyverno-reports-controller-6f96648477 to 1
security
default-scheduler
kyverno-cleanup-controller-566f7bc8c-88xlj
Scheduled
Successfully assigned security/kyverno-cleanup-controller-566f7bc8c-88xlj to ip-10-0-3-24.eu-west-3.compute.internal
security
deployment-controller
kyverno-admission-controller
ScalingReplicaSet
Scaled up replica set kyverno-admission-controller-75748bcb9c to 1
security
replicaset-controller
kyverno-admission-controller-75748bcb9c
SuccessfulCreate
Created pod: kyverno-admission-controller-75748bcb9c-jdsbk
flux-system
kustomize-controller
crossplane-controller
ReconciliationSucceeded
Reconciliation finished in 246.43364ms, next run in 4m0s
default
managed/iam.aws.upbound.io/v1beta1, kind=policy
xplane-external-secrets-mycluster-0-d6vfd-lkc5w
CannotUpdateManagedResource
Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-external-secrets-mycluster-0-d6vfd-lkc5w": the object has been modified; please apply your changes to the latest version and try again
default
managed/iam.aws.upbound.io/v1beta1, kind=role
xplane-external-secrets-mycluster-0-d6vfd-tttrx
CannotUpdateManagedResource
Operation cannot be fulfilled on roles.iam.aws.upbound.io "xplane-external-secrets-mycluster-0-d6vfd-tttrx": the object has been modified; please apply your changes to the latest version and try again
Operation cannot be fulfilled on policies.iam.aws.upbound.io "xplane-cert-manager-mycluster-0-5m9n4-49nzb": the object has been modified; please apply your changes to the latest version and try again
cannot resolve references: mg.Spec.ForProvider.PolicyArn: referenced field was empty (referenced resource may not yet be ready)
flux-system
kustomize-controller
flux-config
ReconciliationSucceeded
Reconciliation finished in 350.052872ms, next run in 4m0s
security
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kyverno-admission-controller-75748bcb9c-jdsbk
Pulled
Successfully pulled image "ghcr.io/kyverno/kyverno:v1.10.3" in 3.036411204s (3.036440784s including waiting)
security
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kyverno-admission-controller-75748bcb9c-jdsbk
Created
Created container kyverno
security
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kyverno-admission-controller-75748bcb9c-jdsbk
Started
Started container kyverno
security
default-scheduler
cert-manager-startupapicheck-jpttb
Scheduled
Successfully assigned security/cert-manager-startupapicheck-jpttb to ip-10-0-2-105.eu-west-3.compute.internal
security
job-controller
cert-manager-startupapicheck
SuccessfulCreate
Created pod: cert-manager-startupapicheck-jpttb
(x5)
observability
external-secrets
kube-prometheus-stack-grafana-admin
UpdateFailed
AccessDeniedException: User: arn:aws:sts::396740644681:assumed-role/xplane-external-secrets-mycluster-0/external-secrets-provider-aws is not authorized to perform: secretsmanager:GetSecretValue on resource: observability/kube-prometheus-stack/grafana-admin because no identity-based policy allows the secretsmanager:GetSecretValue action
status code: 400,
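The AccessDeniedException above is the flip side of the IAM resources still converging: the role assumed through IRSA exists, but no policy granting Secrets Manager reads is attached yet. A sketch of a Crossplane-managed policy carrying the read actions the External Secrets documentation lists (the metadata name and ARN scope are assumptions; scope the Resource tighter in practice):

```yaml
apiVersion: iam.aws.upbound.io/v1beta1
kind: Policy
metadata:
  name: external-secrets-secretsmanager-read   # hypothetical name
spec:
  forProvider:
    policy: |
      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": [
            "secretsmanager:GetResourcePolicy",
            "secretsmanager:GetSecretValue",
            "secretsmanager:DescribeSecret",
            "secretsmanager:ListSecretVersionIds"
          ],
          "Resource": "arn:aws:secretsmanager:eu-west-3:<ACCOUNT_ID>:secret:*"
        }]
      }
```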
Operation cannot be fulfilled on rolepolicyattachments.iam.aws.upbound.io "xplane-external-secrets-mycluster-0-d6vfd-7dt2m": the object has been modified; please apply your changes to the latest version and try again
Operation cannot be fulfilled on rolepolicyattachments.iam.aws.upbound.io "xplane-cert-manager-mycluster-0-5m9n4-zhbfz": the object has been modified; please apply your changes to the latest version and try again
infrastructure
cert-manager-orders
platform-tls-zdc77-3297273686
Created
Created Challenge resource "platform-tls-zdc77-3297273686-1589334387" for domain "cloud.ogenki.io"
infrastructure
cert-manager-challenges
platform-tls-zdc77-3297273686-1589334387
Started
Challenge scheduled for processing
infrastructure
cert-manager-challenges
platform-tls-zdc77-3297273686-1589334387
PresentError
Error presenting challenge: failed to determine Route 53 hosted zone ID: AccessDenied: User: arn:aws:sts::396740644681:assumed-role/xplane-cert-manager-mycluster-0/1694332762438111595 is not authorized to perform: route53:ListHostedZonesByName because no identity-based policy allows the route53:ListHostedZonesByName action
infrastructure
cert-manager-challenges
platform-tls-zdc77-3297273686-1589334387
PresentError
Error presenting challenge: failed to determine Route 53 hosted zone ID: AccessDenied: User: arn:aws:sts::396740644681:assumed-role/xplane-cert-manager-mycluster-0/1694332762865741699 is not authorized to perform: route53:ListHostedZonesByName because no identity-based policy allows the route53:ListHostedZonesByName action
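The PresentError events show the same failure mode for cert-manager: its DNS-01 solver assumes the xplane-cert-manager role but cannot yet list hosted zones. The policy the cert-manager documentation prescribes for Route 53, expressed as the same kind of managed resource (metadata name hypothetical):

```yaml
apiVersion: iam.aws.upbound.io/v1beta1
kind: Policy
metadata:
  name: cert-manager-dns01-route53   # hypothetical name
spec:
  forProvider:
    policy: |
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": "route53:GetChange",
            "Resource": "arn:aws:route53:::change/*"
          },
          {
            "Effect": "Allow",
            "Action": [
              "route53:ChangeResourceRecordSets",
              "route53:ListResourceRecordSets"
            ],
            "Resource": "arn:aws:route53:::hostedzone/*"
          },
          {
            "Effect": "Allow",
            "Action": "route53:ListHostedZonesByName",
            "Resource": "*"
          }
        ]
      }
```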
security
helm-controller
kyverno-policies
info
Helm install succeeded
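The PolicyViolation events that follow come from Kyverno's background scanner (source "kyverno-scan") evaluating workloads that already existed before the policies landed; host-networking components such as Cilium, kube-proxy and the node exporter inevitably fail the pod-security baseline. A rough, simplified sketch of one such policy in report-only mode, so violations surface as events rather than blocking admission (details approximated from the kyverno-policies chart):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-namespaces
spec:
  validationFailureAction: Audit   # report only; admission is not blocked
  background: true                 # scan existing resources -> the events below
  rules:
    - name: host-namespaces
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: >-
          Sharing the host namespaces is disallowed. The fields spec.hostNetwork,
          spec.hostIPC, and spec.hostPID must be unset or set to `false`.
        pattern:
          spec:
            =(hostPID): "false"
            =(hostIPC): "false"
            =(hostNetwork): "false"
```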
kube-system
kyverno-scan
cilium-envoy
PolicyViolation
policy restrict-apparmor-profiles/autogen-app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule autogen-app-armor failed at path /spec/template/metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
default
kyverno-scan
disallow-privileged-containers
PolicyViolation
DaemonSet kube-system/kube-proxy: [autogen-privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule autogen-privileged-containers failed at path /spec/template/spec/containers/0/securityContext/privileged/
default
kyverno-scan
disallow-capabilities
PolicyViolation
DaemonSet kube-system/cilium: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-scan
disallow-capabilities
PolicyViolation
DaemonSet kube-system/cilium-envoy: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
kube-system
kyverno-scan
cilium-envoy
PolicyViolation
policy disallow-host-path/autogen-host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
kube-system
kyverno-scan
cilium-envoy
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
kube-system
kyverno-scan
cilium-envoy
PolicyViolation
policy disallow-selinux/autogen-selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule autogen-selinux-type failed at path /spec/template/spec/containers/0/securityContext/seLinuxOptions/type/
kube-system
kyverno-scan
cilium-envoy
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
kube-system
kyverno-scan
cilium-envoy
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-scan
disallow-host-namespaces
PolicyViolation
DaemonSet kube-system/cilium: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default
kyverno-scan
disallow-host-namespaces
PolicyViolation
DaemonSet kube-system/cilium-envoy: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default
kyverno-scan
disallow-host-namespaces
PolicyViolation
DaemonSet kube-system/kube-proxy: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default
kyverno-scan
disallow-host-namespaces
PolicyViolation
DaemonSet observability/kube-prometheus-stack-prometheus-node-exporter: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default
kyverno-scan
disallow-host-path
PolicyViolation
DaemonSet kube-system/cilium: [autogen-host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/1/hostPath/
kube-system
kyverno-scan
cilium
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
kube-system
kyverno-scan
cilium
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
kube-system
kyverno-scan
cilium
PolicyViolation
policy disallow-host-path/autogen-host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/1/hostPath/
kube-system
kyverno-scan
cilium
PolicyViolation
policy restrict-apparmor-profiles/autogen-app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule autogen-app-armor failed at path /spec/template/metadata/annotations/container.apparmor.security.beta.kubernetes.io/clean-cilium-state/
default
kyverno-scan
disallow-host-path
PolicyViolation
DaemonSet kube-system/cilium-envoy: [autogen-host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
default
kyverno-scan
disallow-host-path
PolicyViolation
DaemonSet kube-system/kube-proxy: [autogen-host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
default
kyverno-scan
disallow-host-path
PolicyViolation
DaemonSet observability/kube-prometheus-stack-prometheus-node-exporter: [autogen-host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
kube-system
kyverno-scan
cilium
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
kube-system
kyverno-scan
cilium
PolicyViolation
policy disallow-privileged-containers/autogen-privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule autogen-privileged-containers failed at path /spec/template/spec/initContainers/3/securityContext/privileged/
kube-system
kyverno-scan
cilium
PolicyViolation
policy disallow-selinux/autogen-selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule autogen-selinux-type failed at path /spec/template/spec/initContainers/1/securityContext/seLinuxOptions/type/
kube-system
kyverno-scan
kube-proxy
PolicyViolation
policy disallow-host-path/autogen-host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
kube-system
kyverno-scan
kube-proxy
PolicyViolation
policy disallow-privileged-containers/autogen-privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule autogen-privileged-containers failed at path /spec/template/spec/containers/0/securityContext/privileged/
kube-system
kyverno-scan
kube-proxy
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default
kyverno-scan
disallow-host-ports
PolicyViolation
DaemonSet kube-system/cilium-envoy: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default
kyverno-scan
disallow-host-ports
PolicyViolation
DaemonSet kube-system/cilium: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default
kyverno-scan
disallow-host-ports
PolicyViolation
DaemonSet observability/kube-prometheus-stack-prometheus-node-exporter: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default
kyverno-scan
restrict-apparmor-profiles
PolicyViolation
DaemonSet kube-system/cilium: [autogen-app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule autogen-app-armor failed at path /spec/template/metadata/annotations/container.apparmor.security.beta.kubernetes.io/clean-cilium-state/
default
kyverno-scan
restrict-apparmor-profiles
PolicyViolation
DaemonSet kube-system/cilium-envoy: [autogen-app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule autogen-app-armor failed at path /spec/template/metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
default
kyverno-scan
disallow-selinux
PolicyViolation
DaemonSet kube-system/cilium: [autogen-selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule autogen-selinux-type failed at path /spec/template/spec/initContainers/1/securityContext/seLinuxOptions/type/
default
kyverno-scan
disallow-selinux
PolicyViolation
DaemonSet kube-system/cilium-envoy: [autogen-selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule autogen-selinux-type failed at path /spec/template/spec/containers/0/securityContext/seLinuxOptions/type/
observability
kyverno-scan
kube-prometheus-stack-prometheus-node-exporter
PolicyViolation
policy disallow-host-path/autogen-host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
observability
kyverno-scan
kube-prometheus-stack-prometheus-node-exporter
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default
kyverno-scan
disallow-privileged-containers
PolicyViolation
DaemonSet kube-system/cilium: [autogen-privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule autogen-privileged-containers failed at path /spec/template/spec/initContainers/3/securityContext/privileged/
(x2)
observability
external-secrets
kube-prometheus-stack-grafana-admin
Updated
Updated Secret
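Note the earlier UpdateFailed on this same target has cleared: once the IAM attachments converged, external-secrets could read observability/kube-prometheus-stack/grafana-admin and update the Secret. The ExternalSecret behind it plausibly looks like this; the refresh interval, secret key and property are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: kube-prometheus-stack-grafana-admin
  namespace: observability
spec:
  refreshInterval: 1h                # assumption
  secretStoreRef:
    kind: ClusterSecretStore
    name: clustersecretstore
  target:
    name: kube-prometheus-stack-grafana-admin
  data:
    - secretKey: admin-password      # assumption: key Grafana consumes
      remoteRef:
        key: observability/kube-prometheus-stack/grafana-admin
        property: password           # assumption
```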
observability
kyverno-scan
kube-prometheus-stack-prometheus-node-exporter
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
kube-system
kyverno-scan
cilium-operator
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default
kyverno-scan
disallow-host-namespaces
PolicyViolation
Deployment kube-system/cilium-operator: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default
kyverno-scan
disallow-privileged-containers
PolicyViolation
Pod kube-system/kube-proxy-vjrhj: [privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/containers/0/securityContext/privileged/
kube-system
kyverno-scan
kube-proxy-vjrhj
PolicyViolation
policy disallow-privileged-containers/privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/containers/0/securityContext/privileged/
default
kyverno-scan
disallow-host-namespaces
PolicyViolation
Pod observability/kube-prometheus-stack-prometheus-node-exporter-km8ht: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system
kyverno-scan
cilium-operator-779bf49976-qznq9
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system
kyverno-scan
kube-proxy-vjrhj
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default
kyverno-scan
disallow-host-ports
PolicyViolation
Pod observability/kube-prometheus-stack-prometheus-node-exporter-km8ht: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default
kyverno-scan
disallow-host-namespaces
PolicyViolation
Pod kube-system/kube-proxy-vjrhj: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default
kyverno-scan
disallow-host-namespaces
PolicyViolation
Pod kube-system/cilium-operator-779bf49976-qznq9: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system
kyverno-scan
kube-proxy-vjrhj
PolicyViolation
policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
default
kyverno-scan
disallow-host-path
PolicyViolation
Pod kube-system/kube-proxy-vjrhj: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
default
kyverno-scan
disallow-host-path
PolicyViolation
Pod observability/kube-prometheus-stack-prometheus-node-exporter-km8ht: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system
kyverno-scan
cilium-operator-779bf49976-lgq5h
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default
kyverno-scan
disallow-host-path
PolicyViolation
Pod observability/kube-prometheus-stack-prometheus-node-exporter-bq2dc: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
default
kyverno-scan
disallow-host-ports
PolicyViolation
Pod observability/kube-prometheus-stack-prometheus-node-exporter-bq2dc: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system
kyverno-scan
cilium-g94mr
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default
kyverno-scan
disallow-privileged-containers
PolicyViolation
Pod kube-system/cilium-zpjnm: [privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/initContainers/3/securityContext/privileged/
default
kyverno-scan
disallow-capabilities
PolicyViolation
Pod kube-system/cilium-g94mr: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-scan
disallow-capabilities
PolicyViolation
Pod kube-system/cilium-zpjnm: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-scan
disallow-host-path
PolicyViolation
Pod kube-system/cilium-g94mr: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/1/hostPath/
default
kyverno-scan
disallow-host-path
PolicyViolation
Pod kube-system/cilium-zpjnm: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/1/hostPath/
default
kyverno-scan
restrict-apparmor-profiles
PolicyViolation
Pod kube-system/cilium-g94mr: [app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-agent/
default
kyverno-scan
restrict-apparmor-profiles
PolicyViolation
Pod kube-system/cilium-zpjnm: [app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites/
kube-system
kyverno-scan
cilium-g94mr
PolicyViolation
policy disallow-privileged-containers/privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/initContainers/3/securityContext/privileged/
kube-system
kyverno-scan
cilium-g94mr
PolicyViolation
policy disallow-selinux/selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/initContainers/1/securityContext/seLinuxOptions/type/
default
kyverno-scan
disallow-privileged-containers
PolicyViolation
Pod kube-system/cilium-g94mr: [privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/initContainers/3/securityContext/privileged/
kube-system
kyverno-scan
cilium-g94mr
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
kube-system
kyverno-scan
cilium-g94mr
PolicyViolation
policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/1/hostPath/
kube-system
kyverno-scan
cilium-g94mr
PolicyViolation
policy restrict-apparmor-profiles/app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-agent/
default
kyverno-scan
disallow-selinux
PolicyViolation
Pod kube-system/cilium-zpjnm: [selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/initContainers/1/securityContext/seLinuxOptions/type/
default
kyverno-scan
disallow-selinux
PolicyViolation
Pod kube-system/cilium-g94mr: [selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/initContainers/1/securityContext/seLinuxOptions/type/
default
kyverno-scan
disallow-host-ports
PolicyViolation
Pod kube-system/cilium-zpjnm: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default
kyverno-scan
disallow-host-ports
PolicyViolation
Pod kube-system/cilium-g94mr: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
kube-system
kyverno-scan
cilium-g94mr
PolicyViolation
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
kube-system
kyverno-scan
cilium-zpjnm
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system
kyverno-scan
cilium-zpjnm
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
kube-system
kyverno-scan
cilium-zpjnm
PolicyViolation
policy disallow-privileged-containers/privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/initContainers/3/securityContext/privileged/
kube-system
kyverno-scan
cilium-zpjnm
PolicyViolation
policy disallow-selinux/selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/initContainers/1/securityContext/seLinuxOptions/type/
kube-system
kyverno-scan
cilium-zpjnm
PolicyViolation
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
kube-system
kyverno-scan
cilium-zpjnm
PolicyViolation
policy restrict-apparmor-profiles/app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites/
kube-system
kyverno-scan
cilium-zpjnm
PolicyViolation
policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/1/hostPath/
default
kyverno-scan
disallow-capabilities
PolicyViolation
Pod kube-system/cilium-envoy-pzhcc: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-scan
restrict-apparmor-profiles
PolicyViolation
Pod kube-system/cilium-envoy-pzhcc: [app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
default
kyverno-scan
disallow-privileged-containers
PolicyViolation
Pod kube-system/kube-proxy-7fcmm: [privileged-containers] fail; validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/containers/0/securityContext/privileged/
kube-system
kyverno-scan
cilium-envoy-pzhcc
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-scan
disallow-host-ports
PolicyViolation
Pod kube-system/cilium-envoy-pzhcc: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default
kyverno-scan
disallow-selinux
PolicyViolation
Pod kube-system/cilium-envoy-pzhcc: [selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/containers/0/securityContext/seLinuxOptions/type/
kube-system
kyverno-scan
cilium-envoy-pzhcc
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system
kyverno-scan
kube-proxy-7fcmm
PolicyViolation
policy disallow-privileged-containers/privileged-containers fail: validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. rule privileged-containers failed at path /spec/containers/0/securityContext/privileged/
kube-system
kyverno-scan
kube-proxy-7fcmm
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system
kyverno-scan
kube-proxy-7fcmm
PolicyViolation
policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system
kyverno-scan
cilium-envoy-pzhcc
PolicyViolation
policy disallow-selinux/selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/containers/0/securityContext/seLinuxOptions/type/
kube-system
kyverno-scan
cilium-envoy-pzhcc
PolicyViolation
policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system
kyverno-scan
cilium-envoy-pzhcc
PolicyViolation
policy restrict-apparmor-profiles/app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
kube-system
kyverno-scan
cilium-envoy-pzhcc
PolicyViolation
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
kube-system
kyverno-scan
cilium-envoy-2fzwf
PolicyViolation
policy restrict-apparmor-profiles/app-armor fail: validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
kube-system
kyverno-scan
cilium-envoy-2fzwf
PolicyViolation
policy disallow-selinux/selinux-type fail: validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/containers/0/securityContext/seLinuxOptions/type/
default
kyverno-scan
restrict-apparmor-profiles
PolicyViolation
Pod kube-system/cilium-envoy-2fzwf: [app-armor] fail; validation error: Specifying other AppArmor profiles is disallowed. The annotation `container.apparmor.security.beta.kubernetes.io` if defined must not be set to anything other than `runtime/default` or `localhost/*`. rule app-armor failed at path /metadata/annotations/container.apparmor.security.beta.kubernetes.io/cilium-envoy/
default
kyverno-scan
disallow-host-ports
PolicyViolation
Pod kube-system/cilium-envoy-2fzwf: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
kube-system
kyverno-scan
cilium-envoy-2fzwf
PolicyViolation
policy disallow-host-path/host-path fail: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
default
kyverno-scan
disallow-selinux
PolicyViolation
Pod kube-system/cilium-envoy-2fzwf: [selinux-type] fail; validation error: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). rule selinux-type failed at path /spec/containers/0/securityContext/seLinuxOptions/type/
kube-system
kyverno-scan
cilium-envoy-2fzwf
PolicyViolation
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort, spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
(x3)
default
kyverno-scan
disallow-host-path
PolicyViolation
(combined from similar events): Pod kube-system/cilium-envoy-2fzwf: [host-path] fail; validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule host-path failed at path /spec/volumes/0/hostPath/
kube-system
kyverno-scan
cilium-envoy-2fzwf
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
kube-system
kyverno-scan
cilium-envoy-2fzwf
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-scan
disallow-capabilities
PolicyViolation
Pod kube-system/cilium-envoy-2fzwf: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
(x2)
default
kyverno-scan
disallow-host-namespaces
PolicyViolation
ReplicaSet kube-system/cilium-operator-779bf49976: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
(x2)
kube-system
kyverno-scan
cilium-operator-779bf49976
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
flux-system
kustomize-controller
crossplane-configuration
ReconciliationSucceeded
Reconciliation finished in 257.640705ms, next run in 2m0s
Successfully pulled image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6" in 1.057504559s (1.057526539s including waiting)
observability
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
kube-prometheus-stack-admission-patch-4xf2q
Started
Started container patch
(x2)
default
kyverno-admission
disallow-selinux
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2)
default
kyverno-admission
restrict-seccomp
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2)
default
kyverno-admission
restrict-apparmor-profiles
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2)
default
kyverno-admission
disallow-privileged-containers
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
observability
job-controller
kube-prometheus-stack-admission-patch
Completed
Job completed
(x2)
default
kyverno-admission
disallow-host-path
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2)
default
kyverno-admission
disallow-capabilities
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2)
default
kyverno-admission
disallow-proc-mount
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2)
default
kyverno-admission
disallow-host-namespaces
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2)
default
kyverno-admission
restrict-sysctls
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2)
default
kyverno-admission
disallow-host-process
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
(x2)
default
kyverno-admission
disallow-host-ports
PolicyApplied
Pod observability/kube-prometheus-stack-admission-patch-4xf2q: pass
observability
helm-controller
kube-prometheus-stack
info
Helm install succeeded
flux-system
kustomize-controller
observability
Progressing
Health check passed in 5m10.067257319s
flux-system
kustomize-controller
observability
ReconciliationSucceeded
Reconciliation finished in 5m10.64161679s, next run in 3m0s
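The 5m10s health check is the kustomize-controller waiting for the reconciled objects to become Ready before declaring the Kustomization successful. A minimal sketch of a Kustomization with such an explicit health check on the HelmRelease; the path and source name are assumptions, while the 3m interval matches the event above:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: observability
  namespace: flux-system
spec:
  interval: 3m0s
  path: ./observability              # assumption
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system                # assumption
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: kube-prometheus-stack
      namespace: observability
```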
Not signing CertificateRequest until it is Approved
echo
cert-manager-certificaterequests-issuer-acme
echo-tls-74w58
WaitingForApproval
Not signing CertificateRequest until it is Approved
echo
cert-manager-certificaterequests-issuer-ca
echo-tls-74w58
WaitingForApproval
Not signing CertificateRequest until it is Approved
flux-system
kustomize-controller
apps
Progressing
Gateway/echo/echo created
Gateway/echo/echo-tls created
HTTPRoute/echo/echo-1 created
HTTPRoute/echo/split-echo created
HTTPRoute/echo/tls-echo-1 created
HelmRelease/echo/echo-1 created
HelmRelease/echo/echo-2 created
HelmRepository/flux-system/echo created
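The resource names above suggest split-echo spreads traffic across the two echo releases. A hypothetical HTTPRoute doing a 50/50 weighted split; the hostname, Service names and port are assumptions inferred from the surrounding events:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: split-echo
  namespace: echo
spec:
  parentRefs:
    - name: echo                     # the Gateway created above
  hostnames:
    - "echo.cloud.ogenki.io"         # assumption
  rules:
    - backendRefs:
        - name: echo-1-echo-server   # assumption: chart's Service name
          port: 80                   # assumption
          weight: 50
        - name: echo-2-echo-server
          port: 80
          weight: 50
```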
echo
cert-manager-certificates-request-manager
echo-tls
Requested
Created new CertificateRequest resource "echo-tls-74w58"
echo
cert-manager-certificates-key-manager
echo-tls
Generated
Stored new private key in temporary Secret resource "echo-tls-856ll"
echo
cert-manager-certificates-trigger
echo-tls
Issuing
Issuing certificate as Secret does not exist
echo
cert-manager-gateway-shim
echo-tls
CreateCertificate
Successfully created Certificate "echo-tls"
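cert-manager's gateway-shim creates that Certificate because the Gateway carries a cert-manager.io/cluster-issuer annotation and an HTTPS listener whose tls.certificateRefs names the target Secret. A sketch of what echo-tls plausibly looks like; the gatewayClassName and hostname are assumptions, the hostname matching the challenge domain further below:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: echo-tls
  namespace: echo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  gatewayClassName: cilium           # assumption: Cilium's Gateway API implementation
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "tls-echo-1.cloud.ogenki.io"   # matches the challenge domain below
      tls:
        mode: Terminate
        certificateRefs:
          - name: echo-tls           # Secret cert-manager populates
```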
flux-system
source-controller
echo-echo-2
NoSourceArtifact
no artifact available for HelmRepository source 'echo'
echo
cert-manager-certificaterequests-issuer-acme
echo-tls-74w58
OrderCreated
Created Order resource echo/echo-tls-74w58-3333634511
echo
cert-manager-certificaterequests-approver
echo-tls-74w58
cert-manager.io
Certificate request has been approved by cert-manager.io
flux-system
source-controller
echo
NewArtifact
stored fetched index of size 7.648kB from 'https://ealenn.github.io/charts'
flux-system
source-controller
echo-echo-1
NoSourceArtifact
no artifact available for HelmRepository source 'echo'
echo
cert-manager-orders
echo-tls-74w58-3333634511
Created
Created Challenge resource "echo-tls-74w58-3333634511-2939583069" for domain "tls-echo-1.cloud.ogenki.io"
flux-system
source-controller
echo-echo-2
ChartPullSucceeded
pulled 'echo-server' chart with version '0.5.0'
flux-system
source-controller
echo-echo-1
ChartPullSucceeded
pulled 'echo-server' chart with version '0.5.0'
echo
helm-controller
echo-1
info
Helm install has started
echo
cert-manager-challenges
echo-tls-74w58-3333634511-2939583069
Started
Challenge scheduled for processing
echo
helm-controller
echo-2
info
Helm install has started
default
kyverno-admission
disallow-host-namespaces
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
disallow-host-path
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
disallow-host-process
PolicyApplied
Deployment echo/echo-1-echo-server: pass
default
kyverno-admission
disallow-host-process
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
disallow-host-process
PolicyApplied
Deployment echo/echo-2-echo-server: pass
echo
deployment-controller
echo-2-echo-server
ScalingReplicaSet
Scaled up replica set echo-2-echo-server-b4cfd8458 to 2
echo
replicaset-controller
echo-2-echo-server-b4cfd8458
SuccessfulCreate
Created pod: echo-2-echo-server-b4cfd8458-cn9fv
echo
replicaset-controller
echo-2-echo-server-b4cfd8458
SuccessfulCreate
Created pod: echo-2-echo-server-b4cfd8458-zwz77
echo
default-scheduler
echo-1-echo-server-fd88497d-cbvnz
Scheduled
Successfully assigned echo/echo-1-echo-server-fd88497d-cbvnz to ip-10-0-2-105.eu-west-3.compute.internal
default
kyverno-admission
restrict-sysctls
PolicyApplied
Deployment echo/echo-1-echo-server: pass
default
kyverno-admission
restrict-sysctls
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
restrict-sysctls
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
disallow-host-ports
PolicyApplied
Deployment echo/echo-1-echo-server: pass
default
kyverno-admission
disallow-privileged-containers
PolicyApplied
Deployment echo/echo-2-echo-server: pass
default
kyverno-admission
disallow-privileged-containers
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
disallow-privileged-containers
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
disallow-privileged-containers
PolicyApplied
Deployment echo/echo-1-echo-server: pass
default
kyverno-admission
restrict-sysctls
PolicyApplied
Deployment echo/echo-2-echo-server: pass
default
kyverno-admission
disallow-capabilities
PolicyApplied
Deployment echo/echo-2-echo-server: pass
default
kyverno-admission
disallow-host-ports
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
disallow-host-ports
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
disallow-proc-mount
PolicyApplied
Deployment echo/echo-2-echo-server: pass
default
kyverno-admission
disallow-proc-mount
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
disallow-proc-mount
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
disallow-proc-mount
PolicyApplied
Deployment echo/echo-1-echo-server: pass
default
kyverno-admission
disallow-capabilities
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
restrict-seccomp
PolicyApplied
Deployment echo/echo-1-echo-server: pass
default
kyverno-admission
disallow-host-ports
PolicyApplied
Deployment echo/echo-2-echo-server: pass
echo
replicaset-controller
echo-1-echo-server-fd88497d
SuccessfulCreate
Created pod: echo-1-echo-server-fd88497d-cbvnz
default
kyverno-admission
disallow-selinux
PolicyApplied
Deployment echo/echo-2-echo-server: pass
default
kyverno-admission
disallow-selinux
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
restrict-seccomp
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
disallow-selinux
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
disallow-selinux
PolicyApplied
Deployment echo/echo-1-echo-server: pass
default
kyverno-admission
restrict-seccomp
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
restrict-seccomp
PolicyApplied
Deployment echo/echo-2-echo-server: pass
default
kyverno-admission
disallow-host-path
PolicyApplied
Deployment echo/echo-1-echo-server: pass
default
kyverno-admission
disallow-host-process
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
restrict-apparmor-profiles
PolicyApplied
Deployment echo/echo-2-echo-server: pass
echo
default-scheduler
echo-2-echo-server-b4cfd8458-zwz77
Scheduled
Successfully assigned echo/echo-2-echo-server-b4cfd8458-zwz77 to ip-10-0-2-105.eu-west-3.compute.internal
default
kyverno-admission
disallow-host-path
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
disallow-host-path
PolicyApplied
Deployment echo/echo-2-echo-server: pass
default
kyverno-admission
disallow-capabilities
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
disallow-capabilities
PolicyApplied
Deployment echo/echo-1-echo-server: pass
echo
default-scheduler
echo-2-echo-server-b4cfd8458-cn9fv
Scheduled
Successfully assigned echo/echo-2-echo-server-b4cfd8458-cn9fv to ip-10-0-3-24.eu-west-3.compute.internal
default
kyverno-admission
disallow-host-namespaces
PolicyApplied
Deployment echo/echo-1-echo-server: pass
echo
default-scheduler
echo-1-echo-server-fd88497d-xkvng
Scheduled
Successfully assigned echo/echo-1-echo-server-fd88497d-xkvng to ip-10-0-3-24.eu-west-3.compute.internal
default
kyverno-admission
disallow-host-namespaces
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
disallow-host-namespaces
PolicyApplied
Deployment echo/echo-2-echo-server: pass
echo
replicaset-controller
echo-1-echo-server-fd88497d
SuccessfulCreate
Created pod: echo-1-echo-server-fd88497d-xkvng
echo
deployment-controller
echo-1-echo-server
ScalingReplicaSet
Scaled up replica set echo-1-echo-server-fd88497d to 2
default
kyverno-admission
restrict-apparmor-profiles
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-zwz77: pass
default
kyverno-admission
restrict-apparmor-profiles
PolicyApplied
Pod echo/echo-2-echo-server-b4cfd8458-cn9fv: pass
default
kyverno-admission
restrict-apparmor-profiles
PolicyApplied
Deployment echo/echo-1-echo-server: pass
default
kyverno-admission
restrict-apparmor-profiles
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default
kyverno-admission
disallow-capabilities
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default
kyverno-admission
disallow-capabilities
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
restrict-apparmor-profiles
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
disallow-host-namespaces
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default
kyverno-admission
disallow-host-namespaces
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
disallow-selinux
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
disallow-selinux
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default
kyverno-admission
disallow-host-path
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default
kyverno-admission
disallow-host-path
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
disallow-proc-mount
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
restrict-seccomp
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default
kyverno-admission
restrict-seccomp
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
disallow-proc-mount
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default
kyverno-admission
disallow-privileged-containers
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
disallow-privileged-containers
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default
kyverno-admission
disallow-host-ports
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default
kyverno-admission
disallow-host-ports
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
disallow-host-process
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
restrict-sysctls
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
default
kyverno-admission
restrict-sysctls
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-xkvng: pass
default
kyverno-admission
disallow-host-process
PolicyApplied
Pod echo/echo-1-echo-server-fd88497d-cbvnz: pass
echo
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-2-echo-server-b4cfd8458-cn9fv
Pulling
Pulling image "ealen/echo-server:0.6.0"
echo
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-1-echo-server-fd88497d-cbvnz
Pulling
Pulling image "ealen/echo-server:0.6.0"
(x2)
echo
targetGroupBinding
k8s-echo-ciliumga-9f53e27422
SuccessfullyReconciled
Successfully reconciled
(x2)
echo
targetGroupBinding
k8s-echo-ciliumga-ccd104ecd3
SuccessfullyReconciled
Successfully reconciled
echo
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-2-echo-server-b4cfd8458-zwz77
Pulling
Pulling image "ealen/echo-server:0.6.0"
(x2)
echo
service
cilium-gateway-echo-tls
SuccessfullyReconciled
Successfully reconciled
(x2)
echo
service
cilium-gateway-echo
SuccessfullyReconciled
Successfully reconciled
echo
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-1-echo-server-fd88497d-xkvng
Pulling
Pulling image "ealen/echo-server:0.6.0"
echo
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-2-echo-server-b4cfd8458-cn9fv
Created
Created container echo-server
echo
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-2-echo-server-b4cfd8458-cn9fv
Pulled
Successfully pulled image "ealen/echo-server:0.6.0" in 3.478879623s (3.478887048s including waiting)
echo
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-1-echo-server-fd88497d-xkvng
Pulled
Successfully pulled image "ealen/echo-server:0.6.0" in 3.577084668s (3.577097812s including waiting)
echo
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-1-echo-server-fd88497d-xkvng
Created
Created container echo-server
echo
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-2-echo-server-b4cfd8458-cn9fv
Started
Started container echo-server
echo
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-2-echo-server-b4cfd8458-zwz77
Created
Created container echo-server
echo
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-1-echo-server-fd88497d-cbvnz
Started
Started container echo-server
echo
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-1-echo-server-fd88497d-cbvnz
Pulled
Successfully pulled image "ealen/echo-server:0.6.0" in 3.621449016s (3.621457258s including waiting)
echo
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-1-echo-server-fd88497d-cbvnz
Created
Created container echo-server
echo
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-2-echo-server-b4cfd8458-zwz77
Pulled
Successfully pulled image "ealen/echo-server:0.6.0" in 3.727546182s (3.727564998s including waiting)
echo
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-1-echo-server-fd88497d-xkvng
Started
Started container echo-server
echo
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-2-echo-server-b4cfd8458-zwz77
Started
Started container echo-server
echo
helm-controller
echo-1
info
Helm install succeeded
echo
helm-controller
echo-2
info
Helm install succeeded
(x16)
default
kyverno-admission
mutate-cilium-echo-gateway
PolicyApplied
Service echo/cilium-gateway-echo is successfully mutated
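
mutate-cilium-echo-gateway is a Kyverno mutate policy targeting the Service that Cilium generates for the Gateway. What it actually patches is not visible in the events; the targetGroupBinding activity above hints at AWS load-balancer annotations, so the sketch below is purely illustrative and both the match and the patched annotation are assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: mutate-cilium-echo-gateway
spec:
  rules:
    - name: annotate-gateway-service
      match:
        any:
          - resources:
              kinds:
                - Service
              names:
                - cilium-gateway-echo
              namespaces:
                - echo
      mutate:
        patchStrategicMerge:
          metadata:
            annotations:
              # assumed payload; the real patch is not shown in the events
              service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
```
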
flux-system
kustomize-controller
apps
Progressing
Health check passed in 15.050906862s
flux-system
kustomize-controller
apps
ReconciliationSucceeded
Reconciliation finished in 15.518982249s, next run in 4m0s
infrastructure
cert-manager-certificates-issuing
platform-tls
Issuing
The certificate has been successfully issued
infrastructure
cert-manager-certificaterequests-issuer-acme
platform-tls-zdc77
CertificateIssued
Certificate fetched from issuer successfully
infrastructure
cert-manager-orders
platform-tls-zdc77-3297273686
Complete
Order completed successfully
flux-system
kustomize-controller
crossplane-providers
ReconciliationSucceeded
Reconciliation finished in 168.021577ms, next run in 2m0s
echo
cert-manager-challenges
echo-tls-74w58-3333634511-2939583069
Presented
Presented challenge using DNS-01 challenge mechanism
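
The Presented and DomainVerified events show both certificates being solved with DNS-01, which implies the letsencrypt-prod ClusterIssuer carries a dns01 solver. A sketch under that assumption; only the issuer name and the DNS-01 mechanism are grounded in the events, and the Route53 block is a guess consistent with an eu-west-3 AWS cluster:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key   # assumed
    solvers:
      - dns01:
          route53:
            region: eu-west-3   # assumed DNS provider
```
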
(x2)
infrastructure
cert-manager-challenges
platform-tls-zdc77-3297273686-1589334387
Presented
Presented challenge using DNS-01 challenge mechanism
flux-system
kustomize-controller
crossplane-configuration
ReconciliationSucceeded
Reconciliation finished in 238.057823ms, next run in 2m0s
flux-system
kustomize-controller
namespaces
ReconciliationSucceeded
Reconciliation finished in 131.429198ms, next run in 4m0s
echo
cert-manager-challenges
echo-tls-74w58-3333634511-2939583069
DomainVerified
Domain "tls-echo-1.cloud.ogenki.io" verified with "DNS-01" validation
echo
cert-manager-certificaterequests-issuer-acme
echo-tls-74w58
CertificateIssued
Certificate fetched from issuer successfully
echo
cert-manager-orders
echo-tls-74w58-3333634511
Complete
Order completed successfully
echo
cert-manager-certificates-issuing
echo-tls
Issuing
The certificate has been successfully issued
(x16)
default
kyverno-admission
mutate-cilium-echo-tls-gateway
PolicyApplied
Service echo/cilium-gateway-echo-tls is successfully mutated
(x2)
infrastructure
cert-manager-challenges
platform-tls-zdc77-3297273686-1589334387
DomainVerified
Domain "cloud.ogenki.io" verified with "DNS-01" validation
flux-system
kustomize-controller
crds
ReconciliationSucceeded
Reconciliation finished in 1.921357168s, next run in 4m0s
flux-system
kustomize-controller
crossplane-controller
ReconciliationSucceeded
Reconciliation finished in 237.133177ms, next run in 4m0s
flux-system
kustomize-controller
observability
ReconciliationSucceeded
Reconciliation finished in 287.424694ms, next run in 3m0s
flux-system
kustomize-controller
flux-config
ReconciliationSucceeded
Reconciliation finished in 319.9312ms, next run in 4m0s
flux-system
kustomize-controller
crossplane-providers
ReconciliationSucceeded
Reconciliation finished in 135.756131ms, next run in 2m0s
default
kyverno-admission
disallow-selinux
PolicyApplied
Deployment flux-system/helm-controller: pass
default
kyverno-admission
restrict-sysctls
PolicyApplied
Deployment flux-system/helm-controller: pass
default
kyverno-admission
restrict-apparmor-profiles
PolicyApplied
Deployment flux-system/helm-controller: pass
default
kyverno-admission
disallow-host-path
PolicyApplied
Deployment flux-system/helm-controller: pass
(x3)
default
kyverno-admission
disallow-capabilities
PolicyApplied
(combined from similar events): Deployment flux-system/source-controller: pass
default
kyverno-admission
disallow-host-ports
PolicyApplied
Deployment flux-system/helm-controller: pass
flux-system
kustomize-controller
flux-system
ReconciliationSucceeded
Reconciliation finished in 2.094730559s, next run in 10m0s
flux-system
kustomize-controller
crossplane-configuration
ReconciliationSucceeded
Reconciliation finished in 239.392331ms, next run in 2m0s
flux-system
kustomize-controller
infrastructure
ReconciliationSucceeded
Reconciliation finished in 295.868744ms, next run in 4m0s
kube-system
kustomize-controller
crds-gateway-api
ReconciliationSucceeded
Reconciliation finished in 3.820978193s, next run in 10m0s
security
kustomize-controller
crds-external-secrets
ReconciliationSucceeded
Reconciliation finished in 5.384155929s, next run in 10m0s
flux-system
kustomize-controller
security
Progressing
ClusterIssuer/letsencrypt-prod configured
flux-system
kustomize-controller
security
ReconciliationSucceeded
Reconciliation finished in 3.415854919s, next run in 4m0s
security
kustomize-controller
crds-kyverno
ReconciliationSucceeded
Reconciliation finished in 6.982474732s, next run in 10m0s
default
kyverno-admission
disallow-host-ports
PolicyViolation
Deployment cilium-test/echo-same-node: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test
replicaset-controller
client2-646b88fb9b
SuccessfulCreate
Created pod: client2-646b88fb9b-xsb7z
cilium-test
default-scheduler
client-6b4b857d98-b2mts
Scheduled
Successfully assigned cilium-test/client-6b4b857d98-b2mts to ip-10-0-2-105.eu-west-3.compute.internal
default
kyverno-admission
disallow-capabilities
PolicyViolation
Deployment cilium-test/echo-same-node: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-admission
echo-same-node
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-admission
echo-same-node-775456cfcf-bqk4q
PolicyViolation
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-admission
echo-same-node
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-admission
client-6b4b857d98-b2mts
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
deployment-controller
echo-same-node
ScalingReplicaSet
Scaled up replica set echo-same-node-775456cfcf to 1
default
kyverno-admission
disallow-capabilities
PolicyViolation
Deployment cilium-test/client: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-admission
disallow-capabilities
PolicyViolation
Pod cilium-test/client-6b4b857d98-b2mts: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-admission
disallow-capabilities
PolicyViolation
Pod cilium-test/echo-same-node-775456cfcf-bqk4q: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
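
The two violation messages repeated throughout this block pin down exactly which fields the cilium-test workloads set. Reconstructed as a pod-spec fragment, with the field paths grounded in the rule paths from the messages; the concrete values are assumptions (the connectivity-test clients are known to add NET_RAW, and the port numbers are placeholders):

```yaml
spec:
  containers:
    - name: echo-same-node
      ports:
        - containerPort: 8080
          hostPort: 4000          # trips disallow-host-ports (host-ports-none): must be unset or 0
      securityContext:
        capabilities:
          add:
            - NET_RAW             # trips disallow-capabilities (adding-capabilities): not on the allowed list
```
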
cilium-test
replicaset-controller
client-6b4b857d98
SuccessfulCreate
Created pod: client-6b4b857d98-b2mts
cilium-test
kyverno-admission
echo-same-node-775456cfcf-bqk4q
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
default-scheduler
echo-same-node-775456cfcf-bqk4q
FailedScheduling
0/2 nodes are available: 2 node(s) didn't match pod affinity rules. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
default
kyverno-admission
disallow-capabilities
PolicyViolation
Deployment cilium-test/client2: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
deployment-controller
echo-other-node
ScalingReplicaSet
Scaled up replica set echo-other-node-8b4df78df to 1
default
kyverno-admission
disallow-host-ports
PolicyViolation
Pod cilium-test/echo-same-node-775456cfcf-bqk4q: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default
kyverno-admission
disallow-capabilities
PolicyViolation
Pod cilium-test/client2-646b88fb9b-xsb7z: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
deployment-controller
client
ScalingReplicaSet
Scaled up replica set client-6b4b857d98 to 1
cilium-test
default-scheduler
echo-other-node-8b4df78df-bf5sk
Scheduled
Successfully assigned cilium-test/echo-other-node-8b4df78df-bf5sk to ip-10-0-3-24.eu-west-3.compute.internal
cilium-test
replicaset-controller
echo-other-node-8b4df78df
SuccessfulCreate
Created pod: echo-other-node-8b4df78df-bf5sk
cilium-test
kyverno-admission
client2
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
deployment-controller
client2
ScalingReplicaSet
Scaled up replica set client2-646b88fb9b to 1
cilium-test
replicaset-controller
echo-same-node-775456cfcf
SuccessfulCreate
Created pod: echo-same-node-775456cfcf-bqk4q
cilium-test
kyverno-admission
client
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
default-scheduler
client2-646b88fb9b-xsb7z
Scheduled
Successfully assigned cilium-test/client2-646b88fb9b-xsb7z to ip-10-0-2-105.eu-west-3.compute.internal
cilium-test
kyverno-admission
client2-646b88fb9b-xsb7z
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-admission
echo-other-node-8b4df78df-bf5sk
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-admission
echo-external-node
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-admission
host-netns-6fhsl
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-admission
disallow-host-namespaces
PolicyViolation
DaemonSet cilium-test/host-netns: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test
daemonset-controller
host-netns
SuccessfulCreate
Created pod: host-netns-6fhsl
cilium-test
kyverno-admission
host-netns
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test
kyverno-admission
echo-external-node-545d98c9b4-br427
PolicyViolation
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-admission
echo-external-node-545d98c9b4-br427
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-admission
echo-other-node-8b4df78df-bf5sk
PolicyViolation
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-admission
host-netns
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-admission
echo-external-node-545d98c9b4-br427
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default
kyverno-admission
disallow-capabilities
PolicyViolation
DaemonSet cilium-test/host-netns-non-cilium: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-admission
host-netns-non-cilium
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test
kyverno-admission
host-netns-non-cilium
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-admission
disallow-capabilities
PolicyViolation
Deployment cilium-test/echo-other-node: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
default-scheduler
host-netns-6fhsl
Scheduled
Successfully assigned cilium-test/host-netns-6fhsl to ip-10-0-2-105.eu-west-3.compute.internal
default
kyverno-admission
disallow-capabilities
PolicyViolation
DaemonSet cilium-test/host-netns: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
default-scheduler
host-netns-8wmb8
Scheduled
Successfully assigned cilium-test/host-netns-8wmb8 to ip-10-0-3-24.eu-west-3.compute.internal
cilium-test
kyverno-admission
host-netns-8wmb8
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default
kyverno-admission
disallow-host-ports
PolicyViolation
Deployment cilium-test/echo-other-node: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default
kyverno-admission
disallow-host-ports
PolicyViolation
Pod cilium-test/echo-other-node-8b4df78df-bf5sk: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-admission
echo-external-node
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
default
kyverno-admission
disallow-host-ports
PolicyViolation
Deployment cilium-test/echo-external-node: [autogen-host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test
daemonset-controller
host-netns
SuccessfulCreate
Created pod: host-netns-8wmb8
cilium-test
kyverno-admission
host-netns-8wmb8
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-admission
echo-other-node
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-admission
echo-other-node
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test
deployment-controller
echo-external-node
ScalingReplicaSet
Scaled up replica set echo-external-node-545d98c9b4 to 1
default
kyverno-admission
disallow-host-namespaces
PolicyViolation
Deployment cilium-test/echo-external-node: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
default
kyverno-admission
disallow-host-namespaces
PolicyViolation
Pod cilium-test/host-netns-6fhsl: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default
kyverno-admission
disallow-host-namespaces
PolicyViolation
Pod cilium-test/host-netns-8wmb8: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
default
kyverno-admission
disallow-host-namespaces
PolicyViolation
DaemonSet cilium-test/host-netns-non-cilium: [autogen-host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
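
Similarly, the host-netns DaemonSets trip disallow-host-namespaces through a single field in the pod template; the rule path /spec/template/spec/hostNetwork/ corresponds to:

```yaml
spec:
  template:
    spec:
      hostNetwork: true   # must be unset or false to satisfy disallow-host-namespaces
```
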
Successfully pulled image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751" in 1.819723699s (1.819741929s including waiting)
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
host-netns-6fhsl
Started
Started container host-netns
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
client-6b4b857d98-b2mts
Pulled
Successfully pulled image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751" in 1.850844644s (1.850852541s including waiting)
cilium-test
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
host-netns-8wmb8
Created
Created container host-netns
cilium-test
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
host-netns-8wmb8
Pulled
Successfully pulled image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751" in 1.535634685s (1.535645346s including waiting)
cilium-test
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
host-netns-8wmb8
Started
Started container host-netns
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
client-6b4b857d98-b2mts
Started
Started container client
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
host-netns-6fhsl
Created
Created container host-netns
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
client-6b4b857d98-b2mts
Created
Created container client
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
client2-646b88fb9b-xsb7z
Pulled
Successfully pulled image "quay.io/cilium/alpine-curl:v1.7.0@sha256:ccd0ed9da1752bab88a807647ad3cec65d460d281ab88988b60d70148783e751" in 1.455473117s (1.455484689s including waiting)
flux-system
kustomize-controller
apps
ReconciliationSucceeded
Reconciliation finished in 759.52659ms, next run in 4m0s
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
client2-646b88fb9b-xsb7z
Started
Started container client2
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
client2-646b88fb9b-xsb7z
Created
Created container client2
(x4)
default
kyverno-admission
disallow-capabilities
PolicyViolation
(combined from similar events): Deployment cilium-test/echo-external-node: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-other-node-8b4df78df-bf5sk
Created
Created container echo-other-node
cilium-test
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-other-node-8b4df78df-bf5sk
Pulled
Successfully pulled image "quay.io/cilium/json-mock:v1.3.5@sha256:d5dfd0044540cbe01ad6a1932cfb1913587f93cac4f145471ca04777f26342a4" in 5.793163766s (5.793176877s including waiting)
Successfully pulled image "quay.io/cilium/json-mock:v1.3.5@sha256:d5dfd0044540cbe01ad6a1932cfb1913587f93cac4f145471ca04777f26342a4" in 6.425719286s (6.425734139s including waiting)
Successfully pulled image "docker.io/coredns/coredns:1.11.1@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1" in 2.509680641s (2.509691453s including waiting)
cilium-test
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-other-node-8b4df78df-bf5sk
Created
Created container dns-test-server
cilium-test
kubelet
ip-10-0-3-24.eu-west-3.compute.internal
echo-other-node-8b4df78df-bf5sk
Started
Started container dns-test-server
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-same-node-775456cfcf-bqk4q
Pulled
Successfully pulled image "docker.io/coredns/coredns:1.11.1@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1" in 2.278273475s (2.278292404s including waiting)
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-same-node-775456cfcf-bqk4q
Created
Created container dns-test-server
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-same-node-775456cfcf-bqk4q
Started
Started container dns-test-server
flux-system
kustomize-controller
crossplane-providers
ReconciliationSucceeded
Reconciliation finished in 268.292463ms, next run in 2m0s
cilium-test
kyverno-scan
client
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-same-node-775456cfcf
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-scan
echo-same-node
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-same-node-775456cfcf-bqk4q
PolicyViolation
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
default
kyverno-scan
disallow-capabilities
PolicyViolation
ReplicaSet cilium-test/echo-same-node-775456cfcf: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-scan
disallow-capabilities
PolicyViolation
Deployment cilium-test/echo-same-node: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
default
kyverno-scan
disallow-capabilities
PolicyViolation
Deployment cilium-test/client: [autogen-adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-same-node-775456cfcf
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-same-node
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-scan
echo-other-node
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-scan
client-6b4b857d98
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
client2-646b88fb9b-xsb7z
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
client-6b4b857d98-b2mts
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-same-node-775456cfcf-bqk4q
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
client2-646b88fb9b
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-other-node-8b4df78df
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-scan
client2
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-other-node
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
host-netns-6fhsl
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
host-netns
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-external-node
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
host-netns-non-cilium
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test
kyverno-scan
echo-external-node
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test
kyverno-scan
echo-external-node-545d98c9b4
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
host-netns
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test
kyverno-scan
host-netns-6fhsl
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
cilium-test
kyverno-scan
echo-external-node
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-scan
host-netns-non-cilium
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
host-netns-8wmb8
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-other-node-8b4df78df
PolicyViolation
policy disallow-capabilities/autogen-adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
host-netns-8wmb8
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
cilium-test
kyverno-scan
echo-other-node-8b4df78df-bf5sk
PolicyViolation
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-scan
echo-external-node-545d98c9b4
PolicyViolation
policy disallow-host-namespaces/autogen-host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule autogen-host-namespaces failed at path /spec/template/spec/hostNetwork/
cilium-test
kyverno-scan
echo-other-node-8b4df78df-bf5sk
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-external-node-545d98c9b4-br427
PolicyViolation
policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
cilium-test
kyverno-scan
echo-external-node-545d98c9b4
PolicyViolation
policy disallow-host-ports/autogen-host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule autogen-host-ports-none failed at path /spec/template/spec/containers/0/ports/0/hostPort/
cilium-test
kyverno-scan
echo-external-node-545d98c9b4-br427
PolicyViolation
policy disallow-capabilities/adding-capabilities fail: Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
cilium-test
kyverno-scan
echo-external-node-545d98c9b4-br427
PolicyViolation
policy disallow-host-ports/host-ports-none fail: validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
(x16)
default
kyverno-scan
disallow-capabilities
PolicyViolation
(combined from similar events): Pod cilium-test/echo-external-node-545d98c9b4-br427: [adding-capabilities] fail; Any capabilities added beyond the allowed list (AUDIT_WRITE, CHOWN, DAC_OVERRIDE, FOWNER, FSETID, KILL, MKNOD, NET_BIND_SERVICE, SETFCAP, SETGID, SETPCAP, SETUID, SYS_CHROOT) are disallowed.
(x14)
default
kyverno-scan
disallow-host-namespaces
PolicyViolation
(combined from similar events): Pod cilium-test/echo-external-node-545d98c9b4-br427: [host-namespaces] fail; validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostNetwork/
(x9)
default
kyverno-scan
disallow-host-ports
PolicyViolation
(combined from similar events): Pod cilium-test/echo-external-node-545d98c9b4-br427: [host-ports-none] fail; validation error: Use of host ports is disallowed. The fields spec.containers[*].ports[*].hostPort , spec.initContainers[*].ports[*].hostPort, and spec.ephemeralContainers[*].ports[*].hostPort must either be unset or set to `0`. rule host-ports-none failed at path /spec/containers/0/ports/0/hostPort/
(x2)
kube-system
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
cilium-g94mr
Created
Created container cilium-agent
(x2)
kube-system
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
cilium-g94mr
Pulled
Container image "quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72" already present on machine
flux-system
kustomize-controller
crossplane-configuration
ReconciliationSucceeded
Reconciliation finished in 249.766089ms, next run in 2m0s
flux-system
kustomize-controller
namespaces
ReconciliationSucceeded
Reconciliation finished in 221.235539ms, next run in 4m0s
flux-system
kustomize-controller
observability
ReconciliationSucceeded
Reconciliation finished in 301.032977ms, next run in 3m0s
flux-system
kustomize-controller
crds
ReconciliationSucceeded
Reconciliation finished in 1.428396109s, next run in 4m0s
flux-system
kustomize-controller
crossplane-controller
ReconciliationSucceeded
Reconciliation finished in 232.227181ms, next run in 4m0s
flux-system
kustomize-controller
flux-config
ReconciliationSucceeded
Reconciliation finished in 268.25912ms, next run in 4m0s
flux-system
kustomize-controller
crossplane-providers
ReconciliationSucceeded
Reconciliation finished in 160.499019ms, next run in 2m0s
flux-system
kustomize-controller
crossplane-configuration
ReconciliationSucceeded
Reconciliation finished in 213.997982ms, next run in 2m0s
flux-system
kustomize-controller
infrastructure
ReconciliationSucceeded
Reconciliation finished in 311.989865ms, next run in 4m0s
flux-system
kustomize-controller
security
ReconciliationSucceeded
Reconciliation finished in 2.266441722s, next run in 4m0s
flux-system
kustomize-controller
observability
ReconciliationSucceeded
Reconciliation finished in 311.149143ms, next run in 3m0s
flux-system
kustomize-controller
apps
ReconciliationSucceeded
Reconciliation finished in 355.582079ms, next run in 4m0s
flux-system
kustomize-controller
crossplane-providers
ReconciliationSucceeded
Reconciliation finished in 179.476402ms, next run in 2m0s
flux-system
kustomize-controller
crossplane-configuration
ReconciliationSucceeded
Reconciliation finished in 207.187229ms, next run in 2m0s
flux-system
kustomize-controller
namespaces
ReconciliationSucceeded
Reconciliation finished in 147.523049ms, next run in 4m0s
security
job-controller
kyverno-cleanup-cluster-admission-reports-28238900
SuccessfulCreate
Created pod: kyverno-cleanup-cluster-admission-reports-28238900-tf6vw
security
job-controller
kyverno-cleanup-admission-reports-28238900
SuccessfulCreate
Created pod: kyverno-cleanup-admission-reports-28238900-q76br
(x4)
cilium-test
karpenter
echo-external-node-545d98c9b4-br427
FailedScheduling
Failed to schedule pod, incompatible with provisioner "default", daemonset overhead={"cpu":"600m","memory":"556Mi","pods":"5"}, incompatible requirements, label "cilium.io/no-schedule" does not have known values
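
Karpenter declines to provision capacity for the echo-external-node pod because the pod selects on a label, cilium.io/no-schedule, whose possible values the "default" provisioner has never declared. Under the v1alpha5 API this message format belongs to, the usual remedy is to declare the label in the Provisioner's requirements; a sketch, with the value assumed:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    # Declare the label so Karpenter treats its values as known; pods selecting
    # cilium.io/no-schedule=true can then be matched to provisioned nodes.
    - key: cilium.io/no-schedule
      operator: In
      values:
        - "true"
```

In this connectivity-test context, though, the pod deliberately targets a node outside Cilium's management, so leaving the provisioner untouched and letting the pod stay Pending is equally defensible.
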
security
cronjob-controller
kyverno-cleanup-cluster-admission-reports
SuccessfulCreate
Created job kyverno-cleanup-cluster-admission-reports-28238900
Reconciliation finished in 325.035365ms, next run in 4m0s
(x4)
cilium-test
default-scheduler
echo-external-node-545d98c9b4-br427
FailedScheduling
0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
flux-system
kustomize-controller
security
ReconciliationSucceeded
Reconciliation finished in 3.387293705s, next run in 4m0s
(x3)
flux-system
source-controller
echo-echo-1
ArtifactUpToDate
artifact up-to-date with remote revision: '0.5.0'
flux-system
kustomize-controller
observability
ReconciliationSucceeded
Reconciliation finished in 248.360298ms, next run in 3m0s
(x3)
flux-system
source-controller
echo-echo-2
ArtifactUpToDate
artifact up-to-date with remote revision: '0.5.0'
flux-system
kustomize-controller
apps
ReconciliationSucceeded
Reconciliation finished in 451.446244ms, next run in 4m0s
cilium-test
kubelet
ip-10-0-2-105.eu-west-3.compute.internal
echo-same-node-775456cfcf-bqk4q
Unhealthy
Readiness probe failed: Get "http://10.0.5.111:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
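
The failing readiness probe targets :8181/ready, the CoreDNS ready-plugin endpoint served by the dns-test-server container. In effect the kubelet is running a probe like the following sketch (the timeout value is assumed; one second is the kubelet default):

```yaml
readinessProbe:
  httpGet:
    path: /ready
    port: 8181          # CoreDNS "ready" plugin port
  timeoutSeconds: 1     # "context deadline exceeded ... Client.Timeout" means this deadline elapsed
```

A single timeout like this while the pod is starting is usually transient; the probe tends to succeed on a later attempt once CoreDNS finishes loading.
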