Run cilium/little-vm-helper@c3dbeb9d505b31aa5e960ebb258f4dd5f96f0202
with:
provision: false
cmd: cd /host/
./contrib/scripts/kind.sh "" 3 "" "" "iptables"
./cilium-cli install --wait --chart-directory=./install/kubernetes/cilium --helm-set=image.repository=quay.io/cilium/cilium-ci --helm-set=image.useDigest=false --helm-set=image.tag=5008dbc15f58ad1765edb8a6a47b24812b7b004a --helm-set=operator.image.repository=quay.io/cilium/operator --helm-set=operator.image.suffix=-ci --helm-set=operator.image.tag=5008dbc15f58ad1765edb8a6a47b24812b7b004a --helm-set=operator.image.useDigest=false --helm-set=hubble.relay.image.repository=quay.io/cilium/hubble-relay-ci --helm-set=hubble.relay.image.tag=5008dbc15f58ad1765edb8a6a47b24812b7b004a --rollback=false --config monitor-aggregation=none --nodes-without-cilium=kind-worker3 --helm-set-string=kubeProxyReplacement=disabled --helm-set=bpf.masquerade=false --helm-set-string=tunnel=vxlan \
--encryption=ipsec --node-encryption=false"
./cilium-cli status --wait
./cilium-cli connectivity test --datapath --collect-sysdump-on-failure \
--sysdump-output-filename "cilium-sysdump-1-<ts>"
./cilium-cli connectivity test --collect-sysdump-on-failure \
--sysdump-output-filename "cilium-sysdump-1-<ts>"
image: kind
image-version: 5.10-main
lvh-version: v0.0.3
ssh-port: 2222
install-dependencies: false
serial-port: 0
cpu: 8
mem: 6G
cpu-kind: host
env:
check_url: https://github.com/cilium/cilium/actions/runs/3998584732
Run ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost << EOF
Pseudo-terminal will not be allocated because stdin is not a terminal.
Creating cluster "kind" ...
• Ensuring node image (kindest/node:v1.24.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
• Preparing nodes 📦 📦 📦 📦 ...
✓ Preparing nodes 📦 📦 📦 📦
• Writing configuration 📜 ...
✓ Writing configuration 📜
• Starting control-plane 🕹️ ...
✓ Starting control-plane 🕹️
• Installing StorageClass 💾 ...
✓ Installing StorageClass 💾
• Joining worker nodes 🚜 ...
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
Error response from daemon: endpoint with name kind-registry already exists in network kind
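The `kind-registry already exists in network kind` error is typically leftover state from a previous run whose registry container was never detached; the script continues past it here. A hedged mitigation, assuming a local Docker setup where a stale `kind-registry` container is still attached to the `kind` network (this cleanup step is not part of the original workflow):

```shell
# Detach a leftover kind-registry endpoint from the kind network so a
# fresh run can reconnect it; ignore errors if it is already detached
# or Docker is unavailable. (Assumed cleanup, not the CI's own step.)
docker network disconnect kind kind-registry 2>/dev/null || true
```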
node/kind-worker annotated
node/kind-control-plane annotated
node/kind-worker2 annotated
node/kind-worker3 annotated
node/kind-control-plane untainted
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
node/kind-control-plane untainted
taint "node-role.kubernetes.io/control-plane" not found
taint "node-role.kubernetes.io/control-plane" not found
taint "node-role.kubernetes.io/control-plane" not found
Images are pushed into the kind registry like so:
export DOCKER_REGISTRY=localhost:5000
make dev-docker-image
-bash: line 9: unexpected EOF while looking for matching `"'
Error: Process completed with exit code 2.
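The `unexpected EOF while looking for matching `"'` suggests the script fed to the remote shell over the ssh heredoc contains an unbalanced double quote; a likely candidate is the stray trailing `"` after `--node-encryption=false` in the install command above, which has no matching opening quote. A minimal sketch of the same failure mode (a repro of the quoting error, not the CI command itself):

```shell
# An unbalanced double quote makes bash scan to the end of input for
# the closing quote, then fail with the same "unexpected EOF" message
# and a nonzero exit status, as seen in the step output.
bash -c 'echo broken"' 2>&1 || true
```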
joestringer added the area/CI (Continuous Integration testing issue or flake) and ci/flake (This is a known failure that occurs in the tree. Please investigate me!) labels on Jan 24, 2023.
Failure link:
https://github.com/cilium/cilium/actions/runs/3998584732