[run_kubernetes_tests.sh:138] INFO: Test actions that must be successful
1..1
ok 1 Test unencrypted confidential container launch success and verify that we are running in a secure enclave. # skip Test not supported for clh.
1..1
ok 1 Running with postStart and preStop handlers
1..1
ok 1 Check capabilities of pod
1..1
ok 1 ConfigMap for a pod
1..2
ok 1 Copy file in a pod
ok 2 Copy from pod to host
1..1
ok 1 Check CPU constraints
1..1
ok 1 Credentials using secrets
1..1
ok 1 Check custom dns
1..2
ok 1 Empty dir volumes
ok 2 Empty dir volume when FSGroup is specified with non-root container
1..1
ok 1 Environment variables
1..1
ok 1 Kubectl exec
1..1
ok 1 Test readonly volume for pods
1..1
not ok 1 configmap update works, and preserves symlinks
# (in test file k8s-inotify.bats, line 25)
# `kubectl wait --for=condition=Ready --timeout=$timeout pod "$pod_name"' failed
...
# Events:
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Normal Scheduled 91s default-scheduler Successfully assigned kata-containers-k8s-tests/inotify-configmap-testing to aks-nodepool1-32907564-vmss000000
# Normal Pulling 82s kubelet Pulling image "quay.io/kata-containers/fsnotify:latest"
# Normal Pulled 79s kubelet Successfully pulled image "quay.io/kata-containers/fsnotify:latest" in 3.391115011s (3.391140111s including waiting)
# Normal Created 79s kubelet Created container c1
# Warning Failed 72s kubelet Error: failed to create containerd task: failed to create shim task: error: Failed to resize memory from 2147483648 to 3221225472: error: Put "http://localhost/api/v1/vm.resize": context deadline exceeded reason: reason: : unknown
Avoid some of the memory pressure during K8s tests by waiting in
"kubectl delete" before starting the next tests.
Fixes: kata-containers#8769
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
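The commit message above describes making "kubectl delete" wait before the next test starts. A minimal sketch of that idea is below; the `teardown_pod` helper and the `KUBECTL` dry-run indirection are hypothetical names invented for illustration (not from the actual patch), while `--wait` and `--timeout` are real `kubectl delete` flags.

```shell
#!/bin/sh
# Hypothetical sketch: block in "kubectl delete" until the pod is actually
# gone, so the next test does not start while the previous pod still holds
# guest memory on the node.
# KUBECTL defaults to a dry-run echo here; set KUBECTL=kubectl to run for real.
KUBECTL="${KUBECTL:-echo kubectl}"

teardown_pod() {
    # --wait=true blocks until the pod object is removed from the API server;
    # --timeout bounds how long we are willing to wait for the deletion.
    $KUBECTL delete pod "$1" --wait=true --timeout=90s
}

teardown_pod inotify-configmap-testing
# prints: kubectl delete pod inotify-configmap-testing --wait=true --timeout=90s
```

In a bats teardown, a blocking delete like this serializes cleanup between tests instead of letting deletions race with the next pod launch.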
danmihai1 added a commit to microsoft/kata-containers that referenced this issue on Jan 5, 2024
Log the list of the current pods between tests because these pods
might be related to cluster nodes occasionally running out of memory.
Fixes: kata-containers#8769
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
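The commit above logs the pods that exist between tests to help correlate out-of-memory failures with leftover pods. A hedged sketch of such a helper is below; `log_pods` and the `KUBECTL` dry-run indirection are invented for illustration, while `kubectl get pods --all-namespaces -o wide` is a real command.

```shell
#!/bin/sh
# Hypothetical sketch: dump the current pod list between tests, so that a
# later out-of-memory failure can be matched against pods that were still
# lingering on the node. Dry-run echo by default; set KUBECTL=kubectl to run.
KUBECTL="${KUBECTL:-echo kubectl}"

log_pods() {
    # Timestamp each snapshot so it can be lined up with kubelet event times.
    echo "# Pods at $(date -u +%H:%M:%S):"
    $KUBECTL get pods --all-namespaces -o wide
}

log_pods
```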
I have seen these tests failing every once in a while, apparently running out of memory. Most recently in https://github.com/kata-containers/kata-containers/actions/runs/7402656467/job/20145920312?pr=8768 👍