k8s: guest-pull: "Test readonly volume for pods" fails #9667
Revert code logic in 462051b

Let me explain why: in our previous approach, we implemented guest pull by passing PullImageRequest to the guest. However, this method lost specifications essential for running the container, such as commands specified in the YAML, during the CreateContainer stage. To address this, the OCI specification and process information from the image's configuration must be merged into the container during guest pull. The snapshotter method does not have this issue.

Nevertheless, a problem arises when two containers in the same pod pull the same image, as with an InitContainer. The image service looks up the existing configuration, which resides in the guest. The configuration, keyed by <image name, cid>, is stored in the directory /run/kata-containers/<cid>. Consequently, when the InitContainer finishes its task and terminates, that directory ceases to exist. As a result, when the application container is created, the OCI spec and process information cannot be merged because the expected configuration file is absent.

Fix tests:
- k8s-credentials-secrets.bats
- k8s-file-volume.bats
- k8s-nested-configmap-secret.bats
- k8s-projected-volume.bats
- k8s-volume.bats
- k8s-shared-volume.bats
- k8s-kill-all-process-in-container.bats
- k8s-sysctls.bats

Fixes: kata-containers#9664
Fixes: kata-containers#9666
Fixes: kata-containers#9667
Fixes: kata-containers#9668

Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
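The lifetime issue described in that commit message can be sketched in shell. This is a hypothetical illustration, not agent code: the directory layout mirrors /run/kata-containers/<cid>, and the helper names are made up.

```shell
#!/bin/bash
# Sketch of the <image name, cid> config lifetime problem (hypothetical paths).
root=$(mktemp -d)

store_config() {        # what happens when a container pulls an image
    cid="$1"; image="$2"
    mkdir -p "$root/$cid"
    echo "$image" > "$root/$cid/config.json"
}

cleanup_container() {   # runs when a container (e.g. an InitContainer) exits
    rm -rf "$root/${1:?}"
}

store_config init-cid busybox
cleanup_container init-cid   # InitContainer finishes; its directory is gone

# The application container pulling the same image now fails to find
# the stored configuration, so the OCI spec merge cannot happen:
if [ ! -e "$root/init-cid/config.json" ]; then
    echo "config missing"
fi
```

The fix reverted in 462051b avoided keying the configuration to a directory that disappears with the container that created it.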
Refactor code in guest pull

Fix tests:
- k8s-credentials-secrets.bats
- k8s-file-volume.bats
- k8s-nested-configmap-secret.bats
- k8s-projected-volume.bats
- k8s-volume.bats
- k8s-shared-volume.bats
- k8s-sysctls.bats
- k8s-inotify.bats
- k8s-liveness-probes.bats

Fixes: kata-containers#9665
Fixes: kata-containers#9666
Fixes: kata-containers#9667
Fixes: kata-containers#9668

Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>

This test fails with the qemu-coco-dev configuration and guest-pull image pull. Issue: kata-containers#9667

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
I prepared an image (I checked it really had
Next, I'm going to collect more information from a couple of tests, as @fidencio asked earlier today. The script I used:

#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail
export AZ_RG="wmoschet-ci-tests"
export AKS_NAME="wmoschet-aks-ci-test"
export DOCKER_REGISTRY=quay.io
export DOCKER_REPO=wainersm/kata-deploy
export DOCKER_TAG=kata-containers-b30d085271fd333b2925a14646130051a95fad42-amd64
export KATA_HOST_OS=ubuntu
export KATA_HYPERVISOR=qemu-coco-dev
export KUBERNETES=vanilla
export KBS=true
export KBS_INGRESS=aks
export PULL_TYPE=guest-pull
export SNAPSHOTTER=nydus
export USING_NFD="false"
export K8S_TEST_UNION="k8s-file-volume.bats k8s-credentials-secrets.bats k8s-inotify.bats k8s-nested-configmap-secret.bats k8s-projected-volume.bats k8s-volume.bats"
prep_env() {
    echo "== CREATE CLUSTER =="
    ./gha-run.sh create-cluster
    echo "== GET CREDENTIALS =="
    ./gha-run.sh get-cluster-credentials
    echo "== DEPLOY SNAPSHOTTER =="
    ./gha-run.sh deploy-snapshotter
    echo "== DEPLOY KATA =="
    ./gha-run.sh deploy-kata-aks
    echo "== DEPLOY KBS =="
    ./gha-run.sh deploy-coco-kbs
}

teardown_env() {
    echo "== DESTROY CLUSTER =="
    ./gha-run.sh delete-cluster
}

run_tests() {
    echo "== RUN TESTS =="
    ./gha-run.sh run-tests
}

main() {
    if [ $# -gt 0 ]; then
        case $1 in
            "-d") teardown_env ;;
            "-t") run_tests ;;
        esac
        return
    fi

    prep_env
    run_tests
}

main "$@"
The test's generated YAML:

$ cat runtimeclass_workloads_work/test-pod-file-volume.yaml
#
# Copyright (c) 2022 Ant Group
#
# SPDX-License-Identifier: Apache-2.0
#
apiVersion: v1
kind: Pod
metadata:
  name: test-file-volume
  annotations:
    io.containerd.cri.runtime-handler: kata-qemu-coco-dev
spec:
  terminationGracePeriodSeconds: 0
  runtimeClassName: kata
  restartPolicy: Never
  nodeName: aks-nodepool1-26043497-vmss000000
  volumes:
    - name: shared-file
      hostPath:
        path: /tmp/file-volume-test-foo.Nfy75
        type: File
  containers:
    - name: busybox-file-volume-container
      image: quay.io/prometheus/busybox:latest
      volumeMounts:
        - name: shared-file
          mountPath: /tmp/foo.txt
      command: ["/bin/sh"]
      args: ["-c", "tail -f /dev/null"]

It's spec'ed
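For context, the hostPath in the YAML points at a temporary file that the test creates on the host before the pod starts. A minimal sketch of that setup step (the file name pattern is inferred from the path above; the file contents are an assumption):

```shell
#!/bin/bash
# Create a uniquely named host file, matching the mktemp-style suffix seen
# in the pod spec's hostPath (/tmp/file-volume-test-foo.Nfy75):
host_file=$(mktemp /tmp/file-volume-test-foo.XXXXX)
echo "file-volume-test" > "$host_file"
# The pod then mounts this file at /tmp/foo.txt via the hostPath volume.
```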
There is no agent policy annotation:
Here is the qemu process on the worker node:
Checking that the host file (
Running mount inside the guest:

root@localhost:/# mount
/dev/pmem0p1 on / type ext4 (ro,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=1015000k,nr_inodes=253750,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=203772k,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
nsfs on /run/sandbox-ns/ipc type nsfs (rw)
nsfs on /run/sandbox-ns/uts type nsfs (rw)
shm on /run/kata-containers/sandbox/shm type tmpfs (rw,relatime)
tmpfs on /etc/resolv.conf type tmpfs (rw,nosuid,nodev,size=203772k,mode=755)
tmpfs on /run/kata-containers/b19befe69add3418ed6cec8d00156fbbb2db0de7567358b837ab7a5b02e0138f/rootfs type tmpfs (rw,nosuid,nodev,size=203772k,mode=755)
tmpfs on /run/kata-containers/b19befe69add3418ed6cec8d00156fbbb2db0de7567358b837ab7a5b02e0138f/rootfs type tmpfs (rw,nosuid,nodev,size=203772k,mode=755)
tmpfs on /run/kata-containers/b19befe69add3418ed6cec8d00156fbbb2db0de7567358b837ab7a5b02e0138f/pause/rootfs type tmpfs (rw,nosuid,nodev,size=203772k,mode=755)
overlay on /run/kata-containers/49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d/images/rootfs type overlay (rw,relatime,lowerdir=/run/kata-containers/image/layers/sha256_1617e25568b2231fdd0d5caff63b06f6f7738d8d961f031c80e47d35aaec9733:/run/kata-containers/image/layers/sha256_9fa9226be034e47923c0457d916aa68474cdfb23af8d4525e9baeebc4760977a,upperdir=/run/kata-containers/image/overlay/0/upperdir,workdir=/run/kata-containers/image/overlay/0/workdir,redirect_dir=nofollow,index=off,uuid=null,metacopy=off)
overlay on /run/kata-containers/49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d/rootfs type overlay (rw,relatime,lowerdir=/run/kata-containers/image/layers/sha256_1617e25568b2231fdd0d5caff63b06f6f7738d8d961f031c80e47d35aaec9733:/run/kata-containers/image/layers/sha256_9fa9226be034e47923c0457d916aa68474cdfb23af8d4525e9baeebc4760977a,upperdir=/run/kata-containers/image/overlay/0/upperdir,workdir=/run/kata-containers/image/overlay/0/workdir,redirect_dir=nofollow,index=off,uuid=null,metacopy=off)
overlay on /run/kata-containers/49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d/rootfs type overlay (rw,relatime,lowerdir=/run/kata-containers/image/layers/sha256_1617e25568b2231fdd0d5caff63b06f6f7738d8d961f031c80e47d35aaec9733:/run/kata-containers/image/layers/sha256_9fa9226be034e47923c0457d916aa68474cdfb23af8d4525e9baeebc4760977a,upperdir=/run/kata-containers/image/overlay/0/upperdir,workdir=/run/kata-containers/image/overlay/0/workdir,redirect_dir=nofollow,index=off,uuid=null,metacopy=off)
overlay on /run/kata-containers/49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d/images/rootfs type overlay (rw,relatime,lowerdir=/run/kata-containers/image/layers/sha256_1617e25568b2231fdd0d5caff63b06f6f7738d8d961f031c80e47d35aaec9733:/run/kata-containers/image/layers/sha256_9fa9226be034e47923c0457d916aa68474cdfb23af8d4525e9baeebc4760977a,upperdir=/run/kata-containers/image/overlay/0/upperdir,workdir=/run/kata-containers/image/overlay/0/workdir,redirect_dir=nofollow,index=off,uuid=null,metacopy=off)
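A quick way to interpret the output above is to parse the options field of each mount line. This small helper is my own sketch (not part of the test suite); it reports whether a given mountpoint is read-only based on the first option in the parenthesised list:

```shell
#!/bin/bash
# Print "ro" or "rw" for a given mountpoint, reading mount(8)-style lines
# on stdin; field 3 is the mountpoint, field 6 the "(opts,...)" list.
is_readonly() {
    awk -v mp="$1" '$3 == mp { split($6, o, /[(,]/); print ((o[2] == "ro") ? "ro" : "rw") }'
}

# Against the captured output: the guest rootfs on /dev/pmem0p1 is mounted
# read-only, while the container rootfs overlays are read-write.
printf '%s\n' '/dev/pmem0p1 on / type ext4 (ro,relatime)' | is_readonly /   # prints "ro"
```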
Well, the volume has to be exposed to the VM somehow, and that is not yet supported.
The very same applies to all the volume-related tests, much like the limitations we have for Firecracker.
The tests affected by this limitation are: