
k8s: guest-pull: "Test readonly volume for pods" fails #9667

Open
fidencio opened this issue May 19, 2024 · 2 comments
Labels
area/guest-pull · bug (Incorrect behaviour) · needs-review (Needs to be assessed by the team)

Comments

@fidencio (Member)

fidencio commented May 19, 2024

# Events:
#   Type     Reason   Age   From     Message
#   ----     ------   ----  ----     -------
#   Normal   Pulling  82s   kubelet  Pulling image "busybox"
#   Normal   Pulled   81s   kubelet  Successfully pulled image "busybox" in 739ms (739ms including waiting)
#   Normal   Created  81s   kubelet  Created container busybox-file-volume-container
#   Warning  Failed   79s   kubelet  Error: failed to create containerd task: failed to create shim task: failed to handle layer: hasher sha256: channel: send failed SendError { .. }: unknown
# pod "test-file-volume" deleted

Well, the volume has to be exposed to the VM somehow ... and this is not yet supported.
The same applies to all the volume-related tests, just as we have similar limitations for Firecracker.

The tests affected by this limitation are:

  • k8s-credentials-secrets.bats
  • k8s-file-volume.bats
  • k8s-inotify.bats
  • k8s-nested-configmap-secret.bats
  • k8s-projected-volume.bats
  • k8s-volume.bats
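Until volume support lands for guest pull, CI can skip these cases the same way the Firecracker limitations are handled. Below is a minimal sketch; the helper name is hypothetical and not the project's actual CI code:

```shell
#!/bin/bash
# Hypothetical helper: drop the volume-related tests from a test list
# when running with guest pull (shared_fs=none). Names are illustrative.
filter_guest_pull_unsupported() {
    local skip=(
        k8s-credentials-secrets.bats
        k8s-file-volume.bats
        k8s-inotify.bats
        k8s-nested-configmap-secret.bats
        k8s-projected-volume.bats
        k8s-volume.bats
    )
    local out=()
    for t in "$@"; do
        local keep=1
        for s in "${skip[@]}"; do
            [ "$t" = "$s" ] && keep=0 && break
        done
        [ "$keep" = 1 ] && out+=("$t")
    done
    echo "${out[@]}"
}

filter_guest_pull_unsupported k8s-file-volume.bats k8s-sysctls.bats k8s-volume.bats
# prints: k8s-sysctls.bats
```

The filtered list could then be fed to whatever drives the bats runs (e.g. a `K8S_TEST_UNION`-style variable).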
@fidencio added the bug (Incorrect behaviour), needs-review (Needs to be assessed by the team), and area/guest-pull labels May 19, 2024
ChengyuZhu6 added a commit to ChengyuZhu6/kata-containers that referenced this issue May 22, 2024
Fix tests:
- k8s-credentials-secrets.bats
- k8s-file-volume.bats
- k8s-nested-configmap-secret.bats
- k8s-projected-volume.bats
- k8s-volume.bats

Fixes: kata-containers#9664 kata-containers#9666 kata-containers#9667 kata-containers#9668

Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
ChengyuZhu6 added a commit to ChengyuZhu6/kata-containers that referenced this issue May 22, 2024
Revert

Fix tests:
- k8s-credentials-secrets.bats
- k8s-file-volume.bats
- k8s-nested-configmap-secret.bats
- k8s-projected-volume.bats
- k8s-volume.bats
- k8s-shared-volume.bats
- k8s-kill-all-process-in-container.bats
- k8s-sysctls.bats

Fixes: kata-containers#9664 kata-containers#9666 kata-containers#9667 kata-containers#9668

Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
ChengyuZhu6 added a commit to ChengyuZhu6/kata-containers that referenced this issue May 22, 2024
Revert code logic in 462051b

Let me explain why:

In our previous approach, we implemented guest pull by passing a PullImageRequest to the guest.
However, this method resulted in the loss of specifications essential for running the container,
such as the commands specified in the YAML, during the CreateContainer stage. To address this,
it is necessary to merge the OCI spec and the process information from the image's configuration
into the container during guest pull.

The snapshotter method is not affected by this issue. Nevertheless, a problem arises when two
containers in the same pod pull the same image, as with an InitContainer. This is because the
system looks up the existing configuration, which resides in the guest. The configuration,
keyed by <image name, cid>, is stored in the directory /run/kata-containers/<cid>. Consequently,
when the InitContainer finishes its task and terminates, that directory ceases to exist. As a
result, when the application container is created, the OCI spec and process information cannot
be merged because the expected configuration file is absent.

Fix tests:
- k8s-credentials-secrets.bats
- k8s-file-volume.bats
- k8s-nested-configmap-secret.bats
- k8s-projected-volume.bats
- k8s-volume.bats
- k8s-shared-volume.bats
- k8s-kill-all-process-in-container.bats
- k8s-sysctls.bats

Fixes: kata-containers#9664 kata-containers#9666 kata-containers#9667 kata-containers#9668

Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
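The lifecycle problem described in that commit message can be reproduced in miniature outside Kata: store a per-container config directory, remove it when the "init container" exits, and watch the second lookup fail. All paths and names below are purely illustrative, not the agent's actual code:

```shell
#!/bin/bash
# Toy model of the issue: config is stored under a per-cid directory, so
# it disappears when the init container's directory is cleaned up.
root=$(mktemp -d)

store_config() {  # store_config <cid>
    mkdir -p "$root/run/kata-containers/$1"
    echo '{"process":{"args":["tail","-f","/dev/null"]}}' \
        > "$root/run/kata-containers/$1/config.json"
}

lookup_config() { # lookup_config <cid> -> prints config, or "missing"
    local f="$root/run/kata-containers/$1/config.json"
    [ -f "$f" ] && cat "$f" || echo missing
}

# Init container pulls the image and stores the config...
store_config init-1
lookup_config init-1            # found

# ...then exits, and its /run/kata-containers/<cid> directory is removed.
rm -rf "$root/run/kata-containers/init-1"

# The app container now resolves the same image back to the init
# container's cid and fails to find the config it needs to merge.
lookup_config init-1            # prints: missing
```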
@katacontainersbot katacontainersbot moved this from To do to In progress in Issue backlog May 22, 2024
ChengyuZhu6 added a commit to ChengyuZhu6/kata-containers that referenced this issue May 23, 2024
Refactor code in guest pull

Fix tests:
- k8s-credentials-secrets.bats
- k8s-file-volume.bats
- k8s-nested-configmap-secret.bats
- k8s-projected-volume.bats
- k8s-volume.bats
- k8s-shared-volume.bats
- k8s-sysctls.bats

Fixes: kata-containers#9666
Fixes: kata-containers#9667
Fixes: kata-containers#9668

Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
wainersm added a commit to wainersm/kata-containers that referenced this issue May 30, 2024
This test fails with qemu-coco-dev configuration and guest-pull image pull.

Issue: kata-containers#9667
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
ChengyuZhu6 added a commit to ChengyuZhu6/kata-containers that referenced this issue Jun 5, 2024
Refactor code in guest pull

Fix tests:
- k8s-credentials-secrets.bats
- k8s-file-volume.bats
- k8s-nested-configmap-secret.bats
- k8s-projected-volume.bats
- k8s-volume.bats
- k8s-shared-volume.bats
- k8s-sysctls.bats
- k8s-inotify.bats
- k8s-liveness-probes.bats

Fixes: kata-containers#9665
Fixes: kata-containers#9666
Fixes: kata-containers#9667
Fixes: kata-containers#9668

Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
@wainersm (Contributor)
wainersm commented Jun 6, 2024

I prepared an image (quay.io/wainersm/kata-deploy:kata-containers-b30d085271fd333b2925a14646130051a95fad42-amd64) from the latest kata-deploy-ci image, but with shared_fs=none in configuration-qemu-coco-dev.toml. I installed it in AKS and ran the tests k8s-credentials-secrets.bats, k8s-inotify.bats, k8s-nested-configmap-secret.bats, k8s-projected-volume.bats, and k8s-volume.bats; they all passed.

I checked that it really had shared_fs=none on the worker node:

/ # cat /host/opt/kata/share/defaults/kata-containers/configuration-qemu-coco-dev.toml | grep shared_fs
shared_fs = "none"
/ # 

Next, I'm going to collect more information from a couple of tests, as @fidencio asked earlier today.

The script I used:

#!/bin/bash

set -o errexit
set -o nounset
set -o pipefail

export AZ_RG="wmoschet-ci-tests"
export AKS_NAME="wmoschet-aks-ci-test"

export DOCKER_REGISTRY=quay.io
export DOCKER_REPO=wainersm/kata-deploy
export DOCKER_TAG=kata-containers-b30d085271fd333b2925a14646130051a95fad42-amd64

export KATA_HOST_OS=ubuntu
export KATA_HYPERVISOR=qemu-coco-dev
export KUBERNETES=vanilla
export KBS=true
export KBS_INGRESS=aks
export PULL_TYPE=guest-pull
export SNAPSHOTTER=nydus
export USING_NFD="false"
export K8S_TEST_UNION="k8s-file-volume.bats k8s-credentials-secrets.bats k8s-inotify.bats k8s-nested-configmap-secret.bats k8s-projected-volume.bats k8s-volume.bats"

prep_env() {
    echo "== CREATE CLUSTER =="
    ./gha-run.sh create-cluster
    echo "== GET CREDENTIALS =="
    ./gha-run.sh get-cluster-credentials
    echo "== DEPLOY SNAPSHOTTER =="
    ./gha-run.sh deploy-snapshotter
    echo "== DEPLOY KATA =="
    ./gha-run.sh deploy-kata-aks
    echo "== DEPLOY KBS =="
    ./gha-run.sh deploy-coco-kbs
}

teardown_env() {
    echo "== DESTROY CLUSTER =="
    ./gha-run.sh delete-cluster
}

run_tests() {
    echo "== RUN TESTS =="
    ./gha-run.sh run-tests
}

main() {
    if [ $# -gt 0 ]; then
        case $1 in
        "-d") teardown_env ;;
        "-t") run_tests ;;
        esac
        return
    fi

    prep_env
    run_tests
}

main "$@"

@wainersm (Contributor)
wainersm commented Jun 6, 2024

  • k8s-file-volume.bats

The test's generated YAML:

$ cat runtimeclass_workloads_work/test-pod-file-volume.yaml 
#
# Copyright (c) 2022 Ant Group
#
# SPDX-License-Identifier: Apache-2.0
#
apiVersion: v1
kind: Pod
metadata:
  name: test-file-volume
  annotations:
    io.containerd.cri.runtime-handler: kata-qemu-coco-dev
spec:
  terminationGracePeriodSeconds: 0
  runtimeClassName: kata
  restartPolicy: Never
  nodeName: aks-nodepool1-26043497-vmss000000
  volumes:
    - name: shared-file
      hostPath:
        path: /tmp/file-volume-test-foo.Nfy75
        type: File
  containers:
    - name: busybox-file-volume-container
      image: quay.io/prometheus/busybox:latest
      volumeMounts:
        - name: shared-file
          mountPath: /tmp/foo.txt
      command: ["/bin/sh"]
      args: ["-c", "tail -f /dev/null"]

The pod spec uses runtimeClassName: kata, and kata is handled by kata-qemu-coco-dev:

$ kubectl get runtimeclass
NAME                 HANDLER              AGE
kata                 kata-qemu-coco-dev   3h50m
kata-qemu-coco-dev   kata-qemu-coco-dev   3h50m

The agent policy annotation is not present:

$ kubectl get pod -o jsonpath='{.items[0].metadata.annotations}'
{"io.containerd.cri.runtime-handler":"kata-qemu-coco-dev"}

Here is the qemu process on the worker node:

$ kubectl debug -it --image busybox node/aks-nodepool1-26043497-vmss000000
Creating debugging pod node-debugger-aks-nodepool1-26043497-vmss000000-8vftn with container debugger on node aks-nodepool1-26043497-vmss000000.
If you don't see a command prompt, try pressing enter.
/ # 
/ # 
/ # chroot /host
root@aks-nodepool1-26043497-vmss000000:/# ps aux | grep qemu
root      272738  0.4  3.9 2640684 324652 ?      Sl   18:16   0:04 /opt/kata/bin/qemu-system-x86_64 -name sandbox-b19befe69add3418ed6cec8d00156fbbb2db0de7567358b837ab7a5b02e0138f,debug-threads=on -uuid 41c713c6-564d-4ffd-a52e-ac900d37acc6 -machine q35,accel=kvm,nvdimm=on -cpu host,pmu=off -qmp unix:fd=3,server=on,wait=off -m 2048M,slots=10,maxmem=8962M -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=off,addr=2,io-reserve=4k,mem-reserve=1m,pref64-reserve=1m -device virtio-serial-pci,disable-modern=true,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/b19befe69add3418ed6cec8d00156fbbb2db0de7567358b837ab7a5b02e0138f/console.sock,server=on,wait=off -device nvdimm,id=nv0,memdev=mem0,unarmed=on -object memory-backend-file,id=mem0,mem-path=/opt/kata/share/kata-containers/kata-ubuntu-latest-confidential.image,size=268435456,readonly=on -device virtio-scsi-pci,id=scsi0,disable-modern=true -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0 -device vhost-vsock-pci,disable-modern=true,vhostfd=4,id=vsock-2284674647,guest-cid=2284674647 -netdev tap,id=network-0,vhost=on,vhostfds=5,fds=6 -device driver=virtio-net-pci,netdev=network-0,mac=ca:c9:dd:89:f7:5c,disable-modern=true,mq=on,vectors=4 -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -object memory-backend-ram,id=dimm1,size=2048M -numa node,memdev=dimm1 -kernel /opt/kata/share/kata-containers/vmlinuz-6.7-132-confidential -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 console=hvc0 console=hvc1 debug systemd.show_status=true systemd.log_level=debug panic=1 nr_cpus=2 selinux=0 
systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none agent.log=debug agent.debug_console agent.debug_console_vport=1026 agent.log=debug initcall_debug -pidfile /run/vc/vm/b19befe69add3418ed6cec8d00156fbbb2db0de7567358b837ab7a5b02e0138f/pid -smp 1,cores=1,threads=1,sockets=2,maxcpus=2
root      292536  0.0  0.0   3472  1504 ?        S+   18:33   0:00 grep --color=auto qemu
root@aks-nodepool1-26043497-vmss000000:/#

Checking that the host file (/tmp/file-volume-test-foo.Nfy75) was copied into the guest:

root@aks-nodepool1-26043497-vmss000000:/# cat /tmp/file-volume-test-foo.Nfy75 
test
root@aks-nodepool1-26043497-vmss000000:/# pod_id=$(ps -ef | grep qemu | head -1 | sed 's/.*-name sandbox-\([a-z0-9]\+\).*/\1/')
root@aks-nodepool1-26043497-vmss000000:/# /opt/kata/bin/kata-runtime exec $pod_id
root@localhost:/# ls /run/kata-containers/shared/containers/
49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d-2bf09b44717c458f-hosts
49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d-3ff1459e560656cd-resolv.conf
49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d-57775cc55ab52a2a-foo.txt
49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d-66c46d5573d4bace-termination-log
49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d-7ecb26889ad7dee2-serviceaccount
49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d-e5040034e613dff3-hostname
b19befe69add3418ed6cec8d00156fbbb2db0de7567358b837ab7a5b02e0138f-b3b46cdb5e8e3963-resolv.conf
root@localhost:/# cat /run/kata-containers/shared/containers/49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d-57775cc55ab52a2a-foo.txt
test
root@localhost:/#
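The entries in the shared directory above follow a fixed pattern: a 64-hex-digit container/sandbox ID, a 16-hex-digit hash, then the original file name. Assuming that layout (inferred from the listing, not from documented behaviour), the parts can be split with plain bash substring expansion:

```shell
#!/bin/bash
# Split a shared-containers entry into its parts (layout inferred from
# the listing above: <64-hex cid>-<16-hex hash>-<name>).
f="49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d-57775cc55ab52a2a-foo.txt"
cid=${f:0:64}    # container ID
hash=${f:65:16}  # per-mount hash
name=${f:82}     # original file name (may itself contain dashes)
echo "$name"     # prints: foo.txt
```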

Running mount in the guest:

root@localhost:/# mount
/dev/pmem0p1 on / type ext4 (ro,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=1015000k,nr_inodes=253750,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=203772k,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
nsfs on /run/sandbox-ns/ipc type nsfs (rw)
nsfs on /run/sandbox-ns/uts type nsfs (rw)
shm on /run/kata-containers/sandbox/shm type tmpfs (rw,relatime)
tmpfs on /etc/resolv.conf type tmpfs (rw,nosuid,nodev,size=203772k,mode=755)
tmpfs on /run/kata-containers/b19befe69add3418ed6cec8d00156fbbb2db0de7567358b837ab7a5b02e0138f/rootfs type tmpfs (rw,nosuid,nodev,size=203772k,mode=755)
tmpfs on /run/kata-containers/b19befe69add3418ed6cec8d00156fbbb2db0de7567358b837ab7a5b02e0138f/rootfs type tmpfs (rw,nosuid,nodev,size=203772k,mode=755)
tmpfs on /run/kata-containers/b19befe69add3418ed6cec8d00156fbbb2db0de7567358b837ab7a5b02e0138f/pause/rootfs type tmpfs (rw,nosuid,nodev,size=203772k,mode=755)
overlay on /run/kata-containers/49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d/images/rootfs type overlay (rw,relatime,lowerdir=/run/kata-containers/image/layers/sha256_1617e25568b2231fdd0d5caff63b06f6f7738d8d961f031c80e47d35aaec9733:/run/kata-containers/image/layers/sha256_9fa9226be034e47923c0457d916aa68474cdfb23af8d4525e9baeebc4760977a,upperdir=/run/kata-containers/image/overlay/0/upperdir,workdir=/run/kata-containers/image/overlay/0/workdir,redirect_dir=nofollow,index=off,uuid=null,metacopy=off)
overlay on /run/kata-containers/49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d/rootfs type overlay (rw,relatime,lowerdir=/run/kata-containers/image/layers/sha256_1617e25568b2231fdd0d5caff63b06f6f7738d8d961f031c80e47d35aaec9733:/run/kata-containers/image/layers/sha256_9fa9226be034e47923c0457d916aa68474cdfb23af8d4525e9baeebc4760977a,upperdir=/run/kata-containers/image/overlay/0/upperdir,workdir=/run/kata-containers/image/overlay/0/workdir,redirect_dir=nofollow,index=off,uuid=null,metacopy=off)
overlay on /run/kata-containers/49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d/rootfs type overlay (rw,relatime,lowerdir=/run/kata-containers/image/layers/sha256_1617e25568b2231fdd0d5caff63b06f6f7738d8d961f031c80e47d35aaec9733:/run/kata-containers/image/layers/sha256_9fa9226be034e47923c0457d916aa68474cdfb23af8d4525e9baeebc4760977a,upperdir=/run/kata-containers/image/overlay/0/upperdir,workdir=/run/kata-containers/image/overlay/0/workdir,redirect_dir=nofollow,index=off,uuid=null,metacopy=off)
overlay on /run/kata-containers/49a8902d85e1c09dad146e9221f892c6955a9ca65de9fe5c7f7f015dc944860d/images/rootfs type overlay (rw,relatime,lowerdir=/run/kata-containers/image/layers/sha256_1617e25568b2231fdd0d5caff63b06f6f7738d8d961f031c80e47d35aaec9733:/run/kata-containers/image/layers/sha256_9fa9226be034e47923c0457d916aa68474cdfb23af8d4525e9baeebc4760977a,upperdir=/run/kata-containers/image/overlay/0/upperdir,workdir=/run/kata-containers/image/overlay/0/workdir,redirect_dir=nofollow,index=off,uuid=null,metacopy=off)
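One thing the `mount` dump above makes easy to miss is that the guest-pulled container rootfs overlays are mounted `rw`, even though the test expects a read-only volume. A small sketch (not from the issue, just a convenience filter) to pick out each overlay mountpoint and its first mount flag from `mount` output inside the guest debug console:

```shell
# Filter `mount` output: for each overlay filesystem, print the mountpoint
# and its first option (rw or ro). `mount` lines look like:
#   overlay on /run/... type overlay (rw,relatime,lowerdir=...)
# so field 3 is the mountpoint and field 6 starts with "(rw" or "(ro".
mount | awk '$5 == "overlay" { split($6, opts, ","); print $3, substr(opts[1], 2) }'
```

Run against the output above, this would print the four `/run/kata-containers/49a8902d.../rootfs` overlay mounts, each flagged `rw`.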

datadog-compute-robot pushed a commit to DataDog/kata-containers that referenced this issue Jun 11, 2024
This test fails with qemu-coco-dev configuration and guest-pull image pull.

Issue: kata-containers#9667
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
fidencio pushed a commit to ChengyuZhu6/kata-containers that referenced this issue Jun 12, 2024
Refactor code in guest pull

Fix tests:
- k8s-credentials-secrets.bats
- k8s-file-volume.bats
- k8s-nested-configmap-secret.bats
- k8s-projected-volume.bats
- k8s-volume.bats
- k8s-shared-volume.bats
- k8s-sysctls.bats
- k8s-inotify.bats
- k8s-liveness-probes.bats

Fixes: kata-containers#9665
Fixes: kata-containers#9666
Fixes: kata-containers#9667
Fixes: kata-containers#9668

Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>