
Questions about shared-fs options and security in kata-containers #9683

Closed
sidneychang opened this issue May 21, 2024 · 4 comments
Labels
question Requires an answer

Comments

@sidneychang
Contributor

sidneychang commented May 21, 2024

Description:
I have some questions about the shared-fs options. I am using Kata Containers with confidential containers running on Hygon CSV (which appears to implement the same TEE mechanism as AMD SEV). I noticed that directories inside the running Pod/container can be found through df -h, and that files in containers can be accessed directly on the host machine, including viewing and modifying file contents (as the root user).

[root@node opt]# cd /run/containerd/io.containerd.runtime.v2.task/k8s.io/93082edc7694dab59b9c14d85b4bef505185f13eecc472f7439ab4f8b9d675ac/rootfs
[root@node rootfs]# cd /run/containerd/io.containerd.runtime.v2.task/k8s.io/93082edc7694dab59b9c14d85b4bef505185f13eecc472f7439ab4f8b9d675ac/rootfs/opt
[root@node opt]# ls
hadoop  hadoop-3.1.1
[root@node opt]# mkdir test
[root@node opt]# ls
hadoop  hadoop-3.1.1  test

My questions:
1. Is this caused by shared-fs?
2. I want to add some security policies myself. Besides improving performance, what are the main purposes of shared-fs in Kata Containers, and what aspects might I need to modify?
3. If I want to add protection for files inside the container myself, are there any encryption schemes for files or filesystems that can be applied without modifying Kata?

I have read #9676 and related issues, and I understand that shared-fs is required by Kubernetes. However, I'm not very clear about its other impacts.

I would be very grateful if someone could answer my questions.

@sidneychang sidneychang added the question Requires an answer label May 21, 2024
@fitzthum
Contributor

We are in the process of removing the shared-fs. The shared-fs is partly a remnant of how Kata pulls images when confidential computing is not enabled. When we pull the image inside the guest with CoCo, we won't need the shared-fs for images, but a few other features still rely on it. If you look at #9315, which removes the shared-fs for TDX, you can see that we skip a number of tests for that platform. This gives you an idea of which features currently rely on the shared-fs. We will hopefully have workarounds for most of these.

That PR also shows that turning off the shared-fs is as simple as changing one setting in the Kata config. Note, however, that this is a host configuration, so you will also want to do something in the guest to make sure the shared-fs can't be turned back on.
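For reference, the relevant knob is the `shared_fs` setting in the hypervisor section of the host-side `configuration.toml`. The exact file path and accepted values depend on your Kata version, so treat this as a sketch rather than the definitive procedure:

```shell
# Sketch: flipping the shared_fs setting in a Kata configuration.toml.
# A throwaway copy is used here for illustration; on a real node the file
# usually lives under /etc/kata-containers/ or the runtime's defaults dir.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[hypervisor.qemu]
shared_fs = "virtio-fs"
EOF

# Disable the host<->guest shared filesystem entirely:
sed -i 's/^shared_fs = .*/shared_fs = "none"/' "$cfg"
grep '^shared_fs' "$cfg"
```

As noted above, this only changes the host side; a guest that should not trust the host also needs a policy preventing the setting from being silently reverted.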

In the example you posted, I think you aren't actually seeing the shared-fs, but rather the location where containerd has unpacked images on the host. With nydus, I believe we download the container image on the host as well as in the guest. The shared-fs is usually located at /run/kata-containers/shared/sandboxes. If you want to make sure, try execing into the running container and check whether any of the changes you make from the host show up.
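The check suggested above can be sketched as follows: write a marker file from the host side of the suspected shared mount, then look for it from inside the pod. The sandbox id and pod name below are placeholders taken from this thread, not values that will exist on your node:

```shell
# Sketch: verify whether a host path really is the Kata shared-fs for a
# sandbox. Replace SANDBOX with a real id from
# /run/kata-containers/shared/sandboxes on your node, and the pod name
# with one of your own pods.
SANDBOX=dde17c67b7cb31cac2ed5669637cc34f3563c2c7500ff78a711d511635714a7d
HOST_SHARED="/run/kata-containers/shared/sandboxes/$SANDBOX/shared"

if [ -d "$HOST_SHARED" ]; then
    # Create a marker from the host side of the mount...
    touch "$HOST_SHARED/marker-from-host"
    # ...then see if it is visible from inside the guest. If it shows up,
    # the mount is genuinely shared between host and container.
    kubectl exec hadoop-hadoop-hdfs-dn-0 -- \
        sh -c 'find / -maxdepth 3 -name marker-from-host 2>/dev/null'
else
    echo "no such shared dir on this host"
fi
```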

@sidneychang
Contributor Author

sidneychang commented May 21, 2024

@fitzthum Thank you very much. I found the shared-fs under /run/kata-containers/shared/sandboxes. But the rootfs dir above is indeed used by the running container.

I used helm to install Hadoop; my containerd default runtime is kata. Kata generated a QEMU command like this:

/opt/kata/bin/qemu-system-x86_64 -name sandbox-dde17c67b7cb31cac2ed5669637cc34f3563c2c7500ff78a711d511635714a7d -uuid 35744a80-a8cb-49a5-b62b-26d1ba35ee1f -machine q35,accel=kvm,kernel_irqchip=split,confidential-guest-support=sev -cpu host,pmu=off -qmp unix:/run/vc/vm/dde17c67b7cb31cac2ed5669637cc34f3563c2c7500ff78a711d511635714a7d/qmp.sock,server=on,wait=off -m 2048M,slots=10,maxmem=516883M -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=off,addr=2,io-reserve=4k,mem-reserve=1m,pref64-reserve=1m -device virtio-serial-pci,disable-modern=false,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/dde17c67b7cb31cac2ed5669637cc34f3563c2c7500ff78a711d511635714a7d/console.sock,server=on,wait=off -device virtio-blk-pci,disable-modern=false,drive=image-10eb9be908acad01,scsi=off,config-wce=off,share-rw=on,serial=image-10eb9be908acad01 -drive id=image-10eb9be908acad01,file=/opt/kata/share/kata-containers/kata-clearlinux-latest.image,aio=threads,format=raw,if=none,readonly=on -device virtio-scsi-pci,id=scsi0,disable-modern=false -object sev-guest,id=sev,cbitpos=47,reduced-phys-bits=5,kernel-hashes=on -drive if=pflash,format=raw,readonly=on,file=/opt/hygon/csv/OVMF.fd -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0 -device vhost-vsock-pci,disable-modern=false,vhostfd=3,id=vsock-3151801823,guest-cid=3151801823 -device virtio-9p-pci,disable-modern=false,fsdev=extra-9p-kataShared,mount_tag=kataShared -fsdev local,id=extra-9p-kataShared,path=/run/kata-containers/shared/sandboxes/dde17c67b7cb31cac2ed5669637cc34f3563c2c7500ff78a711d511635714a7d/shared,security_model=none,multidevs=remap -netdev tap,id=network-0,vhost=on,vhostfds=4,fds=5 -device driver=virtio-net-pci,netdev=network-0,mac=92:41:03:e1:f2:63,disable-modern=false,mq=on,vectors=4 -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic 
--no-reboot -daemonize -object memory-backend-ram,id=dimm1,size=2048M -numa node,memdev=dimm1 -kernel /opt/kata/share/kata-containers/vmlinuz-5.11-96 -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/vda1 rootflags=data=ordered,errors=remount-ro ro rootfstype=ext4 console=hvc0 console=hvc1 quiet systemd.show_status=false panic=1 nr_cpus=1 systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none -pidfile /run/vc/vm/dde17c67b7cb31cac2ed5669637cc34f3563c2c7500ff78a711d511635714a7d/pid -smp 1,cores=1,threads=1,sockets=1,maxcpus=1

Here are my pods:

[root@node shared]# kubectl get pods
NAME                      READY   STATUS    RESTARTS      AGE
hadoop-hadoop-hdfs-dn-0   1/1     Running   0             18m
hadoop-hadoop-hdfs-nn-0   1/1     Running   0             18m
hadoop-hadoop-yarn-nm-0   1/1     Running   1 (18m ago)   18m
hadoop-hadoop-yarn-rm-0   1/1     Running   0             18m

I used nerdctl ps --namespace k8s.io to find the id of the container in the hadoop-hadoop-hdfs-dn-0 pod.

093da5f75bb0  localhost:80/farberg/apache-hadoop:3.1.1.1                        "/bin/bash /tmp/hado…"    18 seconds ago    Up                 k8s://default/hadoop-hadoop-hdfs-dn-0/hdfs-dn

Then I got the rootfs path:

[root@node sys]# df -h | grep 093da5f75bb0 
overlay         197G  132G   56G  71% /run/containerd/io.containerd.runtime.v2.task/k8s.io/093da5f75bb00debaf29fe142b37f5a794dd54a95d55ed7fcdac3bbe61024caf/rootfs

I then ran mkdir both in the rootfs dir on the host and inside the container, and the change was visible on both sides:

[root@node sys]# cd /run/containerd/io.containerd.runtime.v2.task/k8s.io/093da5f75bb00debaf29fe142b37f5a794dd54a95d55ed7fcdac3bbe61024caf/rootfs
[root@node rootfs]# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@node rootfs]# mkdir 1
[root@node rootfs]# ls
1  bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@node rootfs]# kubectl exec -it hadoop-hadoop-hdfs-dn-0 ls /
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
1  bin	boot  dev  etc	home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@node3 rootfs]# kubectl exec -it hadoop-hadoop-hdfs-dn-0 -- mkdir /2
[root@node3 rootfs]# ls
1  2  bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

I am using Kata version 2.5.2, and I am not sure which version this corresponds to on the CC branch. I am not sure whether this is an issue with an older version.

@fitzthum
Contributor

fitzthum commented May 21, 2024

Ah, if you are using Kata 2.5.2 you might not have any of the support for confidential image pulling. In that case, if you disable the shared-fs you won't be able to pull the image. You might try upstream main if it isn't too much of a hassle to enable CSV there. Guest pulling is a CoCo feature, so you'd want to use the confidential rootfs, which includes image-rs.

@sidneychang
Contributor Author

> Ah if you are using Kata 2.5.2 you might not have any of the support for confidential image pulling. In that case if you disable shared-fs you are not going to be able to pull the image. You might try with upstream main if it isn't too much of a hassle to enable csv there. Guest pulling is a coco feature so you'd want to use the confidential rootfs which includes image-rs.

Thank you, I think I understand the reason now. I will try upgrading the version.
