
tests: Support for kbs setup on kcli #9273

Merged · 2 commits merged into kata-containers:main on Apr 5, 2024

Conversation

@ldoktor (Contributor) commented Mar 13, 2024

This adds support for kcli to tests/integration/kubernetes/gha-run.sh deploy-coco-kbs. It uses the nodeport feature to expose the service from localhost as well as between the nodes. One can test it by:

cd tests/integration/kubernetes
CLUSTER_DISK_SIZE=40 ./gha-run.sh create-cluster-kcli
./gha-run.sh deploy-kata-kcli
KBS_INGRESS=nodeport KBS=true AKS_NAME=ldoktor bash -x ./gha-run.sh deploy-coco-kbs
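For reference, NodePort exposure of this kind boils down to a Service manifest along these lines. This is a minimal sketch, not the PR's exact manifest: the service name and namespace are taken from the logs later in this thread, and the KBS listen port is an assumption.

```yaml
# Hypothetical sketch of a NodePort service for the KBS.
# "kbs-nodeport" and "coco-tenant" appear in the test logs below;
# the port numbers and selector are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: kbs-nodeport
  namespace: coco-tenant
spec:
  type: NodePort
  selector:
    app: kbs
  ports:
    - protocol: TCP
      port: 8080        # assumed KBS listen port
      targetPort: 8080
      # nodePort is allocated by Kubernetes from 30000-32767 unless
      # pinned explicitly (the logs below show e.g. 31381)
```

This makes the service reachable on every node's IP at the allocated nodePort, which is what lets the tests talk to the KBS from outside the cluster.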

@@ -224,8 +224,6 @@ function kbs_k8s_deploy() {
popd
echo "::endgroup::"

[ -n "$ingress" ] && _handle_ingress "$ingress"
Contributor
Hi @ldoktor !

The AKS handler needs to be invoked before calling deploy-kbs.sh. It creates the ingress.yaml (https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/confidential_kbs.sh#L401) under "${COCO_KBS_DIR}/config/kubernetes/overlays" and adjusts the kustomize file (https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/confidential_kbs.sh#L408). Thus the ingress deployment is applied alongside the KBS by deploy-kbs.sh.

Can we follow that approach with NodePort instead of calling kubectl expose...?

One extra advantage is that you won't need to delete the nodePort ingress in kbs_k8s_delete() - https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/confidential_kbs.sh#L162
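To picture the AKS flow described above: the handler drops a generated ingress.yaml into the overlays directory so that deploy-kbs.sh applies it with everything else. A rough sketch of such a generated manifest follows; only the file name and directory come from the linked confidential_kbs.sh, everything else (names, host rules, port) is an assumption for illustration.

```yaml
# Hypothetical sketch of the ingress.yaml the AKS handler generates
# under ${COCO_KBS_DIR}/config/kubernetes/overlays.
# Service name, namespace, and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kbs
  namespace: coco-tenant
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kbs
                port:
                  number: 8080   # assumed KBS listen port
```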

@ldoktor (Contributor Author) commented Mar 26, 2024

Changes:

  • rebased
  • keep the ingress setup phase before kbs deployment
  • used kustomize to define the kbs nodeport service
  • new commit to use full svc address when checking the kbs service
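Defining the nodeport service via kustomize, as listed above, amounts to registering the manifest with the overlay so deploy-kbs.sh applies it alongside the KBS. A sketch, assuming a layout like the AKS ingress one (the file name matches the nodeport_service.yaml seen in the diff below; the resource list is hypothetical):

```yaml
# Hypothetical kustomization.yaml adjustment: the generated
# nodeport_service.yaml is added as an extra resource of the overlay.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
  - nodeport_service.yaml
```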

elif kubectl get svc kbs-nodeport -n "$KBS_NS" &>/dev/null; then
local host
host=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}' -n "$KBS_NS")
[ -z "$host" ] && host=$(oc get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}' -n "$KBS_NS")
Contributor
Hi @ldoktor !

I looked at the code and I didn't find anything wrong. Then I gave it a try on my local machine, but it failed to return the service address. Luckily I changed laptops recently and I don't have oc installed yet, so that's the problem: it is using oc here :D

Contributor Author

Oops, let me fix that (oc is shorter than kubectl...)

Contributor

The new version worked for me @ldoktor !

This can be used on kcli or other systems where the cluster nodes are accessible from all places where the tests are running.

Fixes: kata-containers#9272

Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
The service might not listen on the default port; use the full service address to ensure we are talking to the right resource.

Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
@ldoktor (Contributor Author) commented Mar 26, 2024

Changes:

  • oc -> kubectl


cat > nodeport_service.yaml <<EOF
# Service to expose the KBS on nodes
Contributor

@ldoktor the only thing that I would change in this PR is to contribute the nodeport_service.yaml to https://github.com/confidential-containers/trustee/tree/main/kbs/config/kubernetes, so it lives alongside the ingress deployment yaml for AKS. What do you think?

Contributor Author

I thought about it, but unlike AKS the nodeport service is quite hackish and mainly supported for debugging/testing purposes. I don't think it'd be wise to promote it as another "option".

Contributor

Good and fair point @ldoktor !

@wainersm (Contributor) left a comment

Thanks @ldoktor

@wainersm (Contributor) commented Apr 1, 2024

Hi @fitzthum ! Have you had a chance to test this on kubeadm and k3s?

@fitzthum (Contributor) commented Apr 1, 2024

> Hi @fitzthum ! Have you had a chance to test this on kubeadm and k3s?

Not yet, unfortunately, and I am sick today, so I am not sure when I will be able to get to it.

@GabyCT (Contributor) commented Apr 3, 2024

@ldoktor @wainersm I am trying to test on a fresh baremetal machine; however, when I run this

cd tests/integration/kubernetes
CLUSTER_DISK_SIZE=40 ./gha-run.sh create-cluster-kcli
gha-run-k8s-common.sh: line 185: kcli: command not found

To avoid that error I found out that I need to run

./gha-run.sh deploy-kata-kcli

But I am getting these errors

kubectl apply -k /home/testk3s/go/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/kata-cleanup/overlays/k3s
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ sleep 180s
+ kubectl delete -k /home/testk3s/go/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/kata-cleanup/overlays/k3s
E0403 17:07:51.923967   50178 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
error: unable to recognize "/home/testk3s/go/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/kata-cleanup/overlays/k3s": Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
+ kubectl delete -f /home/testk3s/go/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
E0403 17:07:51.959792   50183 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0403 17:07:51.960490   50183 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0403 17:07:51.962312   50183 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "/home/testk3s/go/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml": Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "/home/testk3s/go/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml": Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "/home/testk3s/go/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml": Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
+ true
+ set_default_cluster_namespace
+ kubectl config set-context --current --namespace=default
error: no current context is set

I tried to follow the GitHub workflow steps, but I could not find anything that uses deploy-kata-kcli; not sure if something else is missing.

@wainersm (Contributor) commented Apr 3, 2024

Hi @GabyCT ,

The instructions in the PR's description are for a cluster built with kcli. I have never used k3s, but I believe you should run the following commands (assuming you have k3s installed and kubectl configured to operate on it):

cd tests/integration/kubernetes
export KUBERNETES="k3s"
./gha-run.sh deploy-kata-tdx
KBS_INGRESS=nodeport KBS=true ./gha-run.sh deploy-coco-kbs

If you are not running on TDX then it's fine; I suppose the ./gha-run.sh deploy-kata-tdx command will install the kata-qemu runtimeclass. But if you are on TDX then you can export KATA_HYPERVISOR=qemu-tdx before running the commands above, so it will install the "correct" runtimeclass.

@GabyCT (Contributor) commented Apr 3, 2024

> The instructions on the PR's description is for a cluster built with kcli. [...]

@wainersm thanks for the info. I just ran it on a non-TDX environment following your instructions and it seems OK:

....
+++ grep -q kbs
+++ kubectl get ingress -n coco-tenant
+++ kubectl get svc kbs-nodeport -n coco-tenant
+++ kubectl get -o 'jsonpath={.spec.ports[0].nodePort}' svc kbs-nodeport -n coco-tenant
++ port=31381
++ echo http://10.0.0.4:31381
+ svc_host=http://10.0.0.4:31381
+ '[' -z http://10.0.0.4:31381 ']'
+ timeout=350
+ echo 'Trying to connect at http://10.0.0.4:31381. Timeout=350'
Trying to connect at http://10.0.0.4:31381. Timeout=350
+ waitForProcess 350 30 'curl -s -I "http://10.0.0.4:31381" | grep -q "404 Not Found"'
+ wait_time=350
+ sleep_time=30
+ cmd='curl -s -I "http://10.0.0.4:31381" | grep -q "404 Not Found"'
+ '[' 350 -gt 0 ']'
+ eval 'curl -s -I "http://10.0.0.4:31381" | grep -q "404 Not Found"'
++ grep -q '404 Not Found'
++ curl -s -I http://10.0.0.4:31381
+ return 0
+ echo 'KBS service respond to requests at http://10.0.0.4:31381'
KBS service respond to requests at http://10.0.0.4:31381
+ echo ::endgroup::
::endgroup::

@wainersm (Contributor) commented Apr 3, 2024

> @wainersm thanks for the info I just run it on a non-TDX environment but following your instructions and seems ok [...]

Great! Thanks for your time running it, @GabyCT!

@GabyCT (Contributor) commented Apr 4, 2024

/test

@wainersm wainersm merged commit aae7048 into kata-containers:main Apr 5, 2024
294 of 303 checks passed
Labels: ok-to-test, size/small
5 participants