
move cloud-providers to top go module #1719

Merged

Conversation

liudalibj
Member

@liudalibj liudalibj commented Feb 27, 2024

Remove the cyclic dependency between cloud-api-adaptor and peerpod-ctrl.
Move cloud-providers to a new go module and restructure the source tree as:

cloud-api-adaptor
└── src
    ├── cloud-api-adaptor
    ├── cloud-providers
    ├── peerpod-ctrl
    ├── peerpodconfig-ctrl
    ├── webhook
    └── csi-wrapper

All workflow YAMLs were updated to reflect the new directory structure.
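With the sources split into sibling Go modules under src/, the modules can reference each other through local replace directives instead of published versions, which is what breaks the import cycle. A minimal illustrative sketch of what the cloud-api-adaptor module's go.mod might look like (module paths and versions here are assumptions, not the file's exact contents):

```go
// src/cloud-api-adaptor/go.mod (illustrative sketch)
module github.com/confidential-containers/cloud-api-adaptor/src/cloud-api-adaptor

go 1.21

require github.com/confidential-containers/cloud-api-adaptor/src/cloud-providers v0.0.0

// resolve the sibling module from the local tree instead of a published tag
replace github.com/confidential-containers/cloud-api-adaptor/src/cloud-providers => ../cloud-providers
```

peerpod-ctrl can carry the same require/replace pair, so both modules depend on cloud-providers without depending on each other.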

part of #1122

Signed-off-by: Da Li Liu liudali@cn.ibm.com
Co-authored-by: James Tumber james.tumber@ibm.com

@liudalibj liudalibj marked this pull request as draft February 27, 2024 07:19
@liudalibj liudalibj changed the title Toplevel cloud module move cloud-providers to top go module Feb 27, 2024
@liudalibj liudalibj force-pushed the toplevel-cloud-module branch 2 times, most recently from 2ae8efb to 8db87ac Compare February 27, 2024 11:05
@liudalibj liudalibj marked this pull request as ready for review February 28, 2024 05:23
@liudalibj liudalibj force-pushed the toplevel-cloud-module branch 2 times, most recently from 07cf80e to 530594d Compare March 20, 2024 08:26
@liudalibj liudalibj added the test_e2e_libvirt Run Libvirt e2e tests label Mar 20, 2024
@liudalibj liudalibj force-pushed the toplevel-cloud-module branch 5 times, most recently from cb06f0c to 9a65e55 Compare March 20, 2024 11:35
@liudalibj liudalibj removed the test_e2e_libvirt Run Libvirt e2e tests label Mar 20, 2024
@liudalibj
Member Author

The PR is ready for review.
The test_e2e_libvirt test didn't pass; it failed to build the podvm images.
From https://github.com/confidential-containers/cloud-api-adaptor/actions/runs/8358226935/job/22879052517?pr=1719#step:1:38

Uses: confidential-containers/cloud-api-adaptor/.github/workflows/podvm_builder.yaml@refs/heads/main (b499f84492c1ad4e2755d04ef31b10532e9a272c)

It seems the job uses the workflow file from the main branch, when it should use the podvm build YAML from this PR.
I am trying to find out how to make GHA use the podvm build YAML from this PR.
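For context on the reusable-workflow behaviour here (my understanding of GitHub Actions, with illustrative paths): a workflow referenced as owner/repo/path@ref always runs the copy at that ref, while a local path reference runs the copy from the commit that triggered the run:

```yaml
jobs:
  podvm_builder:
    # pinned reference: always takes the workflow file from the named ref
    # (here: the main branch), regardless of what this PR changes
    uses: confidential-containers/cloud-api-adaptor/.github/workflows/podvm_builder.yaml@main

  # podvm_builder_from_pr:
  #   # local path reference: uses the workflow file from the triggering
  #   # commit, i.e. including this PR's changes
  #   uses: ./.github/workflows/podvm_builder.yaml
```

This would explain why the job picked up the pre-move podvm_builder.yaml from main rather than the relocated one in this branch.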

@liudalibj liudalibj added the test_e2e_libvirt Run Libvirt e2e tests label Mar 20, 2024
@liudalibj
Member Author

liudalibj commented Mar 20, 2024

So we may need to merge this PR first without a passing test_e2e_libvirt result; once all the related YAMLs are in main, it can be tested from a new PR.

@liudalibj liudalibj removed the test_e2e_libvirt Run Libvirt e2e tests label Mar 20, 2024
@stevenhorsman
Member

@liudalibj - so the problem is that the workflow files come from the main branch, not this PR. However, we might be able to get around this: I think things like hack/ci-helper.sh (and probably quite a bit of the hack directory) are not related to the cloud-api-adaptor go module, so they can stay in the root of the repo rather than moving into /src/cloud-api-adaptor. I think this is also the case for the docs and podvm directories. Maybe this is something we can discuss on the call later?

@liudalibj
Member Author

> @liudalibj - so the problem is that the workflow files come from the main branch, not this PR. However, we might be able to get around this: I think things like hack/ci-helper.sh (and probably quite a bit of the hack directory) are not related to the cloud-api-adaptor go module, so they can stay in the root of the repo rather than moving into /src/cloud-api-adaptor. I think this is also the case for the docs and podvm directories. Maybe this is something we can discuss on the call later?

@stevenhorsman based on this comment, I moved some common scripts (check, lint, ...) to the root hack folder, and also created a new hack/release-helper.sh to help create git tags in the release process.
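In a multi-module Go repo, each module needs its own path-prefixed tag for go to resolve versions, which is the kind of repetitive command generation a release helper can take over. A hypothetical sketch of the idea (the module list and version are illustrative, not the actual contents of hack/release-helper.sh):

```shell
#!/bin/bash
# Illustrative only: print the git tag command for every module under src/.
set -euo pipefail

version="v0.9.0"
modules="cloud-api-adaptor cloud-providers peerpod-ctrl peerpodconfig-ctrl webhook csi-wrapper"

for mod in ${modules}; do
  # Go resolves a tag like src/cloud-providers/v0.9.0 as version v0.9.0
  # of the module that lives in the src/cloud-providers directory
  echo "git tag src/${mod}/${version} main"
done
```

Printing the commands rather than running them lets the release manager review the tags before pushing them.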

@liudalibj
Member Author

I also verified that test_e2e_libvirt will only work after this PR is in the main branch.
Here is the verification PR: liudalibj#3

@liudalibj
Member Author

root@liudali-s390x-libvirt:~/cloud-api-adaptor/src/cloud-api-adaptor# kubectl get po -A
NAMESPACE                        NAME                                               READY   STATUS             RESTARTS      AGE
confidential-containers-system   cc-operator-controller-manager-857f844f7d-qbdx8    2/2     Running            1 (26m ago)   60m
confidential-containers-system   cc-operator-daemon-install-95hrv                   1/1     Running            0             59m
confidential-containers-system   cc-operator-pre-install-daemon-sfh67               1/1     Running            0             60m
confidential-containers-system   cloud-api-adaptor-daemonset-z4vsz                  1/1     Running            0             60m
confidential-containers-system   peerpod-ctrl-controller-manager-74c65cd59f-6hbsb   2/2     Running            1 (26m ago)   60m
default                          nginx                                              1/1     Running            0             25m
ingress-nginx                    ingress-nginx-admission-create-hpxzc               0/1     Completed          0             91m
ingress-nginx                    ingress-nginx-admission-patch-gltxq                0/1     Completed          2             91m
ingress-nginx                    ingress-nginx-controller-7bf7bc78dc-qcs8r          0/1     ImagePullBackOff   0             91m
kube-flannel                     kube-flannel-ds-2ldqk                              1/1     Running            0             89m
kube-flannel                     kube-flannel-ds-5c76z                              1/1     Running            0             91m
kube-system                      coredns-787d4945fb-7pzlt                           1/1     Running            0             91m
kube-system                      coredns-787d4945fb-mkp7l                           1/1     Running            0             91m
kube-system                      etcd-peer-pods-ctlplane-0                          1/1     Running            0             91m
kube-system                      kube-apiserver-peer-pods-ctlplane-0                1/1     Running            0             91m
kube-system                      kube-controller-manager-peer-pods-ctlplane-0       1/1     Running            2 (26m ago)   91m
kube-system                      kube-proxy-jwc5d                                   1/1     Running            0             89m
kube-system                      kube-proxy-x8jqw                                   1/1     Running            0             91m
kube-system                      kube-scheduler-peer-pods-ctlplane-0                1/1     Running            2 (26m ago)   91m
root@liudali-s390x-libvirt:~/cloud-api-adaptor/src/cloud-api-adaptor#
kubectl logs -f -n confidential-containers-system   cloud-api-adaptor-daemonset-z4vsz
+ exec cloud-api-adaptor libvirt -uri 'qemu+ssh://root@192.168.122.1/system?no_verify=1' -data-dir /opt/data-dir -pods-dir /run/peerpod/pods -network-name default -pool-name default -disable-cvm -socket /run/peerpod/hypervisor.sock
cloud-api-adaptor version v0.9.0.alpha.1-dev
  commit: 7d3fdfc81246c603c9bc70a3c5d34a82f832ae2a
  go: go1.21.8
cloud-api-adaptor: starting Cloud API Adaptor daemon for "libvirt"
2024/03/21 07:22:02 [adaptor/cloud/libvirt] libvirt config: &libvirt.Config{URI:"qemu+ssh://root@192.168.122.1/system?no_verify=1", PoolName:"default", NetworkName:"default", DataDir:"/opt/data-dir", DisableCVM:true, VolName:"podvm-base.qcow2", LaunchSecurity:"", Firmware:"/usr/share/edk2/ovmf/OVMF_CODE.fd"}
2024/03/21 07:22:03 [adaptor/cloud/libvirt] Created libvirt connection
2024/03/21 07:22:03 [adaptor] server config: &adaptor.ServerConfig{TLSConfig:(*tlsutil.TLSConfig)(0xc000511180), SocketPath:"/run/peerpod/hypervisor.sock", CriSocketPath:"", PauseImage:"", PodsDir:"/run/peerpod/pods", ForwarderPort:"15150", ProxyTimeout:300000000000, AAKBCParams:"", EnableCloudConfigVerify:false}
2024/03/21 07:22:03 [util/k8sops] initialized PeerPodService
2024/03/21 07:22:03 [probe/probe] Using port: 8000
2024/03/21 07:22:03 [adaptor] server started
2024/03/21 07:22:25 [probe/probe] nodeName: peer-pods-worker-0
2024/03/21 07:22:25 [probe/probe] Selected pods count: 9
2024/03/21 07:22:25 [probe/probe] Ignored standard pod: cc-operator-controller-manager-857f844f7d-qbdx8
2024/03/21 07:22:25 [probe/probe] Ignored standard pod: cc-operator-pre-install-daemon-sfh67
2024/03/21 07:22:25 [probe/probe] Ignored standard pod: cloud-api-adaptor-daemonset-z4vsz
2024/03/21 07:22:25 [probe/probe] Ignored standard pod: peerpod-ctrl-controller-manager-74c65cd59f-6hbsb
2024/03/21 07:22:25 [probe/probe] Ignored standard pod: ingress-nginx-admission-create-hpxzc
2024/03/21 07:22:25 [probe/probe] Ignored standard pod: ingress-nginx-admission-patch-gltxq
2024/03/21 07:22:25 [probe/probe] Ignored standard pod: ingress-nginx-controller-7bf7bc78dc-qcs8r
2024/03/21 07:22:25 [probe/probe] Ignored standard pod: kube-flannel-ds-2ldqk
2024/03/21 07:22:25 [probe/probe] Ignored standard pod: kube-proxy-jwc5d
2024/03/21 07:22:25 [probe/probe] All PeerPods standup. we do not check the PeerPods status any more.
2024/03/21 07:57:18 [podnetwork] routes on netns /var/run/netns/cni-aca249c0-cf2a-69d2-ae44-be82d82f22ea
2024/03/21 07:57:18 [podnetwork]     0.0.0.0/0 via 10.244.1.1 dev eth0
2024/03/21 07:57:18 [podnetwork]     10.244.0.0/16 via 10.244.1.1 dev eth0
2024/03/21 07:57:18 [adaptor/cloud] Credentials file is not in a valid Json format, ignored
2024/03/21 07:57:18 [adaptor/cloud] stored /run/peerpod/pods/dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847/daemon.json
2024/03/21 07:57:18 [adaptor/cloud] create a sandbox dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847 for pod nginx in namespace default (netns: /var/run/netns/cni-aca249c0-cf2a-69d2-ae44-be82d82f22ea)
2024/03/21 07:57:18 [adaptor/cloud/libvirt] LaunchSecurityType: None
2024/03/21 07:57:18 [adaptor/cloud/libvirt] Checking if instance (podvm-nginx-dc5d14f9) exists
2024/03/21 07:57:18 [adaptor/cloud/libvirt] Uploaded volume key /var/lib/libvirt/images/podvm-nginx-dc5d14f9-root.qcow2
2024/03/21 07:57:18 [adaptor/cloud/libvirt] Create cloudInit iso
2024/03/21 07:57:18 [adaptor/cloud/libvirt] Uploading iso file: podvm-nginx-dc5d14f9-cloudinit.iso
2024/03/21 07:57:18 [adaptor/cloud/libvirt] 45056 bytes uploaded
2024/03/21 07:57:18 [adaptor/cloud/libvirt] Volume ID: /var/lib/libvirt/images/podvm-nginx-dc5d14f9-cloudinit.iso
2024/03/21 07:57:18 [adaptor/cloud/libvirt] Create XML for 'podvm-nginx-dc5d14f9'
2024/03/21 07:57:18 [adaptor/cloud/libvirt] Creating VM 'podvm-nginx-dc5d14f9'
2024/03/21 07:57:18 [adaptor/cloud/libvirt] Starting VM 'podvm-nginx-dc5d14f9'
2024/03/21 07:57:20 [adaptor/cloud/libvirt] VM id 290
2024/03/21 07:57:41 [adaptor/cloud/libvirt] Instance created successfully
2024/03/21 07:57:41 [adaptor/cloud/libvirt] created an instance podvm-nginx-dc5d14f9 for sandbox dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847
2024/03/21 07:57:41 [util/k8sops] nginx is now owning a PeerPod object
2024/03/21 07:57:41 [adaptor/cloud] created an instance podvm-nginx-dc5d14f9 for sandbox dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847
2024/03/21 07:57:41 [tunneler/vxlan] vxlan ppvxlan1 (remote 192.168.122.103:4789, id: 555000) created at /proc/1/task/12/ns/net
2024/03/21 07:57:41 [tunneler/vxlan] vxlan ppvxlan1 created at /proc/1/task/12/ns/net
2024/03/21 07:57:41 [tunneler/vxlan] vxlan ppvxlan1 is moved to /var/run/netns/cni-aca249c0-cf2a-69d2-ae44-be82d82f22ea
2024/03/21 07:57:41 [tunneler/vxlan] Add tc redirect filters between eth0 and vxlan1 on pod network namespace /var/run/netns/cni-aca249c0-cf2a-69d2-ae44-be82d82f22ea
2024/03/21 07:57:41 [adaptor/proxy] Listening on /run/peerpod/pods/dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847/agent.ttrpc
2024/03/21 07:57:41 [adaptor/proxy] failed to init cri client, the err: cri runtime endpoint is not specified, it is used to get the image name from image digest
2024/03/21 07:57:41 [adaptor/proxy] Trying to establish agent proxy connection to 192.168.122.103:15150
2024/03/21 07:57:41 [adaptor/proxy] established agent proxy connection to 192.168.122.103:15150
2024/03/21 07:57:41 [adaptor/cloud] agent proxy is ready
2024/03/21 07:57:41 [adaptor/proxy] CreateSandbox: hostname:nginx sandboxId:dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847
2024/03/21 07:57:41 [adaptor/proxy]     storages:
2024/03/21 07:57:41 [adaptor/proxy]         mountpoint:/run/kata-containers/sandbox/shm source:shm fstype:tmpfs driver:ephemeral
2024/03/21 07:57:41 [adaptor/proxy] CreateContainer: containerID:dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847
2024/03/21 07:57:41 [adaptor/proxy]     mounts:
2024/03/21 07:57:41 [adaptor/proxy]         destination:/proc source:proc type:proc
2024/03/21 07:57:41 [adaptor/proxy]         destination:/dev source:tmpfs type:tmpfs
2024/03/21 07:57:41 [adaptor/proxy]         destination:/dev/pts source:devpts type:devpts
2024/03/21 07:57:41 [adaptor/proxy]         destination:/dev/mqueue source:mqueue type:mqueue
2024/03/21 07:57:41 [adaptor/proxy]         destination:/sys source:sysfs type:sysfs
2024/03/21 07:57:41 [adaptor/proxy]         destination:/dev/shm source:/run/kata-containers/sandbox/shm type:bind
2024/03/21 07:57:41 [adaptor/proxy]         destination:/etc/resolv.conf source:/run/kata-containers/shared/containers/dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847-317e09ac72c9a66f-resolv.conf type:bind
2024/03/21 07:57:41 [adaptor/proxy]     annotations:
2024/03/21 07:57:41 [adaptor/proxy]         io.kubernetes.cri.sandbox-id: dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847
2024/03/21 07:57:41 [adaptor/proxy]         io.kubernetes.cri.sandbox-cpu-period: 100000
2024/03/21 07:57:41 [adaptor/proxy]         io.kubernetes.cri.sandbox-name: nginx
2024/03/21 07:57:41 [adaptor/proxy]         io.katacontainers.pkg.oci.container_type: pod_sandbox
2024/03/21 07:57:41 [adaptor/proxy]         io.kubernetes.cri.sandbox-uid: dd6cd64b-745a-48b1-a843-3a8dccfb7d34
2024/03/21 07:57:41 [adaptor/proxy]         nerdctl/network-namespace: /var/run/netns/cni-aca249c0-cf2a-69d2-ae44-be82d82f22ea
2024/03/21 07:57:41 [adaptor/proxy]         io.kubernetes.cri.sandbox-namespace: default
2024/03/21 07:57:41 [adaptor/proxy]         io.kubernetes.cri.sandbox-memory: 0
2024/03/21 07:57:41 [adaptor/proxy]         io.kubernetes.cri.container-type: sandbox
2024/03/21 07:57:41 [adaptor/proxy]         io.katacontainers.pkg.oci.bundle_path: /run/containerd/io.containerd.runtime.v2.task/k8s.io/dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847
2024/03/21 07:57:41 [adaptor/proxy]         io.kubernetes.cri.sandbox-cpu-shares: 2
2024/03/21 07:57:41 [adaptor/proxy]         io.kubernetes.cri.sandbox-cpu-quota: 0
2024/03/21 07:57:41 [adaptor/proxy]         io.kubernetes.cri.sandbox-log-directory: /var/log/pods/default_nginx_dd6cd64b-745a-48b1-a843-3a8dccfb7d34
2024/03/21 07:57:41 [adaptor/proxy] getImageName: no pause image specified uses default pause image: registry.k8s.io/pause:3.7
2024/03/21 07:57:41 [adaptor/proxy] CreateContainer: calling PullImage for "registry.k8s.io/pause:3.7" before CreateContainer (cid: "dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847")
2024/03/21 07:57:42 [adaptor/proxy] CreateContainer: successfully pulled image "registry.k8s.io/pause:3.7"
2024/03/21 07:57:42 [adaptor/proxy] StartContainer: containerID:dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847
2024/03/21 07:57:53 [adaptor/proxy] CreateContainer: containerID:9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3
2024/03/21 07:57:53 [adaptor/proxy]     mounts:
2024/03/21 07:57:53 [adaptor/proxy]         destination:/proc source:proc type:proc
2024/03/21 07:57:53 [adaptor/proxy]         destination:/dev source:tmpfs type:tmpfs
2024/03/21 07:57:53 [adaptor/proxy]         destination:/dev/pts source:devpts type:devpts
2024/03/21 07:57:53 [adaptor/proxy]         destination:/dev/mqueue source:mqueue type:mqueue
2024/03/21 07:57:53 [adaptor/proxy]         destination:/sys source:sysfs type:sysfs
2024/03/21 07:57:53 [adaptor/proxy]         destination:/sys/fs/cgroup source:cgroup type:cgroup
2024/03/21 07:57:53 [adaptor/proxy]         destination:/etc/config source:/run/kata-containers/shared/containers/9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3-7d2a657d0b8035f6-config type:bind
2024/03/21 07:57:53 [adaptor/proxy]         destination:/sealed/etc/secret source:/run/kata-containers/shared/containers/9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3-3ccecd05b5f6a69a-secret type:bind
2024/03/21 07:57:53 [adaptor/proxy]         destination:/etc/hosts source:/run/kata-containers/shared/containers/9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3-ad6fa77b857f1074-hosts type:bind
2024/03/21 07:57:53 [adaptor/proxy]         destination:/dev/termination-log source:/run/kata-containers/shared/containers/9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3-bbbc4927fd1a7dbe-termination-log type:bind
2024/03/21 07:57:53 [adaptor/proxy]         destination:/etc/hostname source:/run/kata-containers/shared/containers/9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3-7da2f76697b03dc0-hostname type:bind
2024/03/21 07:57:53 [adaptor/proxy]         destination:/etc/resolv.conf source:/run/kata-containers/shared/containers/9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3-eb04bd053187c277-resolv.conf type:bind
2024/03/21 07:57:53 [adaptor/proxy]         destination:/dev/shm source:/run/kata-containers/sandbox/shm type:bind
2024/03/21 07:57:53 [adaptor/proxy]         destination:/var/run/secrets/kubernetes.io/serviceaccount source:/run/kata-containers/shared/containers/9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3-1bfe95e6daea0391-serviceaccount type:bind
2024/03/21 07:57:53 [adaptor/proxy]     annotations:
2024/03/21 07:57:53 [adaptor/proxy]         io.kubernetes.cri.image-name: docker.io/library/nginx:latest
2024/03/21 07:57:53 [adaptor/proxy]         io.kubernetes.cri.container-type: container
2024/03/21 07:57:53 [adaptor/proxy]         io.kubernetes.cri.sandbox-name: nginx
2024/03/21 07:57:53 [adaptor/proxy]         io.kubernetes.cri.sandbox-namespace: default
2024/03/21 07:57:53 [adaptor/proxy]         io.kubernetes.cri.container-name: nginx
2024/03/21 07:57:53 [adaptor/proxy]         io.kubernetes.cri.sandbox-uid: dd6cd64b-745a-48b1-a843-3a8dccfb7d34
2024/03/21 07:57:53 [adaptor/proxy]         io.katacontainers.pkg.oci.container_type: pod_container
2024/03/21 07:57:53 [adaptor/proxy]         io.kubernetes.cri.sandbox-id: dc5d14f97adad43f08e9d30c8e93d400193a4b4904786f4776449e693a4dc847
2024/03/21 07:57:53 [adaptor/proxy]         io.katacontainers.pkg.oci.bundle_path: /run/containerd/io.containerd.runtime.v2.task/k8s.io/9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3
2024/03/21 07:57:53 [adaptor/proxy] getImageName: got image from annotations: docker.io/library/nginx:latest
2024/03/21 07:57:53 [adaptor/proxy] CreateContainer: calling PullImage for "docker.io/library/nginx:latest" before CreateContainer (cid: "9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3")
2024/03/21 07:57:57 [adaptor/proxy] CreateContainer: successfully pulled image "docker.io/library/nginx:latest"
2024/03/21 07:57:58 [adaptor/proxy] StartContainer: containerID:9da206395d8d3ed5153668d8d9489a17aed9a73d772a5406a3ffb9eb5250cdd3

@liudalibj liudalibj force-pushed the toplevel-cloud-module branch 2 times, most recently from cfb0874 to 973f9f0 Compare March 21, 2024 09:20
Contributor

@mkulke mkulke left a comment


I ran simple smoke tests using a CAA image built from this branch (launch + delete peerpods) and did not observe any problems.

Reviewed the code as far as feasible: LGTM, great work!

@stevenhorsman
Member

Sorry if this comes across as bike-shedding, but do you think some of the docs should remain in the root of the repo (under a new/old docs directory), as they relate to the project as a whole and are trickier to find after the move? I'm particularly thinking of the architecture and release-process docs, as they cover all the components, not just the CAA.

I think the golang-fedora image build is also not specific to the CAA? https://github.com/liudalibj/cloud-api-adaptor/blob/toplevel-cloud-module/src/cloud-api-adaptor/Dockerfile (it might even just be used in one of the controllers, so it could be a candidate to move into their directories, or be renamed for clarity, I guess?)

@liudalibj
Member Author

liudalibj commented Mar 21, 2024

> Sorry if this comes across as bike-shedding, but do you think some of the docs should remain in the root of the repo (under a new/old docs directory), as they relate to the project as a whole and are trickier to find after the move? I'm particularly thinking of the architecture and release-process docs, as they cover all the components, not just the CAA.
>
> I think the golang-fedora image build is also not specific to the CAA? https://github.com/liudalibj/cloud-api-adaptor/blob/toplevel-cloud-module/src/cloud-api-adaptor/Dockerfile (it might even just be used in one of the controllers, so it could be a candidate to move into their directories, or be renamed for clarity, I guess?)

Good points, I updated the PR accordingly.

Member

@stevenhorsman stevenhorsman left a comment


Just a couple of minor things I spotted scanning through the commits. Obviously there are too many changes to review confidently by hand, so I'll try to run through a few builds as well.

.github/workflows/azure-e2e-test.yml Outdated Show resolved Hide resolved
.github/workflows/caa_build_and_push.yaml Outdated Show resolved Hide resolved
@stevenhorsman
Member

stevenhorsman commented Mar 21, 2024

Hey @liudalibj - I was trying to test this out and hit a problem that I think is related to the directory shuffle - the undeploy of peerpod-ctrl seems to be failing in a way that I don't see in the previous code:

~/cloud-api-adaptor/src/cloud-api-adaptor# make delete
make -C ../peerpod-ctrl undeploy
make[1]: Entering directory '/root/cloud-api-adaptor/src/peerpod-ctrl'
/root/cloud-api-adaptor/src/peerpod-ctrl/bin/kustomize build config/default | kubectl delete --ignore-not-found=false -f -
bash: /root/cloud-api-adaptor/src/peerpod-ctrl/bin/kustomize: No such file or directory
No resources found
make[1]: *** [Makefile:182: undeploy] Error 127
make[1]: Leaving directory '/root/cloud-api-adaptor/src/peerpod-ctrl'
make: *** [Makefile:137: delete] Error 2

In my old code, after running the same thing, I can see that ./peerpod-ctrl/bin/kustomize has been created.

Oh - I think this is broken in main as well, so I've raised #1755 to cover this. Apologies for the noise
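For reference, the usual fix for this class of failure is to make undeploy depend on the tool-download target so ./bin/kustomize is fetched on demand from a clean checkout. A sketch of the common kubebuilder-style Makefile pattern (illustrative, not the repo's actual Makefile):

```make
LOCALBIN ?= $(shell pwd)/bin
KUSTOMIZE ?= $(LOCALBIN)/kustomize

$(LOCALBIN):
	mkdir -p $(LOCALBIN)

# download kustomize into ./bin if it is not already there
$(KUSTOMIZE): $(LOCALBIN)
	test -s $(KUSTOMIZE) || GOBIN=$(LOCALBIN) go install sigs.k8s.io/kustomize/kustomize/v5@latest

# undeploy now works without a prior deploy, because it fetches kustomize first
undeploy: $(KUSTOMIZE)
	$(KUSTOMIZE) build config/default | kubectl delete --ignore-not-found=true -f -
```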

Member

@stevenhorsman stevenhorsman left a comment


I think this (once rebased) is good enough to merge. Thanks for all the hard work on this, DaLi.

Full disclaimer: I tried to build it all locally and run the e2e tests with libvirt, but hit a problem where the CAA couldn't connect to the APF. However, I've been having a bunch of issues locally with kcli and libvirt in other tests, the e2e still passes in CI, and you've linked it working for you, so I don't want to hold this up further.

Contributor

@huoqifeng huoqifeng left a comment


LGTM, fantastic! Thank you @liudalibj

tumberino and others added 6 commits March 22, 2024 11:17
Create new go module, provider, to hold generic provider code.
This can be used between the cloud-api-adaptor and the peerpod-ctrl.
- Extract out cloudinit code for the cloud-api-adaptor pkg
- Extract out various utility functions that should live outside of the cloud-api-adaptor package
- Extract out the generic instance related types
- Make the cloudTable reusable between cloud-api-adaptor and the peerpod-ctrl

Signed-off-by: James Tumber <james.tumber@ibm.com>
Co-authored-by: Da Li Liu <liudali@cn.ibm.com>
- create cloud-providers go mod

Signed-off-by: Da Li Liu <liudali@cn.ibm.com>
- move all function source codes to src folder
- keep dependency go modules in the single repo

Signed-off-by: Da Li Liu <liudali@cn.ibm.com>
- caa local build
- peerpod-ctrl local build
- csi-wrapper-localbuild
- test codes local build

Signed-off-by: Da Li Liu <liudali@cn.ibm.com>
- golang-fedora
- test-azure-e2e
- peerpod-ctrl and peerpodconfig-ctrl
- csi_wrapper_images
- lint and links
- podvm
- webhook
- release
- publish on push
- test-images

Signed-off-by: Da Li Liu <liudali@cn.ibm.com>
- add a new hack/release-helper.sh script to help generate tags command
- update release document

review: address review comments
- keep build-golang-fedora at the root of the repo
- keep the common architecture and Release-Process docs in the root docs folder

Signed-off-by: Da Li Liu <liudali@cn.ibm.com>
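The extraction described in the first commit — a provider table shared by cloud-api-adaptor and peerpod-ctrl — boils down to a registration pattern along these lines (a sketch with simplified, assumed names, not the cloud-providers module's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// Provider is a deliberately tiny stand-in for the cloud-provider
// interface that moved into the shared cloud-providers module.
type Provider interface {
	CreateInstance(podName string) (string, error)
}

var (
	mu        sync.Mutex
	providers = map[string]Provider{}
)

// Register adds a provider under a name such as "libvirt" or "aws".
// Both cloud-api-adaptor and peerpod-ctrl can then look providers up
// here without importing each other, which removes the import cycle.
func Register(name string, p Provider) {
	mu.Lock()
	defer mu.Unlock()
	providers[name] = p
}

// Get returns the provider registered under name, if any.
func Get(name string) (Provider, bool) {
	mu.Lock()
	defer mu.Unlock()
	p, ok := providers[name]
	return p, ok
}

// fakeLibvirt is a toy provider used only for this demonstration.
type fakeLibvirt struct{}

func (fakeLibvirt) CreateInstance(podName string) (string, error) {
	return "podvm-" + podName, nil
}

func init() { Register("libvirt", fakeLibvirt{}) }

func main() {
	p, ok := Get("libvirt")
	fmt.Println(ok) // true
	id, _ := p.CreateInstance("nginx")
	fmt.Println(id) // podvm-nginx
}
```

In the real module each cloud provider registers itself in an init function, so importing a provider package is enough to make it selectable by name.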
@liudalibj liudalibj merged commit 4f22d9a into confidential-containers:main Mar 22, 2024
18 checks passed