
🌱 Testing: Add kubetest to e2e test framework, and make e2e tests easily runnable locally #3593

Closed

Conversation

randomvariable
Member

@randomvariable randomvariable commented Sep 3, 2020

What this PR does / why we need it:

  • Primarily:
    • Adds a kubetest package to /test/framework that allows you to run kubetest against a ClusterProxy (a rough usage sketch follows this list).
      • Runs kubetest in Docker on the host. Tested on Linux and macOS.
      • Adds helpers to inject a shell script that downloads the latest CI artifacts on Debian-based operating systems.
      • Adds conformance jobs to e2e testing. These will need to be set up in test-infra if we want to run them.
    • Did some yak-shaving to make debugging easier, which is why the PR is large:
      • Refactored the Makefile, separating it into include files:
        • Make the apidiff target work locally, so you don't have to push a PR to find out which APIs you broke.
        • Stop local e2e test runs from messing up the manifests, so you don't commit temporary files by accident.
        • Make an attempt at determining the Go files each Docker image depends on, so images don't always have to be rebuilt. Use sentinel files to mark when a Docker image has been built.
        • Add parameter help to the Makefile.
        • Ensure all targets are printed in help.
        • Stream e2e test output in Prow so you can see tests in progress.
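
For context, here is a rough sketch of how the new kubetest package might be driven from a test spec. The names below (kubetest.Run, RunInput and its fields) are assumptions for illustration only, not verbatim from this PR:

// Sketch only: kubetest.Run, RunInput and its fields are assumptions for
// illustration, not the final API added by this PR.
package e2e

import (
	"context"

	"sigs.k8s.io/cluster-api/test/framework"
	"sigs.k8s.io/cluster-api/test/framework/kubetest"
)

// runConformance runs the upstream Kubernetes conformance suite against a
// workload cluster, with kubetest executing inside a Docker container on the host.
func runConformance(ctx context.Context, workloadProxy framework.ClusterProxy, artifactsDir string) error {
	return kubetest.Run(ctx, kubetest.RunInput{
		ClusterProxy:       workloadProxy,                    // workload cluster to test, reached via its kubeconfig
		ArtifactsDirectory: artifactsDir,                     // JUnit output lands here so Prow/Testgrid can pick it up
		ConfigFilePath:     "data/kubetest/conformance.yaml", // hypothetical kubetest/ginkgo configuration file
	})
}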

Also upstreaming a few things from CAPA, plus some framework helpers that will enable deduplication of code currently carried in CAPA:

  • Function to gather JUnit reports from directories, renaming them so that they get picked up by Prow/Testgrid.
  • Structured logging for Ginkgo, adding timestamps and key-values.
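
As an illustration of the structured Ginkgo logging, a minimal sketch is shown below. The helper name (Logf) and exact format are assumptions, though the output shape matches the INFO: ... "time"="..." lines in the test run further down:

// Illustrative sketch only: helper name and format are assumptions, not the
// exact implementation in this PR.
package framework

import (
	"fmt"
	"time"

	"github.com/onsi/ginkgo"
)

// Logf writes an INFO line with a message, optional key/value pairs, and a
// timestamp to the Ginkgo writer, e.g.
//   INFO: Creating namespace foo "namespace"="foo" "time"="2020-09-03T22:09:47Z"
func Logf(msg string, keysAndValues ...interface{}) {
	line := "INFO: " + msg
	for i := 0; i+1 < len(keysAndValues); i += 2 {
		k := fmt.Sprintf("%v", keysAndValues[i])
		v := fmt.Sprintf("%v", keysAndValues[i+1])
		line += fmt.Sprintf(" %q=%q", k, v)
	}
	line += fmt.Sprintf(" %q=%q", "time", time.Now().UTC().Format(time.RFC3339))
	fmt.Fprintln(ginkgo.GinkgoWriter, line)
}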

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #3569
Fixes #2826

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Sep 3, 2020
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign timothysc
You can assign the PR to them by writing /assign @timothysc in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@randomvariable randomvariable changed the title e2e framework: Add conformance testing to test framework WIP: e2e framework: Add conformance testing to test framework Sep 3, 2020
@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Sep 3, 2020
@randomvariable
Member Author

Did a test run forcing an upgrade from v1.18.2 to v1.19.1:

➜ make test-e2e GINKGO_ARGS=-stream
Makefile:570: warning: overriding recipe for target 'clean-bin'
Makefile:566: warning: ignoring old recipe for target 'clean-bin'
cd hack/tools && go build -tags=tools -o bin/ginkgo github.com/onsi/ginkgo/ginkgo
make -C test/e2e run
make[1]: Entering directory '/home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e'
cd /home/naadir/go/src/sigs.k8s.io/cluster-api/hack/tools && go build -tags=tools -o bin/ginkgo github.com/onsi/ginkgo/ginkgo
cd /home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e; /home/naadir/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo -v -trace -tags=e2e -focus= -nodes=1 --noColor=false . -- \
    -e2e.artifacts-folder="/home/naadir/go/src/sigs.k8s.io/cluster-api/_artifacts" \
    -e2e.config="/home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker-dev.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
Running Suite: capi-e2e
=======================
Random Seed: 1599170895
Will run 1 of 9 specs

STEP: Initializing a runtime.Scheme with all the GVK relevant for this test
INFO: Loading the e2e test configuration: "artifacts-directory"="/home/naadir/go/src/sigs.k8s.io/cluster-api/_artifacts" "config-path"="/home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker-dev.yaml"  "time"="2020-09-03T22:08:18Z"
INFO: Creating a clusterctl local repository in artifacts directory: "artifacts-directory"="/home/naadir/go/src/sigs.k8s.io/cluster-api/_artifacts" "config-path"="/home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker-dev.yaml"  "time"="2020-09-03T22:08:18Z"
INFO: Setting up the bootstrap cluster: "artifacts-directory"="/home/naadir/go/src/sigs.k8s.io/cluster-api/_artifacts" "config-path"="/home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker-dev.yaml"  "time"="2020-09-03T22:08:19Z"
INFO: Creating a kind cluster with name "test-35f8nm" "time"="2020-09-03T22:08:19Z"
INFO: Loading image: "gcr.io/k8s-staging-cluster-api/cluster-api-controller-amd64:dev" "time"="2020-09-03T22:08:31Z"
INFO: Loading image: "gcr.io/k8s-staging-cluster-api/kubeadm-bootstrap-controller-amd64:dev" "time"="2020-09-03T22:08:32Z"
INFO: Loading image: "gcr.io/k8s-staging-cluster-api/kubeadm-control-plane-controller-amd64:dev" "time"="2020-09-03T22:08:34Z"
INFO: Loading image: "gcr.io/k8s-staging-cluster-api/capd-manager-amd64:dev" "time"="2020-09-03T22:08:35Z"
INFO: Loading image: "quay.io/jetstack/cert-manager-cainjector:v0.16.1" "time"="2020-09-03T22:08:39Z"
INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-cainjector:v0.16.1" into the kind cluster "test-35f8nm": exit status 1 "time"="2020-09-03T22:08:39Z"
INFO: Loading image: "quay.io/jetstack/cert-manager-webhook:v0.16.1" "time"="2020-09-03T22:08:39Z"
INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-webhook:v0.16.1" into the kind cluster "test-35f8nm": exit status 1 "time"="2020-09-03T22:08:39Z"
INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v0.16.1" "time"="2020-09-03T22:08:39Z"
INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-controller:v0.16.1" into the kind cluster "test-35f8nm": exit status 1 "time"="2020-09-03T22:08:39Z"
INFO: Initializing the bootstrap cluster: "artifacts-directory"="/home/naadir/go/src/sigs.k8s.io/cluster-api/_artifacts" "config-path"="/home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker-dev.yaml"  "time"="2020-09-03T22:08:39Z"
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure docker "time"="2020-09-03T22:08:39Z"
INFO: Waiting for provider controllers to be running "time"="2020-09-03T22:09:15Z"
STEP: Waiting for deployment capd-system/capd-controller-manager to be available
INFO: Creating log watcher for controller capd-system/capd-controller-manager, pod capd-controller-manager-579775c9f8-h4ndm, container kube-rbac-proxy "time"="2020-09-03T22:09:45Z"
INFO: Creating log watcher for controller capd-system/capd-controller-manager, pod capd-controller-manager-579775c9f8-h4ndm, container manager "time"="2020-09-03T22:09:45Z"
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-b8b84b7fb-l8dfr, container kube-rbac-proxy "time"="2020-09-03T22:09:45Z"
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-b8b84b7fb-l8dfr, container manager "time"="2020-09-03T22:09:45Z"
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-7894dd74cf-h2thq, container kube-rbac-proxy "time"="2020-09-03T22:09:45Z"
INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-7894dd74cf-h2thq, container manager "time"="2020-09-03T22:09:45Z"
STEP: Waiting for deployment capi-system/capi-controller-manager to be available
INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-669b6d4594-qvlwl, container kube-rbac-proxy "time"="2020-09-03T22:09:45Z"
INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-669b6d4594-qvlwl, container manager "time"="2020-09-03T22:09:45Z"
STEP: Waiting for deployment capi-webhook-system/capi-controller-manager to be available
INFO: Creating log watcher for controller capi-webhook-system/capi-controller-manager, pod capi-controller-manager-66fc956975-zrjmt, container kube-rbac-proxy "time"="2020-09-03T22:09:46Z"
INFO: Creating log watcher for controller capi-webhook-system/capi-controller-manager, pod capi-controller-manager-66fc956975-zrjmt, container manager "time"="2020-09-03T22:09:46Z"
STEP: Waiting for deployment capi-webhook-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-5f7ccb8bcf-87jtz, container kube-rbac-proxy "time"="2020-09-03T22:09:46Z"
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-5f7ccb8bcf-87jtz, container manager "time"="2020-09-03T22:09:46Z"
STEP: Waiting for deployment capi-webhook-system/capi-kubeadm-control-plane-controller-manager to be available
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-7bf5b66c8b-wrf9h, container kube-rbac-proxy "time"="2020-09-03T22:09:47Z"
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-7bf5b66c8b-wrf9h, container manager "time"="2020-09-03T22:09:47Z"
SSSSS
------------------------------
When testing KCP upgrade with CI artifacts
  Should successfully upgrade Kubernetes to the latest main branch version
  /home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e/kcp_upgrade_ci_artifacts.go:83
INFO: Creating a namespace for hosting the %!q(MISSING) test spec: "namespace"="kcp-upgrade-ci-artifacts-dtmi4c" "spec-name"="kcp-upgrade-ci-artifacts" "kcp-upgrade-ci-artifacts"=null "time"="2020-09-03T22:09:47Z"
INFO: Creating namespace kcp-upgrade-ci-artifacts-dtmi4c "time"="2020-09-03T22:09:47Z"
INFO: Creating event watcher for namespace "kcp-upgrade-ci-artifacts-dtmi4c" "time"="2020-09-03T22:09:47Z"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "cluster-lgkm5x" using the "with-ci-artifacts" template (Kubernetes v1.18.2, 824645820648 control-plane machines, 824645820672 worker machines) "time"="2020-09-03T22:09:47Z"
INFO: Getting the cluster template yaml "time"="2020-09-03T22:09:47Z"
INFO: clusterctl config cluster cluster-lgkm5x --infrastructure (default) --kubernetes-version v1.18.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor with-ci-artifacts "time"="2020-09-03T22:09:47Z"
INFO: Applying the cluster template yaml to the cluster "time"="2020-09-03T22:09:47Z"
configmap/cni-cluster-lgkm5x-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/cluster-lgkm5x-crs-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-lgkm5x-md-0 created
cluster.cluster.x-k8s.io/cluster-lgkm5x created
machinedeployment.cluster.x-k8s.io/cluster-lgkm5x-md-0 created
machinehealthcheck.cluster.x-k8s.io/cluster-lgkm5x-mhc-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-lgkm5x-control-plane created
dockercluster.infrastructure.cluster.x-k8s.io/cluster-lgkm5x created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/cluster-lgkm5x-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/cluster-lgkm5x-md-0 created

INFO: Installing a CNI plugin to the workload cluster "time"="2020-09-03T22:09:48Z"
INFO: Waiting for the cluster infrastructure to be provisioned "time"="2020-09-03T22:09:48Z"
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized "time"="2020-09-03T22:09:58Z"
INFO: Waiting for the first control plane machine managed by kcp-upgrade-ci-artifacts-dtmi4c/cluster-lgkm5x-control-plane to be provisioned "time"="2020-09-03T22:09:58Z"
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready "time"="2020-09-03T22:10:18Z"
INFO: Waiting for control plane kcp-upgrade-ci-artifacts-dtmi4c/cluster-lgkm5x-control-plane to be ready (implies underlying nodes to be ready as well) "time"="2020-09-03T22:10:18Z"
STEP: Waiting for the control plane to be ready
INFO: Waiting for the worker machines to be provisioned "time"="2020-09-03T22:10:48Z"
STEP: waiting for the workload nodes to exist
STEP: Upgrading Kubernetes
INFO: Creating the workload cluster with name "cluster-lgkm5x" using the "with-ci-artifacts" template (Kubernetes v1.19.1-rc.0.7+e78405e50c0edc, 824644185688 control-plane machines, 824644185696 worker machines) "time"="2020-09-03T22:10:48Z"
INFO: Getting the cluster template yaml "time"="2020-09-03T22:10:48Z"
INFO: clusterctl config cluster cluster-lgkm5x --infrastructure (default) --kubernetes-version v1.19.1-rc.0.7+e78405e50c0edc --control-plane-machine-count 1 --worker-machine-count 1 --flavor with-ci-artifacts "time"="2020-09-03T22:10:48Z"
INFO: Applying the cluster template yaml to the cluster "time"="2020-09-03T22:10:48Z"
configmap/cni-cluster-lgkm5x-crs-0 unchanged
clusterresourceset.addons.cluster.x-k8s.io/cluster-lgkm5x-crs-0 unchanged
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-lgkm5x-md-0 configured
cluster.cluster.x-k8s.io/cluster-lgkm5x unchanged
machinedeployment.cluster.x-k8s.io/cluster-lgkm5x-md-0 configured
machinehealthcheck.cluster.x-k8s.io/cluster-lgkm5x-mhc-0 unchanged
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-lgkm5x-control-plane configured
dockercluster.infrastructure.cluster.x-k8s.io/cluster-lgkm5x unchanged
dockermachinetemplate.infrastructure.cluster.x-k8s.io/cluster-lgkm5x-control-plane unchanged
dockermachinetemplate.infrastructure.cluster.x-k8s.io/cluster-lgkm5x-md-0 unchanged

INFO: Installing a CNI plugin to the workload cluster "time"="2020-09-03T22:10:48Z"
INFO: Waiting for the cluster infrastructure to be provisioned "time"="2020-09-03T22:10:48Z"
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized "time"="2020-09-03T22:10:48Z"
INFO: Waiting for the first control plane machine managed by kcp-upgrade-ci-artifacts-dtmi4c/cluster-lgkm5x-control-plane to be provisioned "time"="2020-09-03T22:10:48Z"
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready "time"="2020-09-03T22:10:48Z"
INFO: Waiting for control plane kcp-upgrade-ci-artifacts-dtmi4c/cluster-lgkm5x-control-plane to be ready (implies underlying nodes to be ready as well) "time"="2020-09-03T22:10:48Z"
STEP: Waiting for the control plane to be ready
INFO: Waiting for the worker machines to be provisioned "time"="2020-09-03T22:10:48Z"
STEP: waiting for the workload nodes to exist
STEP: Waiting for control plane to be up to date
STEP: Ensuring all machines have upgraded kubernetes version
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.19.1-rc.0.7+e78405e50c0edc "time"="2020-09-03T22:10:48Z"
STEP: PASSED!
INFO: Dumping all the Cluster API resources in namespace: "cluster-name"="cluster-lgkm5x" "cluster-namespace"="kcp-upgrade-ci-artifacts-dtmi4c" "log-path"="/home/naadir/go/src/sigs.k8s.io/cluster-api/_artifacts/clusters/bootstrap/resources" "namespace"="kcp-upgrade-ci-artifacts-dtmi4c" "spec-name"="kcp-upgrade-ci-artifacts"  "time"="2020-09-03T22:24:08Z"
INFO: Deleting cluster: "cluster-name"="cluster-lgkm5x" "cluster-namespace"="kcp-upgrade-ci-artifacts-dtmi4c" "log-path"="/home/naadir/go/src/sigs.k8s.io/cluster-api/_artifacts/clusters/bootstrap/resources" "namespace"="kcp-upgrade-ci-artifacts-dtmi4c" "spec-name"="kcp-upgrade-ci-artifacts"  "time"="2020-09-03T22:24:08Z"
STEP: deleting cluster cluster-lgkm5x
INFO: Waiting for the Cluster kcp-upgrade-ci-artifacts-dtmi4c/cluster-lgkm5x to be deleted "time"="2020-09-03T22:24:08Z"
STEP: Waiting for cluster cluster-lgkm5x to be deleted
INFO: Deleting namespace used for hosting the test spec: "cluster-name"="cluster-lgkm5x" "cluster-namespace"="kcp-upgrade-ci-artifacts-dtmi4c" "log-path"="/home/naadir/go/src/sigs.k8s.io/cluster-api/_artifacts/clusters/bootstrap/resources" "namespace"="kcp-upgrade-ci-artifacts-dtmi4c" "spec-name"="kcp-upgrade-ci-artifacts"  "time"="2020-09-03T22:24:38Z"
INFO: Deleting namespace kcp-upgrade-ci-artifacts-dtmi4c "time"="2020-09-03T22:24:38Z"

• [SLOW TEST:891.445 seconds]
When testing KCP upgrade with CI artifacts
/home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e/kcp_upgrade_ci_artifacts_test.go:27
  Should successfully upgrade Kubernetes to the latest main branch version
  /home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e/kcp_upgrade_ci_artifacts.go:83
------------------------------
SSSINFO: Tearing down the management cluster: "artifacts-directory"="/home/naadir/go/src/sigs.k8s.io/cluster-api/_artifacts" "config-path"="/home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker-dev.yaml"  "time"="2020-09-03T22:24:38Z"

JUnit report was created: /home/naadir/go/src/sigs.k8s.io/cluster-api/_artifacts/junit.e2e_suite.1.xml

Ran 1 of 9 Specs in 981.426 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 8 Skipped
PASS | FOCUSED

Ginkgo ran 1 suite in 16m23.770716832s
Test Suite Passed
Detected Programmatic Focus - setting exit status to 197
make[1]: *** [Makefile:63: run] Error 197
make[1]: Leaving directory '/home/naadir/go/src/sigs.k8s.io/cluster-api/test/e2e'
make: *** [Makefile:144: test-e2e] Error 2
Time: 0h:16m:24s

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 3, 2020
@randomvariable randomvariable force-pushed the conformance branch 3 times, most recently from 73e9d34 to a4c25af on September 3, 2020 22:37
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 3, 2020
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 8, 2020
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 16, 2020
@randomvariable randomvariable force-pushed the conformance branch 6 times, most recently from 4942138 to d8de599 on September 16, 2020 13:46
@randomvariable randomvariable changed the title WIP: e2e framework: Add conformance testing to test framework Testing: Add kubetest to e2e test framework, and make e2e tests easily runnable locally Sep 16, 2020
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 16, 2020
@randomvariable randomvariable changed the title Testing: Add kubetest to e2e test framework, and make e2e tests easily runnable locally 🌱 Testing: Add kubetest to e2e test framework, and make e2e tests easily runnable locally Sep 16, 2020
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 16, 2020
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 16, 2020
@randomvariable randomvariable force-pushed the conformance branch 3 times, most recently from dd2e40a to 425888c on September 16, 2020 14:42
@randomvariable
Member Author

/test pull-cluster-api-e2e

@randomvariable randomvariable force-pushed the conformance branch 3 times, most recently from 9abe377 to 4087520 on September 16, 2020 15:16
@randomvariable
Member Author

/test pull-cluster-api-e2e-full

Includes addition of kubetest and conformance testing

Signed-off-by: Naadir Jeewa <jeewan@vmware.com>
@randomvariable
Member Author

I'll start breaking up this PR into separate components.

Let's get #3650 and #3639 in

@randomvariable
Member Author

/close

@k8s-ci-robot
Contributor

@randomvariable: Closed this PR.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
