[Federation][init-11] Switch federation e2e tests to use the new federation control plane bootstrap via the `kubefed init` command.
#35961
Conversation
Force-pushed from 64c820e to 71e79ef.
Jenkins e2e jobs failed for commit 71e79ef281d4e2874bfe3b24b599667f6ea8b517: GKE smoke, GCE, Kubemark GCE, GCI GKE smoke, GCE etcd3, GCI GCE, and GCE Node.
Force-pushed from 71e79ef to 04c83ce.
Jenkins unit/integration and verification failed for commit 04c83ce.
```
# create_cluster_secrets creates the secrets containing the kubeconfigs
# of the participating clusters in the host cluster. The kubeconfigs
# themselves are created while deploying the clusters, i.e. when kube-up
# is run.
function create_cluster_secrets() {
```
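The diff above only shows the helper's header. A minimal sketch of what such a function might do follows; the directory layout, context names, and variables here are assumptions for illustration (not the actual implementation), and the `kubectl` commands are echoed rather than executed so the sketch stays side-effect free.

```shell
# Hypothetical sketch: create one secret per joined cluster in the host
# cluster's federation-system namespace, each holding that cluster's
# kubeconfig. KUBECONFIG_DIR and HOST_CONTEXT are assumed names.
function create_cluster_secrets() {
  local kubeconfig_dir="${KUBECONFIG_DIR:-/tmp/kubeconfigs}"
  local kubeconfig cluster
  for kubeconfig in "${kubeconfig_dir}"/*/kubeconfig; do
    [ -f "${kubeconfig}" ] || continue
    # Derive the secret name from the per-cluster directory name.
    cluster="$(basename "$(dirname "${kubeconfig}")")"
    echo kubectl --context="${HOST_CONTEXT:-federation-host}" \
      --namespace=federation-system \
      create secret generic "${cluster}" \
      --from-file=kubeconfig="${kubeconfig}"
  done
}
```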
Today the e2e framework just brings up the federation control plane and each test has to register clusters.
In the future, we should register the clusters in the e2e framework as well.
We can do this now since we have federated namespaces and each test runs in its own independent namespace, so we do not have to unregister and re-register clusters in each test.
We will not need this code then; we can use `kubefed join`.
I am not entirely sure if we want to do this in the framework or using `kubefed join`. Each e2e test should have the autonomy to make this decision. We have tests (and we want them) that exercise the API server without having the clusters registered. Pre-registering clusters will make it difficult to run these tests: they will have to deregister the pre-registered clusters before running their tests and then register those clusters back. That reverses the setup and teardown order, which in turn makes the tests hard to reason about and debug.
It also makes the tests complicated because every test must now ensure that resources are cleaned up in all the underlying clusters before it ends.
We also need tests that incrementally register/deregister clusters: for example, start with 1 cluster, add two, delete one, and so on, and test how the controllers behave. We don't have good coverage for these kinds of operations today. But the point is, pre-registering clusters makes it hard to write these kinds of tests.
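The per-test registration model discussed above could look roughly like the sketch below: each test registers exactly the clusters it needs and removes them on teardown. The context names are assumptions, `kubefed unjoin` is assumed to be available as the counterpart of `kubefed join`, and the commands are echoed rather than executed.

```shell
# Hypothetical per-test cluster registration helpers. HOST_CONTEXT is an
# assumed variable name; commands are printed, not run.
function register_cluster() {
  echo kubefed join "$1" \
    --host-cluster-context="${HOST_CONTEXT:-federation-host}"
}

function deregister_cluster() {
  echo kubefed unjoin "$1" \
    --host-cluster-context="${HOST_CONTEXT:-federation-host}"
}
```

A test exercising incremental membership could then call `register_cluster` per cluster in setup and `deregister_cluster` in teardown, rather than relying on a fixed pre-registered set.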
Thanks, LGTM.
An idea: instead of replacing the existing scripts, we can leave them in place and add support for bootstrapping via kubefed behind a USE_KUBEFED flag. @madhusudancs What do you think?
@nikhiljindal yeah, that's a good idea. Plus, if anyone is using the existing scripts, we don't unnecessarily break them.
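The USE_KUBEFED switch proposed above could be sketched as follows. The function name and the legacy script path are placeholders, not the real entry points, and the commands are echoed rather than executed; the real dispatch would live in the federation deployment scripts.

```shell
# Hypothetical sketch of the USE_KUBEFED switch: keep the existing
# scripts as the default and opt in to kubefed-based bootstrap.
function deploy_federation() {
  if [ "${USE_KUBEFED:-}" = "true" ]; then
    # New path: bootstrap the control plane with kubefed init.
    echo kubefed init federation \
      --host-cluster-context="${HOST_CONTEXT:-federation-host}"
  else
    # Old path: the existing deployment scripts (placeholder path).
    echo ./federation/cluster/federation-up.sh
  fi
}
```

Because the flag defaults to the old path when unset, existing users of the scripts are unaffected unless they explicitly export USE_KUBEFED=true.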
@nikhiljindal Does the milestone v1.5 stay on this one too?
@dims yeah. This should go into 1.5 for confidence.
Force-pushed from 04c83ce to 540eb9b.
@matchstick @nikhiljindal @madhusudancs: last call for a review, or please move it to 1.6.
Yes, it does. @madhusudancs please add the non-release-blocker label.
@madhusudancs so this stays on for a point release?
I am closing this in favor of PR #37215.
Automatic merge from submit-queue

[Federation][init-11.2] Use the USE_KUBEFED env var to choose between the old and new federation deployment methods.

This is a continuation of #35961. The USE_KUBEFED variable selects how the federation control plane is deployed; if it is not defined, federation is brought up using the old method, i.e. the scripts. I have verified that federation comes up using the old method with the following steps:

```
$ export FEDERATION=true
$ export E2E_ZONES="asia-east1-c"
$ export FEDERATION_PUSH_REPO_BASE=gcr.io/<my-project>
$ KUBE_RELEASE_RUN_TESTS=n KUBE_FASTBUILD=true go run hack/e2e.go -v -build
$ build-tools/push-federation-images.sh
$ go run hack/e2e.go -v --up
```

#35961 should merge before this PR. @madhusudancs
This is ready for review now
Please review only the last commit here. This is based on PR #36294 which will be reviewed independently.
Design Doc: PR #35960
cc @kubernetes/sig-cluster-federation @quinton-hoole @nikhiljindal