Revert "Revert "Add support for running GCI on the GCE cloud provider"" #25927
Conversation
@cjcullen told me the failed test cases may be caused by incorrect firewall rules. My cluster was created using the gcloud container command, rather than by the Jenkins tests. After manually adding the needed firewall rules, all the failed test cases pass. This means the PR can pass both the k8s and GKE e2e tests.
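For reference, firewall rules like the ones I added manually are created with `gcloud compute firewall-rules create`. A hedged sketch below — the rule name, network, and port list are placeholders, not the exact rules from my setup, and the command is echoed rather than executed so it is safe to run anywhere:

```shell
#!/bin/sh
# Sketch: open the ports the e2e tests need on the cluster's network.
# RULE_NAME, NETWORK, and the port list are illustrative placeholders.
RULE_NAME="e2e-allow-node-ports"   # hypothetical rule name
NETWORK="default"                  # hypothetical network

# Echo the gcloud command instead of running it (no credentials needed):
echo "gcloud compute firewall-rules create ${RULE_NAME} --network=${NETWORK} --allow=tcp:80,tcp:8080 --source-ranges=0.0.0.0/0"
```

Drop the `echo` to actually create the rule; the real ports depend on which e2e suites you run.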
@david-mcmahon -- the original PR had a release note but was reverted. Will its release note show up in the next release, or should this PR have a release note attached?
Marking as e2e-not-required since the merge bot tests won't actually test this code (so there's no point in wasting cycles).
lgtm
@roberthbailey Each PR has its own independent release note state and text. In the case of an original, a revert, and a revert-revert, I'd recommend
I will remove the commit "GCI: add support for CIDR allocator for NodeController" from this PR, as the original PR will be reverted shortly. |
PR #25997 was abandoned. I added the support for the CIDR allocator for NodeController back. The PR is ready.
This reverts commit 40f53b1.
@roberthbailey I verified the PR with GKE cluster creation and the kube-system pods. I also ran most e2e test cases on GCE and GKE. Please merge it, and then let's monitor the GKE CI Jenkins.
GCE e2e build/test passed for commit 6bb0a25. |
This is for resubmitting PR #25425, which was reverted due to breakage in GKE cluster initialization. The root cause is that /etc/gce.conf was incorrectly deleted. Compared with the previous PR, this one corrects the logic.
Tests: (1) Spin up a k8s cluster and a GKE cluster and verify that all kube-system pods are running. (2) Run e2e tests on the k8s cluster and the GKE cluster.
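To illustrate the corrected logic, here is a minimal sketch of the guard around deleting the config file. The flag name `USE_GCE_CONF` is hypothetical (the real startup scripts use their own variables), and a temp file stands in for /etc/gce.conf so the sketch is safe to run:

```shell
#!/bin/sh
# Sketch of the fix: only delete the GCE config file when the cluster
# does not need it. GKE clusters rely on /etc/gce.conf, so deleting it
# unconditionally (the original bug) broke GKE cluster initialization.
GCE_CONF="$(mktemp)"          # stand-in for /etc/gce.conf
echo "[global]" > "${GCE_CONF}"
USE_GCE_CONF="true"           # hypothetical flag: set when the file is needed (e.g. on GKE)

if [ "${USE_GCE_CONF}" != "true" ]; then
  rm -f "${GCE_CONF}"         # the original PR ran this unconditionally
fi

if [ -f "${GCE_CONF}" ]; then
  RESULT="preserved"
else
  RESULT="removed"
fi
echo "gce.conf ${RESULT}"
rm -f "${GCE_CONF}"           # clean up the temp stand-in
```

With the flag set, the file survives startup; without it, the file is removed as before.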
One thing confuses me: two tests fail on GKE but all pass on the OSS k8s cluster.
The second test case is stranger, as kube-proxy is running on a ContainerVM-based node.
@roberthbailey do you know any potential reason for the different results on GKE and k8s? For the GKE e2e tests, I followed the instructions at https://g3doc.corp.google.com/cloud/kubernetes/g3doc/dev/testing.md?cl=head#running-e2es-against-a-gke-cluster. I am not sure whether I missed some configuration that led to the failures.