
CI: Restart all kube-system pods in GKE #11136

Merged
merged 1 commit into cilium:master from raybejjani:ci-restart-k8s-pods Apr 29, 2020

Conversation

raybejjani
Contributor

Some of these pods are scheduled before cilium and are never managed by it.
They can still have issues, however, and some are critical to the cluster's health.

@raybejjani raybejjani added the wip, area/CI, and release-note/ci labels Apr 24, 2020
@maintainer-s-little-helper maintainer-s-little-helper bot added this to In progress in 1.8.0 Apr 24, 2020
@raybejjani
Contributor Author

test-gke K8sDemosTest.*

@raybejjani
Contributor Author

test-gke

@coveralls

coveralls commented Apr 24, 2020

Coverage Status

Coverage decreased (-0.01%) to 44.785% when pulling 10e71cd on raybejjani:ci-restart-k8s-pods into d07eec5 on cilium:master.

case helpers.CIIntegrationGKE:
	By("Restarting all kube-system pods")
	if res := vm.DeleteResource("pod", fmt.Sprintf("-n %s --all", helpers.KubeSystemNamespace)); !res.WasSuccessful() {
		log.Warningf("Unable to delete DNS pods: %s", res.OutputPrettyPrint())

Member

I think this msg should not be DNS-specific.

Contributor Author

yup!
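
For reference, the fix amounts to something like the following sketch, reusing the helpers from the snippet above; the exact wording of the final log message is not shown in this thread:

case helpers.CIIntegrationGKE:
	By("Restarting all kube-system pods")
	// Delete every pod in kube-system so they are recreated under Cilium management.
	if res := vm.DeleteResource("pod", fmt.Sprintf("-n %s --all", helpers.KubeSystemNamespace)); !res.WasSuccessful() {
		// Generic wording: the restart covers all kube-system pods, not just DNS.
		log.Warningf("Unable to restart kube-system pods: %s", res.OutputPrettyPrint())
	}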

@@ -138,6 +143,9 @@ func DeployCiliumOptionsAndDNS(vm *helpers.Kubectl, ciliumFilename string, optio
	ExpectCiliumReady(vm)
	ExpectCiliumOperatorReady(vm)
	ExpectKubeDNSReady(vm)

	err := vm.WaitforPods(helpers.KubeSystemNamespace, "", longTimeout)

Member

Shouldn't this block be done only if we are running in GKE?

Contributor Author

I figured it was harmless anyway. I can put it in an if if you'd like
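
A guard could look like the sketch below; helpers.IsIntegration is assumed here from the test helpers (it is not shown in this diff), and the error handling is illustrative:

// Assumed guard; the condition used in the final change may differ.
if helpers.IsIntegration(helpers.CIIntegrationGKE) {
	// After restarting kube-system pods on GKE, wait for them all to become ready again.
	err := vm.WaitforPods(helpers.KubeSystemNamespace, "", longTimeout)
	Expect(err).To(BeNil(), "kube-system pods are not ready after restarting them")
}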

@nebril
Member

nebril commented Apr 24, 2020

test-gke

Some of these pods are scheduled before cilium and are never managed by it.
They can still have issues, however, and some are critical to the cluster's health.

Signed-off-by: Ray Bejjani <ray@isovalent.com>
@raybejjani
Contributor Author

raybejjani commented Apr 27, 2020

@raybejjani
Contributor Author

raybejjani commented Apr 28, 2020

test-gke (passed)

@raybejjani
Contributor Author

test-me-please

@raybejjani raybejjani marked this pull request as ready for review April 28, 2020 12:55
@raybejjani raybejjani requested a review from a team as a code owner April 28, 2020 12:55
@raybejjani
Contributor Author

test-gke

1 similar comment
@raybejjani
Contributor Author

test-gke

@raybejjani raybejjani merged commit fe0d91d into cilium:master Apr 29, 2020
1.8.0 automation moved this from In progress to Merged Apr 29, 2020
@raybejjani raybejjani deleted the ci-restart-k8s-pods branch April 29, 2020 10:08