don't dump logs if the cluster doesn't exist #16054

Merged (1 commit, Oct 26, 2023)

Conversation
Conversation

upodroid (Member)

This change skips dumping logs if the cluster doesn't exist.

With this change, log lines like the following should no longer appear:
I1014 18:57:08.799731    6300 dumplogs.go:46] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops toolbox dump --name e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh3199524214/key --ssh-user prow
I1014 18:57:08.799751    6300 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops toolbox dump --name e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh3199524214/key --ssh-user prow
Error: Cluster.kops.k8s.io "e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local" not found
W1014 18:57:09.079398    6300 dumplogs.go:54] kops toolbox dump failed: exit status 1
I1014 18:57:09.079536    6300 dumplogs.go:86] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops get cluster --name e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local -o yaml
I1014 18:57:09.079555    6300 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops get cluster --name e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local -o yaml
I1014 18:57:09.306903    6300 dumplogs.go:86] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops get instancegroups --name e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local -o yaml
I1014 18:57:09.306928    6300 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops get instancegroups --name e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local -o yaml
W1014 18:57:09.544229    6300 dumplogs.go:59] cluster manifest dump failed: exit status 1
exit status 1
I1014 18:57:09.544273    6300 dumplogs.go:105] kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1014 18:57:09.544283    6300 local.go:42] ⚙️ kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1014 18:57:09.610619    6300 dumplogs.go:214] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops toolbox dump --name e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local --private-key /tmp/kops-ssh3199524214/key --ssh-user prow -o yaml
I1014 18:57:09.610645    6300 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops toolbox dump --name e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local --private-key /tmp/kops-ssh3199524214/key --ssh-user prow -o yaml
Error: Cluster.kops.k8s.io "e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local" not found
I1014 18:57:09.837847    6300 dumplogs.go:138] kubectl --request-timeout 5s get csinodes --all-namespaces --show-managed-fields -o yaml
I1014 18:57:09.837873    6300 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces --show-managed-fields -o yaml
W1014 18:57:09.919915    6300 dumplogs.go:144] Failed to get csinodes: exit status 1
I1014 18:57:09.920006    6300 dumplogs.go:138] kubectl --request-timeout 5s get csidrivers --all-namespaces --show-managed-fields -o yaml
I1014 18:57:09.920017    6300 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces --show-managed-fields -o yaml
W1014 18:57:09.997168    6300 dumplogs.go:144] Failed to get csidrivers: exit status 1
I1014 18:57:09.997291    6300 dumplogs.go:138] kubectl --request-timeout 5s get storageclasses --all-namespaces --show-managed-fields -o yaml
I1014 18:57:09.997306    6300 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces --show-managed-fields -o yaml
W1014 18:57:10.065970    6300 dumplogs.go:144] Failed to get storageclasses: exit status 1
I1014 18:57:10.066096    6300 dumplogs.go:138] kubectl --request-timeout 5s get persistentvolumes --all-namespaces --show-managed-fields -o yaml
I1014 18:57:10.066115    6300 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces --show-managed-fields -o yaml
W1014 18:57:10.131111    6300 dumplogs.go:144] Failed to get persistentvolumes: exit status 1
I1014 18:57:10.131196    6300 dumplogs.go:138] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces --show-managed-fields -o yaml
I1014 18:57:10.131205    6300 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces --show-managed-fields -o yaml
W1014 18:57:10.204529    6300 dumplogs.go:144] Failed to get mutatingwebhookconfigurations: exit status 1
I1014 18:57:10.204621    6300 dumplogs.go:138] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces --show-managed-fields -o yaml
I1014 18:57:10.204630    6300 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces --show-managed-fields -o yaml
W1014 18:57:10.269060    6300 dumplogs.go:144] Failed to get validatingwebhookconfigurations: exit status 1
I1014 18:57:10.269213    6300 dumplogs.go:138] kubectl --request-timeout 5s get clusterrolebindings --all-namespaces --show-managed-fields -o yaml
I1014 18:57:10.269231    6300 local.go:42] ⚙️ kubectl --request-timeout 5s get clusterrolebindings --all-namespaces --show-managed-fields -o yaml
W1014 18:57:10.334875    6300 dumplogs.go:144] Failed to get clusterrolebindings: exit status 1
I1014 18:57:10.334968    6300 dumplogs.go:138] kubectl --request-timeout 5s get clusterroles --all-namespaces --show-managed-fields -o yaml
I1014 18:57:10.334980    6300 local.go:42] ⚙️ kubectl --request-timeout 5s get clusterroles --all-namespaces --show-managed-fields -o yaml
W1014 18:57:10.408658    6300 dumplogs.go:144] Failed to get clusterroles: exit status 1
I1014 18:57:10.408688    6300 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
W1014 18:57:10.472718    6300 dumplogs.go:155] failed to get namespaces: exit status 1
W1014 18:57:10.472791    6300 dumplogs.go:64] cluster info dump failed: exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
W1014 18:57:10.472831    6300 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
exit status 1
I1014 18:57:10.472851    6300 down.go:48] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops delete cluster --name e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local --yes
I1014 18:57:10.472862    6300 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops delete cluster --name e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local --yes
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-pr16011.pull-kops-e2e-k8s-gce-cilium.k8s.local" not found
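The fix amounts to a guard at the top of the log-dump path: check whether the cluster exists before running any of the kops/kubectl dump commands. A minimal, self-contained sketch of that control flow follows; the function names (dumpLogs, exists, dump) are illustrative stand-ins, not the deployer's actual API.

```go
package main

import "fmt"

// dumpLogs returns early when the cluster cannot be found, so none of the
// kops toolbox dump / kubectl commands run, and none of their failures are
// logged. The exists and dump callbacks are injected here so the flow can be
// exercised without a real cluster; they are hypothetical names.
func dumpLogs(exists func() bool, dump func() error) error {
	if !exists() {
		fmt.Println("cluster does not exist, skipping log dump")
		return nil
	}
	return dump()
}

func main() {
	// Cluster missing: the dump function is never invoked.
	_ = dumpLogs(
		func() bool { return false },
		func() error {
			fmt.Println("dumping logs")
			return nil
		},
	)
}
```

Guarding once up front avoids the cascade of "exit status 1" warnings seen above, where each individual dump command failed in turn against a cluster that was never created.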

@k8s-ci-robot added the cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) and size/S (denotes a PR that changes 10-29 lines, ignoring generated files) labels on Oct 24, 2023
hakman (Member) commented Oct 24, 2023

/lgtm
/cc @rifelpet

@k8s-ci-robot added the lgtm ("Looks good to me"; indicates that a PR is ready to be merged) label on Oct 24, 2023
k8s-ci-robot (Contributor)
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: hakman

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved (indicates a PR has been approved by an approver from all required OWNERS files) label on Oct 26, 2023
@k8s-ci-robot merged commit 21b9af6 into kubernetes:master on Oct 26, 2023
22 checks passed
@k8s-ci-robot added this to the v1.29 milestone on Oct 26, 2023
3 participants