test/extended: exclude ovnkube master/node metrics endpoints from secure test #24857

Merged

Conversation

dcbw (Contributor) commented Apr 9, 2020
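No description body appears for the opening comment, so the change itself is only known from the title: the ovnkube master/node metrics endpoints are excluded from the extended "secure" endpoint test. As a hedged illustration only (the actual diff is not shown in this conversation, and every name below is hypothetical), an exclusion list for such a check could look like this in Go:

```go
// Hypothetical sketch of excluding ovnkube metrics endpoints from a
// secure-endpoints check. None of these identifiers come from the PR diff;
// they are illustrative placeholders.
package main

import (
	"fmt"
	"strings"
)

// excludedMetricsEndpoints lists namespace/name pairs whose metrics ports a
// secure-endpoint scan would skip (assumed namespace for OVN-Kubernetes).
var excludedMetricsEndpoints = map[string]bool{
	"openshift-ovn-kubernetes/ovnkube-master": true,
	"openshift-ovn-kubernetes/ovnkube-node":   true,
}

// endpointIsExcluded reports whether a namespace/name endpoint is on the
// exclusion list and should therefore be skipped by the test.
func endpointIsExcluded(namespace, name string) bool {
	return excludedMetricsEndpoints[namespace+"/"+name]
}

func main() {
	for _, ep := range []string{
		"openshift-ovn-kubernetes/ovnkube-node",
		"openshift-dns/dns-default",
	} {
		parts := strings.SplitN(ep, "/", 2)
		fmt.Printf("%s excluded: %v\n", ep, endpointIsExcluded(parts[0], parts[1]))
	}
}
```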

dcbw (Contributor Author) commented Apr 9, 2020

/test e2e-aws-ovn
/test e2e-ovn-step-registry

dcbw (Contributor Author) commented Apr 9, 2020

[sig-cli][Feature:LegacyCommandTests][Disruptive][Serial] test-cmd: test/cmd/secrets.sh [Suite:openshift]

https://bugzilla.redhat.com/show_bug.cgi?id=1822764

/test e2e-cmd

dcbw (Contributor Author) commented Apr 9, 2020

GCP failed because apparently openshift-sdn got wedged somehow:

I0409 18:20:27.158306    2148 pod.go:503] CNI_ADD openshift-apiserver/apiserver-6dd8565657-wjrmk got IP 10.130.0.15, ofport 16
W0409 18:36:59.973522    2148 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: read tcp 10.0.0.4:55010->10.0.0.2:6443: read: connection timed out") has prevented the request from succeeding
W0409 18:36:59.973522    2148 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: read tcp 10.0.0.4:55010->10.0.0.2:6443: read: connection timed out") has prevented the request from succeeding
W0409 18:36:59.973537    2148 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.Namespace ended with: an error on the server ("unable to decode an event from the watch stream: read tcp 10.0.0.4:55010->10.0.0.2:6443: read: connection timed out") has prevented the request from succeeding
W0409 18:36:59.973549    2148 reflector.go:326] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: an error on the server ("unable to decode an event from the watch stream: read tcp 10.0.0.4:55010->10.0.0.2:6443: read: connection timed out") has prevented the request from succeeding
W0409 18:36:59.973557    2148 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.NetworkPolicy ended with: an error on the server ("unable to decode an event from the watch stream: read tcp 10.0.0.4:55010->10.0.0.2:6443: read: connection timed out") has prevented the request from succeeding
W0409 18:36:59.973577    2148 reflector.go:326] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.EgressNetworkPolicy ended with: an error on the server ("unable to decode an event from the watch stream: read tcp 10.0.0.4:55010->10.0.0.2:6443: read: connection timed out") has prevented the request from succeeding
W0409 18:36:59.973593    2148 reflector.go:326] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: an error on the server ("unable to decode an event from the watch stream: read tcp 10.0.0.4:55010->10.0.0.2:6443: read: connection timed out") has prevented the request from succeeding
W0409 18:36:59.973597    2148 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.Endpoints ended with: an error on the server ("unable to decode an event from the watch stream: read tcp 10.0.0.4:55010->10.0.0.2:6443: read: connection timed out") has prevented the request from succeeding
W0409 18:36:59.977320    2148 pod.go:274] CNI_ADD openshift-insights/insights-operator-7654895cd4-zqrtp failed: Get https://api-int.ci-op-9cj53xi2-2a78c.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-7654895cd4-zqrtp: read tcp 10.0.0.4:55010->10.0.0.2:6443: read: connection timed out

which was during the time that the cluster-monitoring-operator was supposed to be coming up:

Apr 09 18:21:08.163813 ci-op-4gsnv-m-2.c.openshift-gce-devel-ci.internal hyperkube[1468]: I0409 18:21:08.163223    1468 kuberuntime_manager.go:422] No sandbox for pod "cluster-monitoring-operator-57b674f486-qwrbb_openshift-monitoring(be17ce3a-1c7d-4896-93bf-6e945c2cf090)" can be found. Need to start a new one
Apr 09 18:21:08.163813 ci-op-4gsnv-m-2.c.openshift-gce-devel-ci.internal hyperkube[1468]: I0409 18:21:08.163287    1468 kuberuntime_manager.go:650] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0 1] ContainersToKill:map[] EphemeralContainersToStart:[]} for pod "cluster-monitoring-operator-57b674f486-qwrbb_openshift-monitoring(be17ce3a-1c7d-4896-93bf-6e945c2cf090)"
Apr 09 18:21:08.166959 ci-op-4gsnv-m-2.c.openshift-gce-devel-ci.internal crio[1385]: time="2020-04-09 18:21:08.166912128Z" level=info msg="attempting to run pod sandbox with infra container: openshift-monitoring/cluster-monitoring-operator-57b674f486-qwrbb/POD" id=ed5bf23e-fc87-49c8-a94f-ed66ed491288
Apr 09 18:21:08.383965 ci-op-4gsnv-m-2.c.openshift-gce-devel-ci.internal hyperkube[1468]: I0409 18:21:08.383363    1468 manager.go:950] Added container: "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe17ce3a_1c7d_4896_93bf_6e945c2cf090.slice/crio-4a686db2273bd461f4cc77107852f533b3131549922efaeabd1ec856c1800a50.scope" (aliases: [k8s_POD_cluster-monitoring-operator-57b674f486-qwrbb_openshift-monitoring_be17ce3a-1c7d-4896-93bf-6e945c2cf090_0 4a686db2273bd461f4cc77107852f533b3131549922efaeabd1ec856c1800a50], namespace: "crio")
Apr 09 18:21:08.521467 ci-op-4gsnv-m-2.c.openshift-gce-devel-ci.internal crio[1385]: time="2020-04-09 18:21:08.521407142Z" level=info msg="Got pod network &{Name:cluster-monitoring-operator-57b674f486-qwrbb Namespace:openshift-monitoring ID:4a686db2273bd461f4cc77107852f533b3131549922efaeabd1ec856c1800a50 NetNS:/proc/19873/ns/net Networks:[] RuntimeConfig:map[multus-cni-network:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
Apr 09 18:21:08.548492 ci-op-4gsnv-m-2.c.openshift-gce-devel-ci.internal crio[1385]: time="2020-04-09 18:21:08.548324851Z" level=info msg="Got pod network &{Name:cluster-monitoring-operator-57b674f486-qwrbb Namespace:openshift-monitoring ID:4a686db2273bd461f4cc77107852f533b3131549922efaeabd1ec856c1800a50 NetNS:/proc/19873/ns/net Networks:[] RuntimeConfig:map[multus-cni-network:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
Apr 09 18:25:08.164024 ci-op-4gsnv-m-2.c.openshift-gce-devel-ci.internal hyperkube[1468]: E0409 18:25:08.164031    1468 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "cluster-monitoring-operator-57b674f486-qwrbb_openshift-monitoring(be17ce3a-1c7d-4896-93bf-6e945c2cf090)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
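For context on the reflector warnings above: they come from client-go shared informers whose long-lived watch connections to the apiserver timed out. The sketch below is illustrative only, not openshift-sdn's actual code, and the kubeconfig path is an assumption; it shows how such informers are typically wired up and where the WATCH connection that timed out would live.

```go
// Illustrative sketch: a client-go SharedInformerFactory like the ones behind
// the Pod/Service/Namespace reflector warnings in the logs above.
// Not taken from openshift-sdn; the kubeconfig path is assumed.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: out-of-cluster kubeconfig; openshift-sdn itself runs in-cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Each informer from the factory runs a reflector that LISTs once and then
	// holds a WATCH open against the apiserver; Service, Namespace, etc.
	// informers from the same factory each run their own reflector.
	factory := informers.NewSharedInformerFactory(client, 30*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { fmt.Println("pod added") },
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	// Blocks until the initial LIST completes; afterwards the reflector keeps
	// the watch connection open (the kind of connection that timed out in the
	// logs above) and re-lists/re-watches when it breaks.
	factory.WaitForCacheSync(stopCh)
	fmt.Println("caches synced")
}
```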

dcbw (Contributor Author) commented Apr 9, 2020

/test e2e-gcp

dcbw (Contributor Author) commented Apr 10, 2020

/retest

dcbw (Contributor Author) commented Apr 10, 2020

/test e2e-aws-ovn

dcbw (Contributor Author) commented Apr 10, 2020

/retest

knobunc (Contributor) commented Apr 10, 2020

/approve
/lgtm

openshift-ci-robot added the lgtm label (Indicates that a PR is ready to be merged) on Apr 10, 2020
openshift-ci-robot commented

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dcbw, knobunc

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files) on Apr 10, 2020
openshift-bot (Contributor) commented

/retest

Please review the full test history for this PR and help us cut down flakes.

6 similar comments

openshift-merge-robot merged commit ddee3f6 into openshift:master on Apr 11, 2020