
Add containernetworking-plugin RPM's bin location to CRI-O config #1630

Conversation

@alexanderConstantinescu (Contributor)

- What I did

RHCOS ships with the containernetworking-plugins RPM (a dependency of CRI-O and podman), e.g.:

$ rpm -qa | grep containernetwo
containernetworking-plugins-0.8.3-4.module+el8.1.1+5259+bcdd613a.x86_64

These binaries are needed for rudimentary networking actions, yet the CNO currently copies them onto the host in a much more intricate way: it checks /etc/os-release and selects the right binary depending on the underlying OS.
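For illustration only, a rough, hypothetical sketch of the kind of os-release check described above; the variant names and paths are assumptions for the example, not code taken from the CNO:

# Hypothetical sketch, not the CNO's actual code: pick a plugin build
# based on /etc/os-release, then copy it onto the host.
. /etc/os-release                     # defines ID, VERSION_ID, ...
case "${ID}:${VERSION_ID}" in
  rhcos:*|rhel:8*) variant=rhel8 ;;   # assumed variant names
  rhel:7*)         variant=rhel7 ;;
  *)               variant=generic ;;
esac
cp "/usr/src/plugins/${variant}/loopback" /opt/cni/bin/loopback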

Since the RPM already exists, let's just add the unpacked RPM's bin location to CRI-O's config so that CRI-O can look there too. The end goal is to stop copying binaries (at least loopback) in the CNO.
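For illustration, a minimal sketch of what the resulting crio.conf fragment could look like, assuming the containernetworking-plugins RPM unpacks its binaries to /usr/libexec/cni (that path is an assumption based on Fedora/RHEL packaging, not copied from this PR's diff):

[crio.network]
# Directories CRI-O searches, in order, when resolving CNI plugin binaries.
plugin_dirs = [
    "/opt/cni/bin",      # where the CNO copies plugins today
    "/usr/libexec/cni",  # assumed unpack location of the RPM's binaries
]

With both directories listed, CRI-O can fall back to the RPM-shipped loopback plugin even on hosts where the CNO has not copied anything yet.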

- How to verify it

- Description for the changelog

Add containernetworking-plugin RPM's bin location to CRI-O config

@alexanderConstantinescu (Contributor, Author) commented Apr 8, 2020

/cc @danwinship @squeed @dcbw @cgwalters

@danwinship (Contributor)

lgtm

@alexanderConstantinescu (Contributor, Author)

Looks like flakes:

/retest

@kikisdeliveryservice (Contributor)

cc: @haircommander

@haircommander (Member)

LGTM from cri-o side

@cgwalters (Member)

/approve

openshift-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Apr 8, 2020.
@wking (Member) commented Apr 8, 2020

Does this change need to get linked to a 4.5 bug so we can backport it alongside rhbz#1802481?

@alexanderConstantinescu (Contributor, Author)

> Does this change need to get linked to a 4.5 bug so we can backport it alongside rhbz#1802481?

It does. But I am going to need a complementary PR in the CNO, so I thought I'd leave the BZ tracking for that. What's the convention otherwise when dealing with multiple cross-repo PRs that reference the same BZ?

@wking (Member) commented Apr 8, 2020

Also, the update job seems to have gotten stuck:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/1630/pull-ci-openshift-machine-config-operator-master-e2e-gcp-upgrade/1627/artifacts/e2e-gcp-upgrade/pods/openshift-cluster-version_cluster-version-operator-758799946f-xvnsh_cluster-version-operator.log | grep 'Running sync.*in state\|Result of work'
I0408 16:35:13.134301       1 sync_worker.go:471] Running sync registry.svc.ci.openshift.org/ci-op-0jz0k2zj/release@sha256:ed9d8645f9bc7736ef6da0f485bb4eb82c5a1694411f4b2e9e8c43d60a5eb5a5 (force=true) on generation 2 in state Updating at attempt 0
I0408 16:40:58.212051       1 task_graph.go:596] Result of work: [deployment openshift-machine-api/machine-api-operator is progressing ReplicaSetUpdated: ReplicaSet "machine-api-operator-6c5db496d" is progressing. Cluster operator openshift-apiserver is still updating Cluster operator config-operator is still updating]
I0408 16:41:22.707011       1 sync_worker.go:471] Running sync registry.svc.ci.openshift.org/ci-op-0jz0k2zj/release@sha256:ed9d8645f9bc7736ef6da0f485bb4eb82c5a1694411f4b2e9e8c43d60a5eb5a5 (force=true) on generation 2 in state Updating at attempt 1
I0408 16:47:07.759017       1 task_graph.go:596] Result of work: [Cluster operator kube-storage-version-migrator is still updating]
I0408 16:47:51.400745       1 sync_worker.go:471] Running sync registry.svc.ci.openshift.org/ci-op-0jz0k2zj/release@sha256:ed9d8645f9bc7736ef6da0f485bb4eb82c5a1694411f4b2e9e8c43d60a5eb5a5 (force=true) on generation 2 in state Updating at attempt 2
I0408 16:53:36.454757       1 task_graph.go:596] Result of work: [Cluster operator insights is still updating Cluster operator openshift-controller-manager is still updating Cluster operator cluster-autoscaler is still updating]
I0408 16:55:08.959993       1 sync_worker.go:471] Running sync registry.svc.ci.openshift.org/ci-op-0jz0k2zj/release@sha256:ed9d8645f9bc7736ef6da0f485bb4eb82c5a1694411f4b2e9e8c43d60a5eb5a5 (force=true) on generation 2 in state Updating at attempt 3
I0408 17:00:54.012934       1 task_graph.go:596] Result of work: [Cluster operator insights is still updating Cluster operator openshift-controller-manager is still updating]
I0408 17:04:18.726927       1 sync_worker.go:471] Running sync registry.svc.ci.openshift.org/ci-op-0jz0k2zj/release@sha256:ed9d8645f9bc7736ef6da0f485bb4eb82c5a1694411f4b2e9e8c43d60a5eb5a5 (force=true) on generation 2 in state Updating at attempt 4
I0408 17:10:03.780088       1 task_graph.go:596] Result of work: [Cluster operator insights is still updating Cluster operator openshift-controller-manager is still updating]
I0408 17:13:08.312313       1 sync_worker.go:471] Running sync registry.svc.ci.openshift.org/ci-op-0jz0k2zj/release@sha256:ed9d8645f9bc7736ef6da0f485bb4eb82c5a1694411f4b2e9e8c43d60a5eb5a5 (force=true) on generation 2 in state Updating at attempt 5
I0408 17:18:53.365331       1 task_graph.go:596] Result of work: [Cluster operator insights is still updating Cluster operator openshift-controller-manager is still updating]
I0408 17:22:08.131304       1 sync_worker.go:471] Running sync registry.svc.ci.openshift.org/ci-op-0jz0k2zj/release@sha256:ed9d8645f9bc7736ef6da0f485bb4eb82c5a1694411f4b2e9e8c43d60a5eb5a5 (force=true) on generation 2 in state Updating at attempt 6
I0408 17:27:53.184166       1 task_graph.go:596] Result of work: [Cluster operator insights is still updating Cluster operator openshift-controller-manager is still updating]
I0408 17:30:59.868386       1 sync_worker.go:471] Running sync registry.svc.ci.openshift.org/ci-op-0jz0k2zj/release@sha256:ed9d8645f9bc7736ef6da0f485bb4eb82c5a1694411f4b2e9e8c43d60a5eb5a5 (force=true) on generation 2 in state Updating at attempt 7
I0408 17:36:44.921347       1 task_graph.go:596] Result of work: [Cluster operator insights is still updating Cluster operator openshift-controller-manager is still updating]

Looking for operators on the old version:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/1630/pull-ci-openshift-machine-config-operator-master-e2e-gcp-upgrade/1627/artifacts/e2e-gcp-upgrade/clusteroperators.json | jq -r '.items[] | (.status.versions[] | select(.name == "operator").version) + " " + .metadata.name' | sort
0.0.1-2020-04-08-154108 dns
0.0.1-2020-04-08-154108 insights
0.0.1-2020-04-08-154108 machine-config
0.0.1-2020-04-08-154108 network
0.0.1-2020-04-08-154108 openshift-controller-manager
0.0.1-2020-04-08-154310 authentication
...

@wking (Member) commented Apr 8, 2020

The openshift-controller-manager Progressing condition is garbage:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/1630/pull-ci-openshift-machine-config-operator-master-e2e-gcp-upgrade/1627/artifacts/e2e-gcp-upgrade/clusteroperators.json | jq -r '.items[] | select(.metadata.name == "openshift-controller-manager").status.conditions[] | .lastTransitionTime + " " + .type + " " + .status + " " + .message' | sort
2020-04-08T15:53:50Z Degraded False 
2020-04-08T15:53:50Z Upgradeable Unknown 
2020-04-08T16:04:30Z Available True 
2020-04-08T16:05:30Z Progressing True 

I'll figure out who to poke about that...

@LorbusChris (Member)

/cherry-pick fcos
/cc @vrutkovs
We'll need to make sure this is in OKD's machine-os-content

@openshift-cherrypick-robot

@LorbusChris: once the present PR merges, I will cherry-pick it on top of fcos in a new PR and assign it to you.

In response to this:

> /cherry-pick fcos
> /cc @vrutkovs
> We'll need to make sure this is in OKD's machine-os-content

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@wking (Member) commented Apr 9, 2020

Bug for the flapping openshift-controller-manager Progressing message: rhbz#1822441. The DaemonSet is stuck, but I'm still not sure why.

@wking (Member) commented Apr 9, 2020

These "Failed to create pod sandbox... name is reserved" events are concerning:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/1630/pull-ci-openshift-machine-config-operator-master-e2e-gcp-upgrade/1627/artifacts/e2e-gcp-upgrade/must-gather.tar | tar xzO ./registry-svc-ci-openshift-org-ci-op-0jz0k2zj-stable-sha256-605d33681323ff3d4db3711ec45d223d6b0c882431d571a0c07208868996be55/namespaces/openshift-controller-manager/core/events.yaml | yaml2json | jq -r '[.items[] | .timePrefix = if .firstTimestamp == null or .firstTimestamp == "null" then .eventTime else .firstTimestamp + " - " + .lastTimestamp + " (" + (.count | tostring) + ")" end] | sort_by(.timePrefix)[] | .timePrefix + " " + .metadata.namespace + " " + .message'
2020-04-08T15:53:53Z - 2020-04-08T15:54:50Z (18) openshift-controller-manager Error creating: pods "controller-manager-" is forbidden: unable to validate against any security context constraint: []
2020-04-08T15:55:40Z - 2020-04-08T15:56:00Z (13) openshift-controller-manager Error creating: pods "controller-manager-" is forbidden: unable to validate against any security context constraint: []
2020-04-08T15:56:34Z - 2020-04-08T15:57:28Z (3) openshift-controller-manager Error creating: Internal error occurred: admission plugin "image.openshift.io/ImagePolicy" failed to complete mutation in 13s
2020-04-08T15:57:57Z - 2020-04-08T15:57:57Z (1) openshift-controller-manager Successfully assigned openshift-controller-manager/controller-manager-b69zx to ci-op-kf7v5-m-1.c.openshift-gce-devel-ci.internal
2020-04-08T15:57:57Z - 2020-04-08T15:57:57Z (1) openshift-controller-manager Created pod: controller-manager-b69zx
2020-04-08T15:57:58Z - 2020-04-08T15:57:58Z (1) openshift-controller-manager Successfully assigned openshift-controller-manager/controller-manager-djw6l to ci-op-kf7v5-m-0.c.openshift-gce-devel-ci.internal
2020-04-08T15:57:58Z - 2020-04-08T15:57:58Z (1) openshift-controller-manager Successfully assigned openshift-controller-manager/controller-manager-w8k7f to ci-op-kf7v5-m-2.c.openshift-gce-devel-ci.internal
2020-04-08T15:57:58Z - 2020-04-08T15:57:58Z (1) openshift-controller-manager Created pod: controller-manager-djw6l
2020-04-08T15:57:58Z - 2020-04-08T15:57:58Z (1) openshift-controller-manager Created pod: controller-manager-w8k7f
2020-04-08T16:02:02Z - 2020-04-08T16:02:02Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2020-04-08T16:02:15Z - 2020-04-08T16:02:15Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-b69zx_openshift-controller-manager_d63a4aae-e3bd-4198-8abf-001dfb5ba7c5_0 for id 96aef90e184ceccd860ac202466c54120a8311f7d1bbac71ce5ad74f2c5c1508: name is reserved
2020-04-08T16:02:27Z - 2020-04-08T16:02:27Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-b69zx_openshift-controller-manager_d63a4aae-e3bd-4198-8abf-001dfb5ba7c5_0 for id 7a244e48460632759ea297aa9cbbbf58a50ea9eb8bfdb9de13e20b4aab4b4405: name is reserved
2020-04-08T16:02:32Z - 2020-04-08T16:02:32Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2020-04-08T16:02:40Z - 2020-04-08T16:02:40Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2020-04-08T16:02:41Z - 2020-04-08T16:02:41Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-b69zx_openshift-controller-manager_d63a4aae-e3bd-4198-8abf-001dfb5ba7c5_0 for id cb7a01745680a3dd2fc38369b1f085474b17c68c7f72c09e6e5f057b8bba4ef5: name is reserved
2020-04-08T16:02:45Z - 2020-04-08T16:02:45Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-djw6l_openshift-controller-manager_54ad2139-7ea0-45e6-9f1f-bffc0423ce20_0 for id e83ab4c81071f62f257959f79bf831edacfc22823d221b04aace770516b432a8: name is reserved
2020-04-08T16:02:52Z - 2020-04-08T16:02:52Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-w8k7f_openshift-controller-manager_dc64e56d-6b22-4ed6-abe0-03563a9950fe_0 for id 55bfe9945b2b87fde68c3fb152274d28619d86141fa0fa6af8037ffd53309c21: name is reserved
...

Doesn't seem to happen in a recent 4.5 nightly -> 4.5 nightly job:

$ curl -s https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/25139/artifacts/e2e-aws-upgrade/must-gather.tar | tar xzO ./quay-io-openshift-release-dev-ocp-v4-0-art-dev-sha256-71ad4296695cb05ae3fdfb863355167483872fe00631f2e1f33357da7dc42802/namespaces/openshift-controller-manager/core/events.yaml | yaml2json | jq -r '[.items[] | .timePrefix = if .firstTimestamp == null or .firstTimestamp == "null" then .eventTime else .firstTimestamp + " - " + .lastTimestamp + " (" + (.count | tostring) + ")" end] | sort_by(.timePrefix)[] | .timePrefix + " " + .metadata.namespace + " " + .message'
2020-04-08T18:34:58Z - 2020-04-08T18:35:35Z (17) openshift-controller-manager Error creating: pods "controller-manager-" is forbidden: unable to validate against any security context constraint: []
2020-04-08T18:38:34Z - 2020-04-08T18:38:55Z (13) openshift-controller-manager Error creating: pods "controller-manager-" is forbidden: unable to validate against any security context constraint: []
2020-04-08T18:39:16Z - 2020-04-08T18:39:16Z (1) openshift-controller-manager Successfully assigned openshift-controller-manager/controller-manager-bgfk6 to ip-10-0-146-111.us-east-2.compute.internal
2020-04-08T18:39:16Z - 2020-04-08T18:39:16Z (1) openshift-controller-manager Successfully assigned openshift-controller-manager/controller-manager-c97h7 to ip-10-0-141-229.us-east-2.compute.internal
2020-04-08T18:39:16Z - 2020-04-08T18:39:16Z (1) openshift-controller-manager Successfully assigned openshift-controller-manager/controller-manager-tlf92 to ip-10-0-143-180.us-east-2.compute.internal
2020-04-08T18:39:16Z - 2020-04-08T18:39:16Z (1) openshift-controller-manager Created pod: controller-manager-c97h7
2020-04-08T18:39:16Z - 2020-04-08T18:39:16Z (1) openshift-controller-manager Created pod: controller-manager-bgfk6
2020-04-08T18:39:16Z - 2020-04-08T18:39:16Z (1) openshift-controller-manager Created pod: controller-manager-tlf92
...

@wking (Member) commented Apr 9, 2020

Ah, looks like the DaemonSet was stuck because we kept banging away with delete calls to no effect:

$ yaml2json <namespaces/openshift-controller-manager/core/events.yaml | jq -r '[.items[] | .timePrefix = if .firstTimestamp == null or .firstTimestamp == "null" then .eventTime else .firstTimestamp + " - " + .lastTimestamp + " (" + (.count | tostring) + ")" end] | sort_by(.timePrefix)[] | .timePrefix + " " + .metadata.namespace + " " + .message' | grep controller-manager-djw6l
2020-04-08T15:57:58Z - 2020-04-08T15:57:58Z (1) openshift-controller-manager Successfully assigned openshift-controller-manager/controller-manager-djw6l to ci-op-kf7v5-m-0.c.openshift-gce-devel-ci.internal
2020-04-08T15:57:58Z - 2020-04-08T15:57:58Z (1) openshift-controller-manager Created pod: controller-manager-djw6l
2020-04-08T16:02:45Z - 2020-04-08T16:02:45Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-djw6l_openshift-controller-manager_54ad2139-7ea0-45e6-9f1f-bffc0423ce20_0 for id e83ab4c81071f62f257959f79bf831edacfc22823d221b04aace770516b432a8: name is reserved
2020-04-08T16:02:59Z - 2020-04-08T16:02:59Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-djw6l_openshift-controller-manager_54ad2139-7ea0-45e6-9f1f-bffc0423ce20_0 for id 07146c0413fee1bb8d3e169d3f746f8a6dd2a30520fa7c91cc98b11ed1dfd5c1: name is reserved
2020-04-08T16:03:10Z - 2020-04-08T16:03:10Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-djw6l_openshift-controller-manager_54ad2139-7ea0-45e6-9f1f-bffc0423ce20_0 for id 52d9f871b1d23f122079db1984eb8dfb1423665e851910eb98deceb8e2a552d0: name is reserved
2020-04-08T16:03:25Z - 2020-04-08T16:03:25Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-djw6l_openshift-controller-manager_54ad2139-7ea0-45e6-9f1f-bffc0423ce20_0 for id 461cf1a10058796ac6d92563d6b75da79f248dde0b1a8b94fbdf1b44a12613b1: name is reserved
2020-04-08T16:03:38Z - 2020-04-08T16:03:38Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-djw6l_openshift-controller-manager_54ad2139-7ea0-45e6-9f1f-bffc0423ce20_0 for id ebf82fb599fb06fd1fb6164eb760ebcaa3cd4877430353647e52e8b1a2b2bc50: name is reserved
2020-04-08T16:03:52Z - 2020-04-08T16:03:52Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-djw6l_openshift-controller-manager_54ad2139-7ea0-45e6-9f1f-bffc0423ce20_0 for id 3854b790ff99c8162de3ed92b5966212207e32d59cfd208ebda51e17923b3053: name is reserved
2020-04-08T16:04:03Z - 2020-04-08T16:04:03Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-djw6l_openshift-controller-manager_54ad2139-7ea0-45e6-9f1f-bffc0423ce20_0 for id 11b85b52872bf0f8800b55e2357ad7e08793fb9e91add8968601a51ad65836fc: name is reserved
2020-04-08T16:04:15Z - 2020-04-08T16:04:15Z (1) openshift-controller-manager Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-djw6l_openshift-controller-manager_54ad2139-7ea0-45e6-9f1f-bffc0423ce20_0 for id fed3ee5787f6adc96141b468b05e75bfb03900c1200d84288f0d81402bfaf204: name is reserved
2020-04-08T16:04:17Z - 2020-04-08T16:04:17Z (1) openshift-controller-manager (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = error reserving pod name k8s_controller-manager-djw6l_openshift-controller-manager_54ad2139-7ea0-45e6-9f1f-bffc0423ce20_0 for id 2a1f5011b6066824bd8827fc451356f7a2162323d62a8071ef18c1ed102eeead: name is reserved
2020-04-08T16:05:30Z - 2020-04-08T16:05:30Z (1) openshift-controller-manager Deleted pod: controller-manager-djw6l
2020-04-08T16:07:05Z - 2020-04-08T16:07:05Z (1) openshift-controller-manager controller-manager-djw6l became leader
2020-04-08T16:07:12Z - 2020-04-08T16:07:12Z (1) openshift-controller-manager Deleted pod: controller-manager-djw6l
2020-04-08T16:12:00Z - 2020-04-08T16:12:00Z (1) openshift-controller-manager Deleted pod: controller-manager-djw6l
2020-04-08T16:18:01Z - 2020-04-08T16:18:01Z (1) openshift-controller-manager Deleted pod: controller-manager-djw6l
2020-04-08T16:19:26Z - 2020-04-08T16:39:25Z (3) openshift-controller-manager Deleted pod: controller-manager-djw6l
2020-04-08T16:40:54Z - 2020-04-08T16:40:54Z (1) openshift-controller-manager Deleted pod: controller-manager-djw6l
2020-04-08T16:41:58Z - 2020-04-08T17:41:58Z (13) openshift-controller-manager Deleted pod: controller-manager-djw6l

@runcom (Member) commented Apr 11, 2020

/retest
/lgtm

@openshift-ci-robot (Contributor)

@runcom: The /retest command does not accept any targets.
The following commands are available to trigger jobs:

- /test e2e-aws
- /test e2e-aws-disruptive
- /test e2e-aws-scaleup-rhel7
- /test e2e-gcp-op
- /test e2e-gcp-upgrade
- /test e2e-metal-ipi
- /test e2e-openstack
- /test e2e-ovirt
- /test e2e-vsphere
- /test images
- /test unit
- /test verify

Use /test all to run all jobs.

In response to this:

> /retest
> /lgtm


openshift-ci-robot added the lgtm label (indicates that a PR is ready to be merged) on Apr 11, 2020.
@openshift-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alexanderConstantinescu, cgwalters, runcom

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

7 similar comments

@openshift-ci-robot (Contributor)

@alexanderConstantinescu: The following test failed, say /retest to rerun all failed tests:

Test name                      Commit   Details  Rerun command
ci/prow/e2e-aws-scaleup-rhel7  a101129  link     /test e2e-aws-scaleup-rhel7

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.


@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-cherrypick-robot

@LorbusChris: new pull request created: #1640

In response to this:

> /cherry-pick fcos
> /cc @vrutkovs
> We'll need to make sure this is in OKD's machine-os-content


Labels: approved, lgtm