
Start openvswitch and ovsdb-server when network is ovn/ovs #1636

Merged

Conversation

runcom
Member

@runcom runcom commented Apr 9, 2020

Needed by CNO for now - the aim is that enablement of those two services will eventually be owned by CNO itself. Left some comments for things I'm not aware of.

Signed-off-by: Antonio Murdaca <runcom@linux.com>

@openshift-ci-robot openshift-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 9, 2020
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 9, 2020
@runcom runcom force-pushed the start-services-ovn-ovs branch 2 times, most recently from 569f304 to 9a50347 Compare April 9, 2020 17:05
@ashcrow
Member

ashcrow commented Apr 9, 2020

ci/prow/e2e-gcp-op failure looks legit:

 --- FAIL: TestKernelType (1281.30s)
[...]
    mcd_test.go:290: Node ci-op-lkxd8-w-d-q7rtb.c.openshift-gce-devel-ci.internal did not rollback successfully 

@smarterclayton
Contributor

Interesting, looks like the units were still disabled:

Apr 09 17:20:42.739763 ip-10-0-138-110.us-west-2.compute.internal systemd[1]: Started Reload Configuration from the Real Root.
Apr 09 17:20:43.489021 ip-10-0-138-110.us-west-2.compute.internal ignition[984]: INFO     : files: op(27): [finished] disabling unit "machine-config-daemon-host.service"
Apr 09 17:20:43.489021 ip-10-0-138-110.us-west-2.compute.internal ignition[984]: INFO     : files: op(28): [started]  processing unit "openvswitch.service"
Apr 09 17:20:43.489021 ip-10-0-138-110.us-west-2.compute.internal ignition[984]: INFO     : files: op(28): [finished] processing unit "openvswitch.service"
Apr 09 17:20:43.489021 ip-10-0-138-110.us-west-2.compute.internal ignition[984]: INFO     : files: op(29): [started]  disabling unit "openvswitch.service"
Apr 09 17:20:43.489021 ip-10-0-138-110.us-west-2.compute.internal ignition[984]: INFO     : files: op(29): [finished] disabling unit "openvswitch.service"
Apr 09 17:20:43.489021 ip-10-0-138-110.us-west-2.compute.internal ignition[984]: INFO     : files: op(2a): [started]  processing unit "ovsdb-server.service"
Apr 09 17:20:43.489021 ip-10-0-138-110.us-west-2.compute.internal ignition[984]: INFO     : files: op(2a): [finished] processing unit "ovsdb-server.service"
Apr 09 17:20:43.489021 ip-10-0-138-110.us-west-2.compute.internal ignition[984]: INFO     : files: op(2b): [started]  disabling unit "ovsdb-server.service"
Apr 09 17:20:43.489021 ip-10-0-138-110.us-west-2.compute.internal ignition[984]: INFO     : files: op(2b): [finished] disabling unit "ovsdb-server.service"
Apr 09 17:20:43.489021 ip-10-0-138-110.us-west-2.compute.internal ignition[984]: INFO     : files: op(2c): [started]  processing unit "kubelet.service"

@smarterclayton
Contributor

smarterclayton commented Apr 10, 2020

Does bootstrap need to take network config as an input?

		bootstrap \
			--etcd-ca=/assets/tls/etcd-ca-bundle.crt \
			--etcd-metric-ca=/assets/tls/etcd-metric-ca-bundle.crt \
			--root-ca=/assets/tls/root-ca.crt \
			--kube-ca=/assets/tls/kube-apiserver-complete-client-ca-bundle.crt \
			--config-file=/assets/manifests/cluster-config.yaml \
			--dest-dir=/assets/mco-bootstrap \
			--pull-secret=/assets/manifests/openshift-config-secret-pull-secret.yaml \
			--etcd-image="${MACHINE_CONFIG_ETCD_IMAGE}" \
			--kube-client-agent-image="${MACHINE_CONFIG_KUBE_CLIENT_AGENT_IMAGE}" \
			--machine-config-operator-image="${MACHINE_CONFIG_OPERATOR_IMAGE}" \
			--machine-config-oscontent-image="${MACHINE_CONFIG_OSCONTENT}" \
			--infra-image="${MACHINE_CONFIG_INFRA_IMAGE}" \
			--keepalived-image="${KEEPALIVED_IMAGE}" \
			--coredns-image="${COREDNS_IMAGE}" \
			--mdns-publisher-image="${MDNS_PUBLISHER_IMAGE}" \
			--haproxy-image="${HAPROXY_IMAGE}" \
			--baremetal-runtimecfg-image="${BAREMETAL_RUNTIMECFG_IMAGE}" \
			--cloud-config-file=/assets/manifests/cloud-provider-config.yaml \
			--cluster-etcd-operator-image="${CLUSTER_ETCD_OPERATOR_IMAGE}" \
			${ADDITIONAL_FLAGS}

Nm, it's using the default path to read it. So we have the same values the other operators have...
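
For reference, a minimal Go sketch (not the actual MCO code) of how a bootstrap binary could read the network type from a config at that default path. The struct mirrors the install-config's networking.networkType field; the real cluster-config.yaml wraps the install-config inside a ConfigMap, which this glosses over:

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"gopkg.in/yaml.v2"
)

// installConfig captures just the field we care about here; the
// networking.networkType path is an assumption based on the
// install-config schema, not the MCO's actual types.
type installConfig struct {
	Networking struct {
		NetworkType string `yaml:"networkType"`
	} `yaml:"networking"`
}

func main() {
	raw, err := ioutil.ReadFile("/assets/manifests/cluster-config.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var ic installConfig
	if err := yaml.Unmarshal(raw, &ic); err != nil {
		log.Fatal(err)
	}
	fmt.Println("networkType:", ic.Networking.NetworkType) // e.g. OVNKubernetes
}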

@runcom
Member Author

runcom commented Apr 10, 2020

@cgwalters ptal as well

@runcom runcom changed the title WIP: Start openvswitch and ovsdb-server when network is ovn/ovs Start openvswitch and ovsdb-server when network is ovn/ovs Apr 10, 2020
@openshift-ci-robot openshift-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 10, 2020
@runcom
Member Author

runcom commented Apr 10, 2020

/hold

@openshift-ci-robot openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 10, 2020
@smarterclayton
Contributor

Now that this is correctly failing, @mccv1r0 should be able to update his PR so the OVS daemonset detects whether the host OVS is running and sits in a poll loop as long as it is up (basically: if host OVS is running, delegate to it). That will allow upgrades and downgrades. He can test this PR and his in the cluster-bot. Once he's got it working, we can merge his, then this should go green (we can also verify both forward and backward upgrades - PR1+PR2 -> master, master -> PR1+PR2, PR1+PR2 -> PR1+PR2). I'm sure we'll catch some other scenarios.
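
A rough sketch of that delegation loop (illustrative only - the real logic belongs in the OVS daemonset entrypoint, and hostOVSRunning assumes the pod can reach the host's systemd, e.g. via hostPID plus nsenter or a chroot):

package main

import (
	"log"
	"os/exec"
	"time"
)

// hostOVSRunning asks systemd whether openvswitch.service is active;
// `systemctl is-active --quiet` exits 0 only when the unit is active.
func hostOVSRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "openvswitch.service").Run() == nil
}

func main() {
	// While the host OVS is up, delegate to it instead of starting our own.
	for hostOVSRunning() {
		log.Println("host openvswitch.service is active; delegating to it")
		time.Sleep(30 * time.Second)
	}
	log.Println("host OVS is not running; starting containerized OVS")
	// ... continue with the normal containerized OVS startup here ...
}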

@mccv1r0
Contributor

mccv1r0 commented May 5, 2020

@runcom @smarterclayton This PR seems to have RHCOS start OVS only on master nodes. On worker nodes, RHCOS doesn't seem to be starting OVS.

@sinnykumari
Contributor

A few notes from a local AWS cluster run using a custom payload with PR #1636 + openshift/cluster-network-operator#477, with networkType: OVNKubernetes set in the install-config:

  1. Bootstrapping fails, and it appears networking is not completely up on the master nodes.
    Journal log from the bootstrap node:
[core@ip-10-0-12-201 ~]$ journalctl -b -f -u release-image.service -u bootkube.service
-- Logs begin at Wed 2020-05-20 11:45:52 UTC. --
...
May 20 12:29:33 ip-10-0-12-201 bootkube.sh[10269]: Skipped "secret-service-network-serving-signer.yaml" secrets.v1./service-network-serving-signer -n openshift-kube-apiserver-operator as it already exists
May 20 12:29:44 ip-10-0-12-201 bootkube.sh[10269]: E0520 12:29:44.695583       1 reflector.go:251] github.com/openshift/cluster-bootstrap/pkg/start/status.go:66: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?watch=true: dial tcp [::1]:6443: connect: connection refused
May 20 12:29:45 ip-10-0-12-201 bootkube.sh[10269]: E0520 12:29:45.699561       1 reflector.go:134] github.com/openshift/cluster-bootstrap/pkg/start/status.go:66: Failed to list *v1.Pod: Get https://localhost:6443/api/v1/pods: dial tcp [::1]:6443: connect: connection refused
...
  2. SSHed into one of the master nodes; both openvswitch.service and ovsdb-server.service seem to be running, but there are some failures in the ovsdb-server.service journal logs which may be relevant.
[core@ip-10-0-145-116 ~]$ journalctl -u ovsdb-server.service
-- Logs begin at Wed 2020-05-20 11:46:01 UTC, end at Wed 2020-05-20 12:50:04 UTC. --
May 20 11:50:45 ip-10-0-145-116.ec2.internal systemd[1]: Starting Open vSwitch Database Unit...
May 20 11:50:45 ip-10-0-145-116.ec2.internal chown[1529]: /usr/bin/chown: cannot access '/var/run/openvswitch': No such file or directory
May 20 11:50:45 ip-10-0-145-116.ec2.internal ovs-ctl[1582]: /etc/openvswitch/conf.db does not exist ... (warning).
May 20 11:50:45 ip-10-0-145-116.ec2.internal ovs-ctl[1582]: Creating empty database /etc/openvswitch/conf.db.
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-ctl[1582]: Starting ovsdb-server.
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-vsctl[1647]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.16.1
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-ctl[1582]: PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-ctl[1582]: PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-ctl[1582]: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-ctl[1582]: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-ctl[1582]: 2020-05-20T11:50:46Z|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-vswitchd[1649]: ovs|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-vsctl[1681]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.11.0 "external-ids:system-id=\"69ad8629-9bac-4644-acee-78b68fa9dbe5\"" "external-ids:rundir>
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-ctl[1582]: Configuring Open vSwitch system IDs.
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-vsctl[1688]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=ip-10-0-145-116.ec2.internal
May 20 11:50:46 ip-10-0-145-116.ec2.internal ovs-ctl[1582]: Enabling remote OVSDB managers.
May 20 11:50:46 ip-10-0-145-116.ec2.internal systemd[1]: Started Open vSwitch Database Unit.
May 20 11:51:42 ip-10-0-145-116 systemd[1]: Stopping Open vSwitch Database Unit...
May 20 11:51:42 ip-10-0-145-116 ovs-ctl[2129]: Exiting ovsdb-server (1646).
May 20 11:51:42 ip-10-0-145-116 systemd[1]: Stopped Open vSwitch Database Unit.
May 20 11:51:42 ip-10-0-145-116 systemd[1]: ovsdb-server.service: Consumed 213ms CPU time
-- Reboot --
May 20 11:52:33 localhost systemd[1]: Starting Open vSwitch Database Unit...
May 20 11:52:33 localhost chown[1258]: /usr/bin/chown: cannot access '/var/run/openvswitch': No such file or directory
May 20 11:52:33 localhost ovs-ctl[1277]: Backing up database to /etc/openvswitch/conf.db.backup7.16.1-1452282319.
May 20 11:52:33 localhost ovs-ctl[1277]: Compacting database.
May 20 11:52:33 localhost ovs-ctl[1277]: Converting database schema.
May 20 11:52:33 localhost ovs-ctl[1277]: Starting ovsdb-server.
May 20 11:52:34 localhost ovs-vsctl[1371]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.2.0
May 20 11:52:34 localhost ovs-vsctl[1384]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.13.0 "external-ids:system-id=\"69ad8629-9bac-4644-acee-78b68fa9dbe5\"" "external-ids:rundir=\"/var/run/openvsw>
May 20 11:52:34 localhost ovs-ctl[1277]: Configuring Open vSwitch system IDs.
May 20 11:52:34 localhost ovs-vsctl[1390]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=localhost
May 20 11:52:34 localhost ovs-ctl[1277]: Enabling remote OVSDB managers.
May 20 11:52:34 localhost systemd[1]: Started Open vSwitch Database Unit.
May 20 11:54:41 ip-10-0-145-116 ovsdb-server[1370]: ovs|00007|stream_ssl|ERR|SSL_use_certificate_file: error:02001002:system library:fopen:No such file or directory
May 20 11:54:41 ip-10-0-145-116 ovsdb-server[1370]: ovs|00008|stream_ssl|ERR|SSL_use_PrivateKey_file: error:20074002:BIO routines:file_ctrl:system lib
  3. The corresponding worker nodes never started (checked in the AWS console), maybe because the master nodes weren't fully functional yet?

@mccv1r0
Contributor

mccv1r0 commented May 20, 2020

Do you have the artifacts? It would help to compare the OVN/OVS logs to what I've seen. Those logs include output from debug journalctl -xeu calls that were added.

To be safe, you might also need to use: openshift/ovn-kubernetes#149 to deal with a permission issue in addition to PR #1636 + openshift/cluster-network-operator#477

@sinnykumari
Contributor

Do you have the artifacts? It would help to compare the OVN/OVS logs to what I've seen. Those logs include output from debug journalctl -xeu calls that were added.

I have destroyed the test cluster, but I do have the log-bundle and journal logs from the bootstrap node and one of the master nodes at https://sinnykumari.fedorapeople.org/ovn/. I hope it helps.

To be safe, you might also need to use: openshift/ovn-kubernetes#149 to deal with a permission issue in addition to PR #1636 + openshift/cluster-network-operator#477

ah ok, will include it next time

@sinnykumari
Contributor

To be safe, you might also need to use: openshift/ovn-kubernetes#149 to deal with a permission issue in addition to PR #1636 + openshift/cluster-network-operator#477

custom image build fails with PR openshift/ovn-kubernetes#149

could not wait for build: the build ovn-kubernetes failed after 3m40s with reason DockerBuildFailed: Docker build strategy has failed.

ovn2.13-vtep-2.13.0-30.el7fdp.x86_64: [Errno 256] No more mirrors to try.
  ovn2.13-host-2.13.0-30.el7fdp.x86_64: [Errno 256] No more mirrors to try.
  ovn2.13-2.13.0-30.el7fdp.x86_64: [Errno 256] No more mirrors to try.

error: build error: running 'INSTALL_PKGS=" 	PyYAML openss...m clean all && rm -rf /var/cache/*' failed with exit code 1

@mccv1r0
Contributor

mccv1r0 commented May 22, 2020

ovn2.13-vtep-2.13.0-30.el7fdp.x86_64: [Errno 256] No more mirrors to try.
ovn2.13-host-2.13.0-30.el7fdp.x86_64: [Errno 256] No more mirrors to try.
ovn2.13-2.13.0-30.el7fdp.x86_64: [Errno 256] No more mirrors to try.

The rpm repos needed cannot be reached.

Can you use GCP for what you need to do, just in case the problem accessing the repos is platform-specific?

@sinnykumari
Contributor

sinnykumari commented May 26, 2020

Did a couple of runs with a custom payload including PRs openshift/ovn-kubernetes#149, openshift/cluster-network-operator#477, and #1636. Cluster install fails as multiple operators fail to update: authentication, console, csi-snapshot-controller, image-registry, ingress, kube-storage-version-migrator, machine-config, monitoring. Related to MCO, one noticeable error I see in the MCD pod is Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-5df6c4a90778134b1e8713d5a246747d" not found. But I am not sure why we are seeing this; it could be a side effect of the other failures.

Worker nodes did come up on both AWS and GCP (I can oc debug into them), but they are in NotReady state.

I have saved must-gather logs from the AWS and GCP cluster installs. @mccv1r0 maybe you can find something relevant related to the OVN network switch.

@sinnykumari
Contributor

Looking at the rendered config once bootstrapping is done, enabled: {{if eq .NetworkType "OVNKubernetes"}}true{{else if eq .NetworkType "OpenShiftSDN"}}true{{else}}false{{end}} is getting evaluated to false for both the openvswitch and ovsdb-server services.

From one of the failed test cluster after bootstrapping

$ oc get mc rendered-master-647de31c316c66d9298d3e649156bc66  -o yaml
...
- enabled: false
        name: openvswitch.service
      - enabled: false
        name: ovsdb-server.service
...
$ oc get mc rendered-worker-c7fb7dceec71a6f3789100674770e762 -o yaml
      - enabled: false
        name: openvswitch.service
      - enabled: false
        name: ovsdb-server.service

It looks like there is a mismatch between these services being enabled during bootstrapping and disabled after bootstrapping, and that's why the rendered-config mismatch happens on master nodes.

$ oc get mc
NAME                                                        GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
...
rendered-master-647de31c316c66d9298d3e649156bc66            7232845d162c56c7458f56d6730f5116a11cb48b   2.2.0             10h
rendered-worker-c7fb7dceec71a6f3789100674770e762            7232845d162c56c7458f56d6730f5116a11cb48b   2.2.0             10h
$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master                                                      False     True       True       3              0                   0                     3                      10h
worker   rendered-worker-c7fb7dceec71a6f3789100674770e762   False     True       False      3              0                   0                     0                      10h

MCD log running on one of the master node:

E0604 08:01:39.161147    7508 writer.go:135] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-e1708e3e82a6d9547aba693919a20b2d" not found

If we look at the mcs-machine-config-content.json content on one of the master node:

$  cat /etc/mcs-machine-config-content.json 
 {
    "kind": "MachineConfig",
    "apiVersion": "machineconfiguration.openshift.io/v1",
    "metadata": {
        "name": "rendered-master-e1708e3e82a6d9547aba693919a20b2d",  <----- served during bootstrap
...
                   {
                        "enabled": true,
                        "name": "openvswitch.service"
                    },
                    {
                        "enabled": true,
                        "name": "ovsdb-server.service"
                    },
...

I wonder if .NetworkType is getting evaluated correctly 🤔
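
The suspicion is easy to reproduce in isolation - a minimal Go sketch (not the MCO's actual render code) showing that if .NetworkType is empty when the template is executed, neither eq branch matches and the unit renders disabled:

package main

import (
	"os"
	"text/template"
)

const unit = `enabled: {{if eq .NetworkType "OVNKubernetes"}}true{{else if eq .NetworkType "OpenShiftSDN"}}true{{else}}false{{end}}
`

func main() {
	t := template.Must(template.New("unit").Parse(unit))
	// With the type set, as it apparently is during bootstrap:
	_ = t.Execute(os.Stdout, map[string]string{"NetworkType": "OVNKubernetes"}) // enabled: true
	// With an empty value, as the cluster-side render seems to see:
	_ = t.Execute(os.Stdout, map[string]string{"NetworkType": ""}) // enabled: false
}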

@sinnykumari
Contributor

So, I did another test run with a few modifications (see https://github.com/openshift/machine-config-operator/pull/1786/files) that enable openvswitch.service and ovsdb-server.service. The cluster came up successfully with NetworkType: OVNKubernetes 🎉 Can confirm that NetworkType gets set correctly. Log from the m-c-o pod: I0605 12:45:13.829112 1 sync.go:226] SYNC: TEST NETWORK TYPE OVNKubernetes

@runcom Looks to me that {{if eq .NetworkType "OVNKubernetes"}}true{{else if eq .NetworkType "OpenShiftSDN"}}true{{else}}false{{end}} is not getting evaluated properly.

@runcom
Member Author

runcom commented Jun 5, 2020

@runcom Looks to me that {{if eq .NetworkType "OVNKubernetes"}}true{{else if eq .NetworkType "OpenShiftSDN"}}true{{else}}false{{end}} is not getting evaluated properly

uhm, that's surprising, masters evaluate that properly right? this issue seems to be specific to workers somehow 🤔

@sinnykumari
Contributor

@runcom Looks to me that {{if eq .NetworkType "OVNKubernetes"}}true{{else if eq .NetworkType "OpenShiftSDN"}}true{{else}}false{{end}} is not getting evaluated properly

uhm, that's surprising, masters evaluate that properly right? this issue seems to be specific to workers somehow 🤔

No, both the final rendered master and worker configs evaluate to false, but during initial bootstrap the master config seems to get evaluated to true. See #1636 (comment).

@runcom
Member Author

runcom commented Jun 15, 2020

Pushed a fix to make sure NetworkType doesn't end up being empty and the templates evaluate correctly - this should fix the weird behavior we're seeing.
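
In spirit, the fix amounts to defaulting the value before the templates are rendered - an illustrative sketch only, with assumed names (the actual patch may differ):

package render

// networkTypeOrDefault is a hypothetical helper: guarantee NetworkType is
// never empty before template rendering, so the eq comparisons in the
// unit templates always see a real value.
func networkTypeOrDefault(nt string) string {
	if nt == "" {
		// Assumed default; the real fix may source this elsewhere.
		return "OpenShiftSDN"
	}
	return nt
}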

Signed-off-by: Antonio Murdaca <runcom@linux.com>
@openshift-ci-robot
Contributor

openshift-ci-robot commented Jun 15, 2020

@runcom: The following test failed, say /retest to rerun all failed tests:

Test name: ci/prow/e2e-metal-ipi
Commit: be883a8
Rerun command: /test e2e-metal-ipi

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@sinnykumari
Contributor

NetworkType is getting set correctly in MCO now!

Tested the latest content of this PR with openshift/cluster-network-operator#477 and openshift/ovn-kubernetes#149; the cluster came up perfectly fine in both cases, with networkType set to OpenShiftSDN (the default) and to OVNKubernetes. openvswitch.service and ovsdb-server.service are enabled on the nodes.

@runcom
Member Author

runcom commented Jun 16, 2020

NetworkType is getting set correctly in MCO now!

Tested the latest content of this PR with openshift/cluster-network-operator#477 and openshift/ovn-kubernetes#149; the cluster came up perfectly fine in both cases, with networkType set to OpenShiftSDN (the default) and to OVNKubernetes. openvswitch.service and ovsdb-server.service are enabled on the nodes.

awesome, I guess we can merge this PR then

/skip

@sinnykumari
Contributor

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jun 16, 2020
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: runcom, sinnykumari

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@sinnykumari
Contributor

shall we cancel hold now?

@runcom runcom removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 16, 2020
@squeed
Contributor

squeed commented Jun 16, 2020

Hang on.

Shouldn't we merge the changes into CNO first so that we don't fight with the system openvswitch? It looks like they're fighting back and forth, and it only happens to work because we kill -9 openvswitch for some other hackery.

@openshift-merge-robot openshift-merge-robot merged commit 8b3e260 into openshift:master Jun 16, 2020
@mccv1r0
Contributor

mccv1r0 commented Jun 16, 2020

Hang on.

Shouldn't we merge the changes in to CNO first

Correct. Can this be rolled back?

@runcom runcom deleted the start-services-ovn-ovs branch June 16, 2020 13:26
@runcom
Member Author

runcom commented Jun 16, 2020

I guess this was too fast then. @squeed @mccv1r0 isn't there any way to make it work w/o reverting this? Otherwise I have the revert ready - also, this shouldn't impact anything else afaict

@runcom
Member Author

runcom commented Jun 16, 2020

Taking the discussion on whether to revert elsewhere for a moment with Casey and Mike :)

@vishnoianil

Seems like this merge is causing CNO e2e-gcp-ovn job failures. All the jobs are failing at the moment.
https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/job-history/gs/origin-ci-test/pr-logs/directory/pull-ci-openshift-cluster-network-operator-master-e2e-gcp-ovn

@squeed @sinnykumari @runcom

dcbw added a commit to dcbw/release that referenced this pull request Jun 18, 2020
Ensures changes to MCO (like openshift/machine-config-operator#1636)
don't break ovn-kubernetes, Windows, hybrid overlay, etc.