master machineconfig pool reports degraded on new cluster #367

Closed · sjenning opened this issue Feb 1, 2019 · 24 comments
@sjenning (Contributor) commented Feb 1, 2019

Just installed the cluster and the MCO reports failure:

$ oc get clusteroperators machine-config-operator
NAME                      VERSION   AVAILABLE   PROGRESSING   FAILING   SINCE
machine-config-operator             False       True          True      1h

$ oc get clusteroperators machine-config-operator -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: 2019-02-01T20:19:35Z
  generation: 1
  name: machine-config-operator
  resourceVersion: "47718"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/machine-config-operator
  uid: b189665d-265e-11e9-9d7f-06538a4d7af6
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-02-01T20:19:35Z
    status: "False"
    type: Available
  - lastTransitionTime: 2019-02-01T20:19:35Z
    message: Progressing towards 3.11.0-543-g6c3e3e6a-dirty
    status: "True"
    type: Progressing
  - lastTransitionTime: 2019-02-01T20:25:18Z
    message: 'Failed when progressing towards 3.11.0-543-g6c3e3e6a-dirty because:
      error syncing: timed out waiting for the condition during syncRequiredMachineConfigPools:
      error pool master is not ready. status: (total: 3, updated: 0, unavailable:
      1)'
    reason: 'error syncing: timed out waiting for the condition during syncRequiredMachineConfigPools:
      error pool master is not ready. status: (total: 3, updated: 0, unavailable:
      1)'
    status: "True"
    type: Failing
  extension:
    master: pool is degraded because of 1 nodes are reporting degraded status on update.
      Cannot proceed.
    worker: all 3 nodes are at latest configuration worker-9f1b04fd7540807d1cad9739722e3ba5
  relatedObjects: null
  versions: null

$ oc get machineconfigpool master -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: 2019-02-01T20:19:35Z
  generation: 1
  labels:
    operator.machineconfiguration.openshift.io/required-for-upgrade: ""
  name: master
  resourceVersion: "11941"
  selfLink: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/master
  uid: b18c2d27-265e-11e9-9d7f-06538a4d7af6
spec:
  machineConfigSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: master
  machineSelector:
    matchLabels:
      node-role.kubernetes.io/master: ""
  maxUnavailable: null
  paused: false
status:
  conditions:
  - lastTransitionTime: 2019-02-01T20:20:06Z
    message: ""
    reason: ""
    status: "False"
    type: Updated
  - lastTransitionTime: 2019-02-01T20:20:06Z
    message: ""
    reason: All nodes are updating to master-29f27c027daecb6164ccd9a7a43fa58a
    status: "True"
    type: Updating
  - lastTransitionTime: 2019-02-01T20:26:08Z
    message: ""
    reason: 1 nodes are reporting degraded status on update. Cannot proceed.
    status: "True"
    type: Degraded
  configuration:
    name: master-29f27c027daecb6164ccd9a7a43fa58a
    source:
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 00-master
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 00-master-ssh
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 01-master-kubelet
  machineCount: 3
  observedGeneration: 1
  readyMachineCount: 0
  unavailableMachineCount: 1
  updatedMachineCount: 0

$ oc get nodes -l node-role.kubernetes.io/master=
NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-10-145.us-west-1.compute.internal   Ready    master   78m   v1.12.4+de4b0b31fd
ip-10-0-28-6.us-west-1.compute.internal     Ready    master   78m   v1.12.4+de4b0b31fd
ip-10-0-9-223.us-west-1.compute.internal    Ready    master   78m   v1.12.4+de4b0b31fd

$ oc get nodes -o yaml | grep -e name: -e machineconfiguration
      machineconfiguration.openshift.io/currentConfig: master-b41804f2dd413f9ac7a730e8a241d716
      machineconfiguration.openshift.io/desiredConfig: master-29f27c027daecb6164ccd9a7a43fa58a
      machineconfiguration.openshift.io/state: Degraded
      kubernetes.io/hostname: ip-10-0-10-145
    name: ip-10-0-10-145.us-west-1.compute.internal
      machineconfiguration.openshift.io/currentConfig: worker-9f1b04fd7540807d1cad9739722e3ba5
      machineconfiguration.openshift.io/desiredConfig: worker-9f1b04fd7540807d1cad9739722e3ba5
      machineconfiguration.openshift.io/state: Done
      kubernetes.io/hostname: ip-10-0-135-178
    name: ip-10-0-135-178.us-west-1.compute.internal
      machineconfiguration.openshift.io/currentConfig: worker-9f1b04fd7540807d1cad9739722e3ba5
      machineconfiguration.openshift.io/desiredConfig: worker-9f1b04fd7540807d1cad9739722e3ba5
      machineconfiguration.openshift.io/state: Done
      kubernetes.io/hostname: ip-10-0-142-204
    name: ip-10-0-142-204.us-west-1.compute.internal
      machineconfiguration.openshift.io/currentConfig: worker-9f1b04fd7540807d1cad9739722e3ba5
      machineconfiguration.openshift.io/desiredConfig: worker-9f1b04fd7540807d1cad9739722e3ba5
      machineconfiguration.openshift.io/state: Done
      kubernetes.io/hostname: ip-10-0-155-132
    name: ip-10-0-155-132.us-west-1.compute.internal
      machineconfiguration.openshift.io/currentConfig: master-b41804f2dd413f9ac7a730e8a241d716
      machineconfiguration.openshift.io/desiredConfig: master-b41804f2dd413f9ac7a730e8a241d716
      machineconfiguration.openshift.io/state: Degraded
      kubernetes.io/hostname: ip-10-0-28-6
    name: ip-10-0-28-6.us-west-1.compute.internal
      machineconfiguration.openshift.io/currentConfig: master-b41804f2dd413f9ac7a730e8a241d716
      machineconfiguration.openshift.io/desiredConfig: master-b41804f2dd413f9ac7a730e8a241d716
      machineconfiguration.openshift.io/state: Degraded
      kubernetes.io/hostname: ip-10-0-9-223

In fact, all masters show state: Degraded. However, on 2 of the 3, the currentConfig and desiredConfig are the same, which conflicts with that state.

$ oc get ds
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
machine-config-daemon   6         6         6       6            6           beta.kubernetes.io/os=linux       75m
machine-config-server   3         3         3       3            3           node-role.kubernetes.io/master=   76m

machine-config-daemon is running on all nodes.

Logs from the daemon on the degraded master:

$ oc logs machine-config-daemon-dhkwt
I0201 20:20:16.884660   13154 start.go:52] Version: 3.11.0-543-g6c3e3e6a-dirty
I0201 20:20:16.886148   13154 start.go:88] starting node writer
I0201 20:20:16.892976   13154 run.go:22] Running captured: chroot /rootfs rpm-ostree status --json
I0201 20:20:16.975145   13154 daemon.go:150] Booted osImageURL: registry.svc.ci.openshift.org/rhcos/maipo@sha256:a224ed43167e3830dfa5c22a15860cbae5aa99b1fa9eed342e299115168f0bb1 (47.295)
I0201 20:20:16.975680   13154 daemon.go:219] Managing node: ip-10-0-10-145.us-west-1.compute.internal
I0201 20:20:16.999958   13154 node.go:44] Setting initial node config: master-b41804f2dd413f9ac7a730e8a241d716
I0201 20:20:17.017228   13154 start.go:139] Calling chroot("/rootfs")
I0201 20:20:17.052475   13154 daemon.go:847] While getting MachineConfig master-b41804f2dd413f9ac7a730e8a241d716, got: machineconfigs.machineconfiguration.openshift.io "master-b41804f2dd413f9ac7a730e8a241d716" not found. Retrying...
...
I0201 20:24:07.061620   13154 daemon.go:847] While getting MachineConfig master-b41804f2dd413f9ac7a730e8a241d716, got: machineconfigs.machineconfiguration.openshift.io "master-b41804f2dd413f9ac7a730e8a241d716" not found. Retrying...
E0201 20:24:17.052973   13154 token_source.go:132] Unable to rotate token: failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
I0201 20:24:17.065156   13154 daemon.go:847] While getting MachineConfig master-b41804f2dd413f9ac7a730e8a241d716, got: machineconfigs.machineconfiguration.openshift.io "master-b41804f2dd413f9ac7a730e8a241d716" not found. Retrying...
E0201 20:24:27.052924   13154 token_source.go:132] Unable to rotate token: failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
I0201 20:24:27.063428   13154 daemon.go:847] While getting MachineConfig master-b41804f2dd413f9ac7a730e8a241d716, got: machineconfigs.machineconfiguration.openshift.io "master-b41804f2dd413f9ac7a730e8a241d716" not found. Retrying...
E0201 20:24:37.053047   13154 token_source.go:132] Unable to rotate token: failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
...
I0201 20:25:17.058167   13154 daemon.go:847] While getting MachineConfig master-b41804f2dd413f9ac7a730e8a241d716, got: machineconfigs.machineconfiguration.openshift.io "master-b41804f2dd413f9ac7a730e8a241d716" not found. Retrying...
E0201 20:25:17.058214   13154 daemon.go:348] Fatal error checking initial state of node: Checking initial state: timed out waiting for the condition
E0201 20:25:17.058229   13154 writer.go:85] Marking degraded due to: Checking initial state: timed out waiting for the condition
E0201 20:25:17.058339   13154 token_source.go:132] Unable to rotate token: failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
E0201 20:25:17.066964   13154 token_source.go:132] Unable to rotate token: failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
I0201 20:25:17.074702   13154 daemon.go:350] Entering degraded state; going to sleep

Indeed, the old MachineConfig master-b41804f2dd413f9ac7a730e8a241d716 has been removed:

$ oc get machineconfigs
NAME                                      GENERATEDBYCONTROLLER        IGNITIONVERSION   CREATED   OSIMAGEURL
00-master                                 3.11.0-543-g6c3e3e6a-dirty   2.2.0             1h        
00-master-ssh                             3.11.0-543-g6c3e3e6a-dirty                     1h        
00-worker                                 3.11.0-543-g6c3e3e6a-dirty   2.2.0             1h        
00-worker-ssh                             3.11.0-543-g6c3e3e6a-dirty                     1h        
01-master-kubelet                         3.11.0-543-g6c3e3e6a-dirty   2.2.0             1h        
01-worker-kubelet                         3.11.0-543-g6c3e3e6a-dirty   2.2.0             1h        
master-29f27c027daecb6164ccd9a7a43fa58a   3.11.0-543-g6c3e3e6a-dirty   2.2.0             1h        
worker-9f1b04fd7540807d1cad9739722e3ba5   3.11.0-543-g6c3e3e6a-dirty   2.2.0             1h
@kikisdeliveryservice (Contributor) commented Feb 1, 2019

Token rotation issues seen again (as in #358):

E0201 20:25:17.058339   13154 token_source.go:132] Unable to rotate token: failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
E0201 20:25:17.066964   13154 token_source.go:132] Unable to rotate token: failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

@kikisdeliveryservice changed the title from "master machineconfig pool reports degraded on new cluster" to "token rotation: master machineconfig pool reports degraded on new cluster" on Feb 1, 2019
@cgwalters (Member)

Indeed, the old MachineConfig master-b41804f2dd413f9ac7a730e8a241d716 has been removed

Nothing is removed right now, see #354

I don't think this is token rotation.

The problem here is likely related to #338
or a variant of #301

where if the MC generated at bootstrap time isn't identical to the one the cluster generates at start, the nodes will fail to find their MC.

I saw this with osImageURL, but maybe it's possible that e.g. we don't get the kubelet config in the bootstrap MC sometimes?

@kikisdeliveryservice changed the title from "token rotation: master machineconfig pool reports degraded on new cluster" back to "master machineconfig pool reports degraded on new cluster" on Feb 1, 2019
@sjenning (Contributor, Author) commented Feb 1, 2019

I provided quite a data dump there, but the tl;dr is this in the daemon logs:

I0201 20:25:17.058167   13154 daemon.go:847] While getting MachineConfig master-b41804f2dd413f9ac7a730e8a241d716, got: machineconfigs.machineconfiguration.openshift.io "master-b41804f2dd413f9ac7a730e8a241d716" not found. Retrying...
E0201 20:25:17.058214   13154 daemon.go:348] Fatal error checking initial state of node: Checking initial state: timed out waiting for the condition
E0201 20:25:17.058229   13154 writer.go:85] Marking degraded due to: Checking initial state: timed out waiting for the condition

Seems like something is jumping the gun on deleting the old master MachineConfig before all the masters have finished their rollout.

@cgwalters (Member) commented Feb 4, 2019

Seems like something is jumping the gun on deleting the old master MachineConfig

To repeat - nothing is deleting MCs today. The problem is much more complex than that.

A "rendered" MachineConfig object is a function of a variety of inputs, from the base templates we ship with the operator, to SSH keys, kubelet config, soon osImageURL etc. The generated MC name includes a hash of its contents. So if there's a difference they'll have different names.

The bootstrap path generates MCs via a different codepath than the one used after the cluster comes up. You can see this in openshift/installer#1149, where I need to change things to pass the osImageURL there too.

So if the bootstrap differs from what the renderer outputs in the main cluster, the booted nodes which talk to the MCS on the bootstrap node will be looking for a MC that doesn't exist.

I think maintaining both of these paths is going to be a long term struggle particularly as we go and work on e.g. adding the crio CRD etc.

The failure case here of booted nodes simply not being able to find their MC is pretty bad.

I think there are two options:

First, we could change the bootstrap to pass the MC object to the first booted master, and have the operator inject it into the real cluster. The advantage of this is that any "drift" between the bootstrap and the cluster gets reconciled. But the downside, of course, is that we're rebooting on node bringup to change config.

The other path I think is to try to de-duplicate the bootstrap codepath more. I need to study the code more to understand how hard this would be.

@cgwalters (Member)

@abhinavdahiya does ⬆️ sound right? Any other ideas?

@cgwalters (Member)

OK, been reading installer code this morning. Today on the bootstrap node you'll see this:

[root@osiris-bootstrap ~]# ls -al /etc/mcs/bootstrap/machine-configs/
master-393c27e20a207e6f4bae2da80331d940.yaml  worker-db6fbed5995e18e0494e1441bb38d66c.yaml  

These are generated by the MCC in bootstrap mode, which is a static pod launched by bootkube.sh.

There's a separate (strangely named?) openshift.service which executes openshift.sh, whose job is to inject installer-generated assets into the cluster. I think it should then work for us to simply also copy those files into /opt/openshift/openshift?

@abhinavdahiya (Contributor)

// GetConfig fetches the machine config(type - Ignition) from the bootstrap server,
// based on the pool request.
// It returns nil for conf, error if the config isn't found. It returns a formatted
// error if any other error is encountered during its operations.
//
// The method does the following:
//
// 1. Read the machine config pool by using the following path template:
// "<serverBaseDir>/machine-pools/<machineConfigPoolName>.yaml"
//
// 2. Read the currentConfig field from the Status and read the config file
// using the following path template:
// "<serverBaseDir>/machine-configs/<currentConfig>.yaml"
//
// 3. Load the machine config.
// 4. Append the machine annotations file.
// 5. Append the KubeConfig file.
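
A minimal sketch of that lookup (the status field names follow the pool YAML shown earlier in this issue, where the doc comment says currentConfig; the YAML helper and error handling are assumptions, not the actual MCS code):

package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"

	"sigs.k8s.io/yaml"
)

// resolveConfigPath follows the steps above: read the pool file, take the
// rendered config name from its status, and build the path of the machine
// config file the bootstrap server would serve back.
func resolveConfigPath(serverBaseDir, pool string) (string, error) {
	data, err := ioutil.ReadFile(filepath.Join(serverBaseDir, "machine-pools", pool+".yaml"))
	if err != nil {
		return "", err
	}
	// Only the field we need from the pool's status (per the pool YAML above).
	var mcp struct {
		Status struct {
			Configuration struct {
				Name string `json:"name"`
			} `json:"configuration"`
		} `json:"status"`
	}
	if err := yaml.Unmarshal(data, &mcp); err != nil {
		return "", err
	}
	// If this file is missing, the daemon's "not found ... Retrying" loop shown
	// above is what you see on the node.
	return filepath.Join(serverBaseDir, "machine-configs", mcp.Status.Configuration.Name+".yaml"), nil
}

func main() {
	p, err := resolveConfigPath("/etc/mcs/bootstrap", "master")
	fmt.Println(p, err)
}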

@cgwalters (Member)

@abhinavdahiya can you add a couple of words to that? Remember here most of us are learning a codebase we didn't create; getting up to speed on all of it is going to take some time.

Are you thinking that the bootstrap MCS would also serve the bootstrap MC directly embedded in the Ignition and e.g. the MCD would find it on disk and create it if not found?
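
If that's the idea, roughly something like this; the on-disk path and the client interface below are hypothetical, not actual MCD code:

package main

import (
	"errors"
	"fmt"
	"io/ioutil"
	"os"
)

var errNotFound = errors.New("machineconfig not found")

// mcClient is a hypothetical, minimal view of the cluster API the daemon
// would need; the real MCD uses a generated clientset instead.
type mcClient interface {
	Get(name string) error
	CreateFromYAML(data []byte) error
}

// ensureBootstrapMC: if the node's desired MC isn't in the cluster yet, create
// it from a copy Ignition laid down on disk, instead of retrying until the
// pool degrades.
func ensureBootstrapMC(c mcClient, name, diskPath string) error {
	err := c.Get(name)
	if err == nil {
		return nil // already in the cluster, nothing to do
	}
	if err != errNotFound {
		return err
	}
	data, readErr := ioutil.ReadFile(diskPath)
	if os.IsNotExist(readErr) {
		return fmt.Errorf("MC %s missing from cluster and from %s", name, diskPath)
	}
	if readErr != nil {
		return readErr
	}
	return c.CreateFromYAML(data)
}

// fakeClient exists only so the sketch runs standalone.
type fakeClient struct{ have map[string]bool }

func (f *fakeClient) Get(name string) error {
	if f.have[name] {
		return nil
	}
	return errNotFound
}
func (f *fakeClient) CreateFromYAML(data []byte) error { return nil }

func main() {
	c := &fakeClient{have: map[string]bool{}}
	// Hypothetical on-disk location written via Ignition.
	err := ensureBootstrapMC(c, "master-b41804f2dd413f9ac7a730e8a241d716",
		"/etc/machine-config-daemon/bootstrap-machineconfig.yaml")
	fmt.Println(err)
}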

@cgwalters (Member)

Another idea I had: given that a MachineConfig object and Ignition are almost the same thing, I could imagine that we embed the necessary data inside the Ignition JSON and turn it into an MC.

Then the MCD wouldn't need to hit the cluster to find its current config; that could theoretically make GC easier too.

@ashcrow (Member) commented Feb 4, 2019

Another idea I had is that given that a MachineConfig object and Ignition are almost the same thing - I could imagine that we embed the necessary data inside the Ignition JSON, and turn it into a MC.

Would we need to expand the MC spec a bit (which is doable) or are you thinking of adding a section to Ignition literally, just one that would be intercepted before it hits Ignition?

@cgwalters (Member)

Currently testing:

diff --git a/data/data/bootstrap/files/usr/local/bin/bootkube.sh.template b/data/data/bootstrap/files/usr/local/bin/bootkube.sh.template
index a2fe57887..0b528a4d8 100755
--- a/data/data/bootstrap/files/usr/local/bin/bootkube.sh.template
+++ b/data/data/bootstrap/files/usr/local/bin/bootkube.sh.template
@@ -201,6 +201,8 @@ echo "etcd cluster up. Killing etcd certificate signer..."
 
 podman rm --force etcd-signer
 rm --force /etc/kubernetes/manifests/machineconfigoperator-bootstrap-pod.yaml
+# Copy the bootstrap MCs to inject into the target cluster
+cp -a /etc/mcs/bootstrap/machine-configs/*.yaml /opt/openshift/openshift/
 
 echo "Starting cluster-bootstrap..."
 

@cgwalters (Member)

Would we need to expand the MC spec a bit (which is doable) or are you thinking of adding a section to Ignition literally, just one that would be intercepted before it hits Ignition?

The node gets Ignition JSON; I'm not sure yet actually whether Ignition saves it around somewhere for other processes to read. But let's say Ignition saved it for us in /run/ignition.json. 99% of that data is the files section; the rest of the MachineConfig data (e.g. osImageURL) we could stick inside the Ignition JSON in some special extension section.

@ashcrow (Member) commented Feb 4, 2019

The node gets Ignition JSON; I'm not sure yet actually whether Ignition saves it around somewhere for other processes to read.

I don't believe so, but @ajeddeloh could answer.

But let's say Ignition saved it for us in /run/ignition.json. 99% of that data is the files section; the rest of the MachineConfig data (e.g. osImageURL) we could stick inside the Ignition JSON in some special extension section.

OK, I follow. So it would be utilizing an unused section within Ignition. My only worry is that I'm not sure that would be considered a valid Ignition config:

[steve@work ignition]$ cat config.ign 
{
  "ignition": { "version": "2.2.0" },
  "systemd": {
    "units": [{
      "name": "example.service",
      "enabled": true,
      "contents": "[Service]\nType=oneshot\nExecStart=/usr/bin/echo Hello World\n\n[Install]\nWantedBy=multi-user.target"
    }]
  },
  "test": {
    "data": 123123123
  }
}
[steve@work ignition]$ cat config.ign | json_verify 
JSON is valid
[steve@work ignition]$ bin/amd64/ignition-validate config.ign 
warning at line 10, column 9
    9:   },
   10:   "test"
              ^
Config has unrecognized key: test
[steve@work ignition]$ 

@kikisdeliveryservice (Contributor)

The current Ignition 2.2 spec: https://coreos.com/ignition/docs/latest/configuration-v2_2.html

@cgwalters (Member)

PR in openshift/installer#1189

@cgwalters (Member)

One thing that's a pain about this is that it's hard to debug why the MCs are different; the bootstrap node will often already have been torn down, so you can't just go and diff it versus the current one.

I haven't done that yet for the cases I've hit; it may work to patch the installer to disable bootstrap destruction. Or, with my installer patch, the bootstrap MCs should be reliably in the target cluster, and we'll be able to see if e.g. the MCO does node updates on an initial install due to drift.

@abhinavdahiya (Contributor)

One thing that's a pain about this is that it's hard to debug why the MCs are different; the bootstrap node will often already have been torn down, so you can't just go and diff it versus the current one.

PRs are welcome that help collect that debug information in CI, but openshift/installer#1189 doesn't seem like a solution.

@jlebon (Member) commented Feb 4, 2019

Re. sneaking in third-party keys in the Ignition config, this is discussed here: coreos/ignition#696.

@cgwalters (Member)

PRs welcome that help collect that debug information in CI.

Mmm...I guess we could scrape off the bootstrap MCs in the e2e-aws Prow job and put them in artifacts? Hm, but I'm not sure how to do that since the bootstrap will usually be torn down by the installer.

but openshift/installer#1189 doesn't seem like a solution.

OK but...given that we've never had CI gating on degraded, we simply don't know when this started (I think it was relatively recently but I can't be sure), and IMO we need to fix it as quickly as possible and start gating on not regressing here, even if it's not an ideal fix.

@cgwalters (Member)

OK this is fallout from #343

Here's a diff from my local libvirt's bootstrap MC versus cluster:

$ diff -u /srv/walters/tmp/{bootstrap,cluster}-registries.txt 
--- /srv/walters/tmp/bootstrap-registries.txt	2019-02-05 09:56:22.370509505 -0500
+++ /srv/walters/tmp/cluster-registries.txt	2019-02-05 09:56:40.976494842 -0500
@@ -8,7 +8,7 @@
 spec:
   initContainers:
   - name: discovery
-    image: "registry.svc.ci.openshift.org/openshift/origin-v4.0:setup-etcd-environment"
+    image: "registry.svc.ci.openshift.org/openshift/origin-v4.0-2019-02-05-114841@sha256:b950be806f05c4b3eb5076a8800ec8c6b05483477dd6e3ec0013bb8d92954ee5"
     args:
     - "run"
     - "--discovery-srv=osiris.verbum.local"
@@ -67,7 +67,7 @@
       mountPath: /etc/kubernetes/kubeconfig
   containers:
   - name: etcd-member
-    image: "quay.io/coreos/etcd:v3.3.10"
+    image: "registry.svc.ci.openshift.org/openshift/origin-v4.0-2019-02-05-114841@sha256:d3e128c248fa723e267d1a4e4eba9a8eb6e55e7e862ed8a5e0f49ea1d2a1cb13"
     command:
     - /bin/sh
     - -c

@cgwalters (Member) commented Feb 5, 2019

It looks to me like 6f0f3ff added --etcd-image and --setup-etcd-env-image, but there was no corresponding change in the installer to pull those out of the release payload?


Edit: Bigger picture, it feels to me like there's too tight a coupling between the installer and the MCO; the dance necessary to shuffle data from the release payload into the MCO isn't worth it. Rather, we could just pass the whole imagestream mapping into the MCO bootstrap or so, particularly since the MCO is carrying multiple external images (etcd, machine-os-content) and is necessarily heavily involved in cluster setup.

@cgwalters (Member)

Took a stab at this in openshift/installer#1194 - trying to test locally but the libvirt download is being slow for some reason.

cgwalters added a commit to cgwalters/installer that referenced this issue Feb 5, 2019
@runcom (Member) commented Feb 7, 2019

Referenced PRs are merged.

@cgwalters (Member)

This should be fixed now.

cgwalters added a commit to cgwalters/machine-config-operator that referenced this issue Feb 11, 2019
Since having a mismatch here will result in a broken cluster,
change things so that the command line arguments are required
and drop the `docker.io` image references.  We never want
to pull those into a real cluster.

Ref: openshift#367
cgwalters added a commit to cgwalters/installer that referenced this issue Feb 12, 2019
cgwalters added a commit to cgwalters/installer that referenced this issue Jan 16, 2020
cgwalters added a commit to cgwalters/installer that referenced this issue Jan 16, 2020