This repository has been archived by the owner on Jul 23, 2019. It is now read-only.

Remove early exit from installer #60

Closed · 4 tasks done
stbenjam opened this issue Apr 30, 2019 · 7 comments · Fixed by #136

Comments

stbenjam (Member) commented Apr 30, 2019

Currently we exit early from the installer and then do a bunch of hacks in 06_create_cluster.sh.
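Roughly, the pattern looks like this (an illustrative sketch, not the actual 06_create_cluster.sh):

```bash
# Illustrative sketch of the early-exit pattern (not the actual
# 06_create_cluster.sh): launch the installer, stop waiting for it once
# bootstrapping completes, and finish the deployment with manual fixups.
openshift-install create cluster --dir ocp &
installer_pid=$!

# "Early exit": abandon the installer process after bootstrap completes.
openshift-install wait-for bootstrap-complete --dir ocp
kill "${installer_pid}"

# ...followed by out-of-band hacks, e.g. adding IPs to the master
# Machine objects so kubelet CSRs can be approved (see below).
```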

russellb (Member) commented

One complication here is that we don't want this done for every deployment on the baremetal platform.

We don't have a declarative way to specify this right now. Once it's configurable somewhere, this might become a customization applied at the "create manifests" stage, or remain a day-2 operation.

If we want to re-phrase this issue as "remove early exit", there's another path to doing that.

With that in place, the installer can provision a cluster, including a worker, and the install should run to completion.

Making workloads run on masters is still required if we only have 3 hosts to deploy to, though.
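For the day-2 variant, the taint removal is a one-liner per master (a minimal sketch, assuming the standard master taint key):

```bash
# Minimal day-2 sketch: remove the NoSchedule taint from every master so
# regular workloads can schedule there; the trailing "-" deletes a taint.
for node in $(oc get nodes -l node-role.kubernetes.io/master -o name); do
  oc adm taint nodes "${node#node/}" node-role.kubernetes.io/master:NoSchedule-
done
```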

(Three comments from stbenjam, dhellmann, and stbenjam have been minimized.)

stbenjam changed the title from "Remove NoSchedule taint from masters in the installer" to "Remove early exit from installer" on Jun 10, 2019.
stbenjam (Member, Author) commented Jun 12, 2019

I re-opened the MCO PR, openshift/machine-config-operator#846, to remove the NoSchedule taints. Until workers are deployed by the installer, this is required because the ingress, monitoring, and registry operators otherwise won't come up. We can remove the kubelet override once that happens, but I don't want to block opening the openshift/installer PR on waiting for that.
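To see whether that worked, checking the affected operators is enough (a hedged example; names follow OCP 4.x clusteroperator conventions):

```bash
# Hedged check: confirm the operators blocked by the taint come up once
# masters are schedulable. Names assume OCP 4.x clusteroperator naming.
oc get clusteroperators ingress monitoring image-registry
oc wait --for=condition=Available clusteroperator/ingress --timeout=30m
```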

markmc (Contributor) commented Jul 4, 2019

It looks like we're mostly just waiting on a replacement for add-machine-ips.sh?

I see progress on, e.g., metal3-io/cluster-api-provider-baremetal#49, but it's a struggle to piece it all together into a coherent summary of where this stands.

russellb (Member) commented Jul 4, 2019

I don’t think we need the early exit for that. Adding the IPs can be moved to post-install; its purpose is to enable auto-approval of CSRs, which the cron job handles in the meantime.

For a more detailed history of the CSR issue, see openshift-metal3/dev-scripts#260

I need to check back in on this. I think we may have finished enough that everything works normally for workers. We still lack introspection data on the BareMetalHost objects for masters; that data is needed so the actuator can copy it over to the Machine.
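The cron job's core amounts to something like this (a sketch, assuming "pending" means a CSR with no status conditions yet):

```bash
# Sketch of the stopgap approver run from cron: approve every CSR that
# is still pending, i.e. has no conditions recorded on its status.
pending=$(oc get csr -o go-template='{{range .items}}{{if not .status.conditions}}{{.metadata.name}}{{" "}}{{end}}{{end}}')
if [ -n "${pending}" ]; then
  oc adm certificate approve ${pending}
fi
```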

russellb added a commit to russellb/dev-scripts that referenced this issue Jul 5, 2019
The only remaining change we have running after exiting early from the
install process is adding IPs to the master Machines.  This is to
enable auto approval of CSRs.  We also have a cron job that does this,
so it's not necessary to do this in the middle of the install.  The
change moves it to post-install.

Related issues:
openshift-metal3/kni-installer#60
openshift-metal3#260
metal3-io/baremetal-operator#242
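A post-install replacement could look roughly like this (a hypothetical sketch: the node/Machine name matching is illustrative, and --subresource=status needs a far newer oc than the original script used):

```bash
# Hypothetical post-install take on add-machine-ips.sh: copy each master
# node's InternalIP onto the matching Machine's status so the
# machine-approver can validate that node's kubelet CSRs.
for node in $(oc get nodes -l node-role.kubernetes.io/master -o name); do
  name="${node#node/}"
  ip="$(oc get node "${name}" \
        -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')"
  # Illustrative match between node and Machine names.
  machine="$(oc -n openshift-machine-api get machines -o name | grep "${name}")"
  oc -n openshift-machine-api patch "${machine}" --subresource=status --type=merge \
    -p "{\"status\":{\"addresses\":[{\"type\":\"InternalIP\",\"address\":\"${ip}\"}]}}"
done
```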
russellb added a commit to russellb/kni-installer that referenced this issue Jul 5, 2019
markmc pushed a commit that referenced this issue Jul 8, 2019