This implements part of the plan from:

When we originally added the pinned RHCOS metadata `rhcos.json`
to the installer, we also changed the coreos-assembler `meta.json`
format into an arbitrary new format in the name of some cleanups.
In retrospect, this was a big mistake because we now have two
formats.

Then Fedora CoreOS appeared and added streams JSON as a public API.

We decided to unify on streams metadata; there's now a published
Go library for it:

Among other benefits, it is a single file that supports multiple
architectures.
UPI installs should now use stream metadata, particularly
to find public cloud images.  This is exposed via a new
`openshift-install coreos print-stream-json` command.

This is an important preparatory step for exposing this via
`oc` as well as having something in the cluster update to

HOWEVER as a (really hopefully temporary) hack, we *duplicate*
the metadata so that IPI installs use the new stream format,
and UPI CI jobs can still use the old format (with different RHCOS versions).

We will port the UPI docs and CI jobs after this merges.

Co-authored-by: Matthew Staebler <>

Installer Overview

The OpenShift Installer is designed to help users, ranging from novices to experts, create OpenShift clusters in various environments. By default, the installer acts as an installation wizard, prompting the user for values that it cannot determine on its own and providing reasonable defaults for everything else. For more advanced users, the installer provides facilities for varying levels of customization.

On supported platforms, the installer is also capable of provisioning the underlying infrastructure for the cluster. It is recommended that most users make use of this functionality in order to avoid having to provision their own infrastructure. For other platforms or in scenarios where installer-created infrastructure would be incompatible, the installer can stop short of creating the infrastructure, and allow the user to provision their own infrastructure using the cluster assets generated by the installer.

Cluster Installation Process

OpenShift is unique in that its management extends all the way down to the operating system itself. Every machine boots with a configuration which references resources hosted in the cluster it is joining. This allows the cluster to manage itself as updates are applied. A downside to this approach, however, is that new clusters have no way of starting without external help - every machine in the to-be-created cluster is waiting on the to-be-created cluster.

OpenShift breaks this dependency loop using a temporary bootstrap machine. This bootstrap machine is booted with a concrete Ignition Config which describes how to create the cluster. This machine acts as a temporary control plane whose sole purpose is launching the rest of the cluster.

The main assets generated by the installer are the Ignition Configs for the bootstrap, master, and worker machines. Given these three configs (and correctly configured infrastructure), it is possible to start an OpenShift cluster. The process for bootstrapping a cluster looks like the following:

  1. The bootstrap machine boots and starts hosting the remote resources required for the master machines to boot.
  2. The master machines fetch the remote resources from the bootstrap machine and finish booting.
  3. The master machines use the bootstrap node to form an etcd cluster.
  4. The bootstrap node starts a temporary Kubernetes control plane using the newly-created etcd cluster.
  5. The temporary control plane schedules the production control plane to the master machines.
  6. The bootstrap node injects OpenShift-specific components via the temporary control plane.
  7. The temporary control plane shuts down, leaving just the production control plane.
  8. The installer tears down the bootstrap node.
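The handoff above can be driven from the command line. A minimal sketch for a user-provisioned install, using the installer's wait-for subcommands (the asset directory name is an example):

```shell
# Sketch: produce the three Ignition Configs, then follow the bootstrap flow.
# "cluster-dir" is a placeholder asset directory.
openshift-install --dir=cluster-dir create ignition-configs

# ...boot the bootstrap, master, and worker machines with those configs...

# Blocks until the temporary control plane has handed off (steps 1-7).
openshift-install --dir=cluster-dir wait-for bootstrap-complete

# Step 8: tear down the bootstrap machine and its infrastructure.
openshift-install --dir=cluster-dir destroy bootstrap
```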

The result of this bootstrapping process is a fully running OpenShift cluster. The cluster then downloads and configures the remaining components needed for day-to-day operation, including the creation of worker machines on supported platforms.

Key Concepts

While striving to remain simple and easy to use, the installer allows many aspects of the clusters it creates to be customized. It is helpful to understand certain key concepts before attempting to customize the installation.


The OpenShift Installer operates on the notion of creating and destroying targets. Similar to other tools which operate on a graph of dependencies (e.g. make, systemd), each target represents a subset of the dependencies in the graph. The main target in the installer creates a cluster, but the other targets allow the user to interrupt this process and consume or modify the intermediate artifacts (e.g. the Kubernetes manifests that will be installed into the cluster). Only the immediate dependencies of a target are written to disk by the installer, but the installer can be invoked multiple times.

The following targets can be created by the installer:

  • install-config - The install config contains the main parameters for the installation process. This configuration provides the user with more options than the interactive prompts and comes pre-populated with default values.
  • manifests - This target outputs all of the Kubernetes manifests that will be installed on the cluster.
  • ignition-configs - These are the three Ignition Configs for the bootstrap, master, and worker machines.
  • cluster - This target provisions the cluster and its associated infrastructure.

The following targets can be destroyed by the installer:

  • cluster - This destroys the created cluster and its associated infrastructure.
  • bootstrap - This destroys the bootstrap infrastructure.
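The targets compose: each invocation consumes the intermediate artifacts of the previous one. A sketch of walking through them explicitly (the directory name is an example):

```shell
# Sketch: create targets one at a time in a single asset directory
# ("mycluster" is a placeholder). Each step builds on the last.
openshift-install --dir=mycluster create install-config
openshift-install --dir=mycluster create manifests
openshift-install --dir=mycluster create ignition-configs
openshift-install --dir=mycluster create cluster

# Destroy targets, when needed:
openshift-install --dir=mycluster destroy bootstrap   # free bootstrap infrastructure
openshift-install --dir=mycluster destroy cluster     # remove the cluster entirely
```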

Multiple Invocations

In order to allow users to customize their installation, the installer can be invoked multiple times. The state is stored in a hidden file in the asset directory and contains all of the intermediate artifacts. This allows the installer to pause during the installation and wait for the user to modify intermediate artifacts.

For example, you can create an install config and save it in a cluster-agnostic location:

openshift-install --dir=initial create install-config
mv initial/install-config.yaml .
rm -rf initial

You can use the saved install-config for future clusters by copying it into the asset directory and then invoking the installer:

mkdir cluster-0
cp install-config.yaml cluster-0/
openshift-install --dir=cluster-0 create cluster

Supplying a previously-generated install-config like this is explicitly part of the stable installer API. Note that the installer consumes install-config.yaml from the asset directory. At any point before running destroy cluster, install-config.yaml can be regenerated by running openshift-install --dir=cluster-0 create install-config.

You can also edit the assets in the asset directory during a single run. For example, you can adjust the cluster-version operator's configuration:

mkdir cluster-1
cp install-config.yaml cluster-1/
openshift-install --dir=cluster-1 create manifests  # warning: this target is unstable
"${EDITOR}" cluster-1/manifests/cvo-overrides.yaml
openshift-install --dir=cluster-1 create cluster

As the unstable warning suggests, the manifests target, along with the names and contents of the files it generates, is an unstable installer API. It is occasionally useful to make alterations like this as one-off changes, but don't expect them to keep working on subsequent installer releases.

CoreOS bootimages

The openshift-install binary contains pinned versions of RHEL CoreOS "bootimages" (e.g. OpenStack qcow2, AWS AMI, bare metal .iso). Fully automated installs use these by default.

For UPI (User Provisioned Infrastructure) installs, you can use the openshift-install coreos print-stream-json command to access information about the bootimages in CoreOS Stream Metadata format.

For example, this command will print the x86_64 AMI for us-west-1:

$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image'

For on-premise clouds (e.g. OpenStack) with UPI installs, you may need to manually copy a bootimage into the infrastructure. Here's an example command to print the x86_64 qcow2 file for openstack:

$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.openstack.formats["qcow2.gz"]'
{
  "disk": {
    "location": "",
    "signature": "",
    "sha256": "abc2add9746eb7be82e6919ec13aad8e9eae8cf073d8da6126d7c95ea0dee962",
    "uncompressed-sha256": "9ed73a4e415ac670535c2188221e5a4a5f3e945bc2e03a65b1ed4fc76e5db6f2"
  }
}
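After copying a bootimage, the `sha256` field in the metadata lets you verify the download. A minimal sketch of the check with `sha256sum`; since the real `location` URL is elided above, a locally created stand-in file takes the place of the downloaded image:

```shell
set -e
# Stand-in for the real download, which would be something like:
#   curl -L -o rhcos-openstack.qcow2.gz "$location"
printf 'example image data' > rhcos-openstack.qcow2.gz

# In practice the expected hash comes from the "sha256" field in the stream
# metadata; here we compute it from the stand-in file so the check passes.
expected_sha256=$(sha256sum rhcos-openstack.qcow2.gz | cut -d' ' -f1)

# sha256sum -c reads "HASH  FILENAME" lines and verifies each file.
echo "${expected_sha256}  rhcos-openstack.qcow2.gz" | sha256sum -c -
```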