Special scripts to set up OpenShift integration with Metal³ for demos

Metal³ Installer Dev Scripts

This set of scripts configures some libvirt VMs and associated virtualbmc processes to enable deploying to them as dummy baremetal nodes.

This is very similar to how we do TripleO testing, so we reuse some roles from tripleo-quickstart here.

We are using this repository as a work space while we figure out what the installer needs to do for bare metal provisioning. As that logic is ironed out, we are moving it into the facet wrapper API, or the go-based kni-installer. Eventually that kni-installer fork of openshift-installer will be merged back, and we won't need much, or any, of this. For now, these tools are the canonical way to set up a Metal³ cluster.


Requirements

  • CentOS 7.5 or greater (installed from 7.4 or newer)
  • a file system that supports d_type (see the Troubleshooting section for more information)
  • ideally a bare metal host
  • a user with passwordless sudo access
  • a valid pull secret (JSON string) from



Configuration

Make a copy of the example config to config_$, and set the PULL_SECRET variable to the secret obtained in the previous step.

For baremetal test setups where you don't require the fake-baremetal VM nodes, you may also set NODES_FILE to reference a manually created JSON file with the node details (see ironic_hosts.json.example), and NODES_PLATFORM, which can be set to e.g. "baremetal" to disable the libvirt master/worker node setup. Other variables that can be overridden are listed in the same config file.
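For illustration, a minimal config might look like the following sketch. Only PULL_SECRET, NODES_FILE, and NODES_PLATFORM come from the text above; the values shown are placeholders, not working credentials or paths:

```shell
# Placeholder pull secret (JSON string) -- substitute the real one
export PULL_SECRET='{"auths": {"example.com": {"auth": "..."}}}'

# Optional: use real baremetal nodes instead of the libvirt fake-baremetal VMs
#export NODES_FILE="/path/to/ironic_hosts.json"
#export NODES_PLATFORM="baremetal"
```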


For a new setup, run:


The Makefile will run the scripts in this order:

  • ./
  • ./

This should result in some (stopped) VMs created by tripleo-quickstart on the local virthost and some other dependencies installed.

  • ./

After this step, you can run the facet server with:

$ go run "${GOPATH}/src/" server
  • ./

This will set up containers for the Ironic infrastructure on the host server and download the resources it requires.

The Ironic container is stored at, built from

  • ./

This will pull and build openshift-install and some other components from source.

  • ./

This will run the kni-installer to generate ignition configs for the bootstrap node and the masters. The installer then launches both the bootstrap VM and master nodes using the Terraform providers for libvirt and Ironic. Once bootstrap is complete, the installer removes the bootstrap node and the cluster will be online.

You can view the IP for the bootstrap node by running virsh net-dhcp-leases baremetal. You can SSH to it using ssh core@IP.
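The IP lookup can be scripted. This sketch is a hypothetical helper, not part of the repo; it assumes the libvirt lease-table layout (expiry date and time, MAC, protocol, IP/prefix, hostname) and that the bootstrap VM's hostname contains "bootstrap":

```shell
# parse_lease_ip: print the IPv4 address for a matching hostname from
# `virsh net-dhcp-leases` output read on stdin (IP/prefix is field 5)
parse_lease_ip() {
  awk -v host="$1" '$0 ~ host {print $5}' | cut -d/ -f1
}

# Usage on the virthost:
#   sudo virsh net-dhcp-leases baremetal | parse_lease_ip bootstrap
#   ssh "core@$(sudo virsh net-dhcp-leases baremetal | parse_lease_ip bootstrap)"
```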

Then you can interact with the k8s API on the bootstrap VM, e.g. sudo oc status --verbose --config /etc/kubernetes/kubeconfig.

You can also follow the bootstrap script's progress via journalctl -b -f -u bootkube.service.

  • ./

After this step runs, the cluster brought up in the previous step is updated by deploying the baremetal-operator into the pre-existing "openshift-machine-api" project/namespace.

Interacting with the deployed cluster

When the master nodes are up and the cluster is active, you can interact with the API:

$ oc --config ocp/auth/kubeconfig get nodes
NAME       STATUS    ROLES     AGE       VERSION
master-0   Ready     master    20m       v1.12.4+50c2f2340a
master-1   Ready     master    20m       v1.12.4+50c2f2340a
master-2   Ready     master    20m       v1.12.4+50c2f2340a
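A scripted readiness check over that output (a hypothetical helper, not part of the repo) could look like:

```shell
# all_ready: succeed only if no node in `oc get nodes` output (read on
# stdin) reports a NotReady STATUS; column 2 is the STATUS column
all_ready() {
  awk '$2 ~ /NotReady/ {bad=1} END {exit bad}'
}

# e.g. oc --config ocp/auth/kubeconfig get nodes | all_ready && echo "all nodes Ready"
```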

Interacting with Ironic directly

For manual debugging via openstackclient, you can use the following:

export OS_TOKEN=fake-token
export OS_URL=http://localhost:6385/
openstack baremetal node list


Cleanup

  • To clean up the ocp deployment, run ./

  • To clean up the dummy baremetal VMs and associated libvirt resources run ./

e.g. to clean and re-install ocp run:

rm -fr ocp

Or, you can run make clean, which will run all of the cleanup steps.


Troubleshooting

If you're having trouble, try systemctl restart libvirtd.

You can use:

virsh console domain_name

to get to the bootstrap node. The username is core and the password is notworking.

Determining your filesystem type

If you're not sure what filesystem you have, run df -T; the second column will show the type.

Determining if your filesystem supports d_type

If the above command returns ext4 or btrfs, d_type is supported by default. If it returns xfs, check with:

xfs_info /mount-point

If you see ftype=1 then you have d_type support.
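The checks above can be wrapped in a small helper (hypothetical, shown only to illustrate combining df -T with xfs_info):

```shell
# fs_type: print the filesystem type of a path (2nd column of `df -T`)
fs_type() {
  df -T "$1" | awk 'NR==2 {print $2}'
}

# ext4 and btrfs support d_type by default; if fs_type reports xfs, also check:
#   xfs_info /mount-point | grep -o 'ftype=[01]'   # ftype=1 means d_type support
```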

Modifying cpu/memory/disk resources

The default CPU/memory/disk resources for the virtual machines are defined in the tripleo-quickstart-config/metalkube-nodes.yml file:

  • 4 vCPUs
  • 8 GB RAM
  • 50 GB disk

If required, the CPU/memory/disk resources can be customized per host role (openshift_master_memory, openshift_worker_memory, ...) by modifying that file prior to running the script.
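For example, an override could be scripted like this (a hypothetical helper; the key names come from the role variables mentioned above, and the file is assumed to hold top-level "key: value" lines):

```shell
# set_node_var: replace a "key: value" line in a YAML-style vars file
set_node_var() {  # usage: set_node_var FILE KEY VALUE
  sed -i "s/^${2}:.*/${2}: ${3}/" "$1"
}

# e.g. set_node_var tripleo-quickstart-config/metalkube-nodes.yml openshift_master_memory 16384
```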
