Doc: OKD4 on Proxmox #27

Closed · jomeier opened this issue Dec 23, 2019 · 14 comments

@jomeier
Contributor

jomeier commented Dec 23, 2019

Hi,

although this is not full documentation, I'd like to share a few words about how OKD4 can be installed on Proxmox (without storage).

  1. Assumptions:

    • You use PFSense for load balancing, DNS resolution, and as the DHCP server in your private OKD network
    • Your config files and scripts are located on the Proxmox host in this directory: /root/install-config
    • The name of your storage in Proxmox is 'local' (all disk images are stored there)
    • Your cloud provider uses VLANs and requires an MTU of 1400 (in my case that's Hetzner)
    • Proxmox version: 6 (Virtual Environment 6.0-15 or higher)
    • You installed openshift-install from the OKD 4 preview release 1
    • MAC addresses for each VM must be unique. I use static IP mapping in PFSense's DHCP server.
    • I had to set up a second Proxmox server for the workers (a different story!), but the scripts for the workers are similar; just change master.ign to worker.ign.
  2. Create an SSH key pair, for example:
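    A minimal sketch (the file name and comment are arbitrary; the contents of the .pub file go into the sshKey field of install-config.yaml below):

    ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/okd -C okd@example.com
    cat ~/.ssh/okd.pub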

  3. Create install-config.yaml

    apiVersion: v1
    baseDomain: <your domain name e.g. example.com>
    compute:
    - name: worker
      replicas: 0
    controlPlane:
      name: master
      replicas: 3
    metadata:
      name: okd
    networking:
      clusterNetworks:
      - cidr: 10.254.0.0/16
        hostPrefix: 24
      networkType: OpenShiftSDN
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}
    pullSecret: '<Your pull secret from https://cloud.redhat.com/openshift/install/vsphere/user-provisioned>'
    sshKey: <Your public SSH key beginning with ssh-rsa ...>
    
  4. Create ignition files:
    IMPORTANT: Back up your install-config.yaml first, because the openshift installer consumes and deletes it :-)

    openshift-install create ignition-configs

    This generates bootstrap.ign, master.ign and worker.ign (plus an auth/ directory with the kubeconfig and kubeadmin password) in the current directory. The scripts below expect the .ign files in /root/install-config.
    
  5. Install Proxmox and PFSense (I use its DNS resolver and DHCP server).
    If your cloud provider requires an MTU of 1400, you may have to patch Proxmox to be able to set that on your VMs. I followed the instructions at https://forum.proxmox.com/threads/set-mtu-on-guest.45078/page-2 for that. Setting up networking on Hetzner with only one NIC per VM in a VLAN was a nightmare (routing, ...), but now it works.

  6. Put your DNS entries in the DNS resolver of PFSense. The etcd SRV entries should be entered in Services->DNS Resolver->General Settings->Custom Options like this:

    server:
    local-data: "_etcd-server-ssl._tcp.okd.<YOUR HOSTNAME e.g. example.com>  60 IN    SRV 0        10     2380 etcd-0.okd.<YOUR HOSTNAME e.g. example.com>."
    local-data: "_etcd-server-ssl._tcp.okd.<YOUR HOSTNAME e.g. example.com>  60 IN    SRV 0        10     2380 etcd-1.okd.<YOUR HOSTNAME e.g. example.com>."
    local-data: "_etcd-server-ssl._tcp.okd.<YOUR HOSTNAME e.g. example.com>  60 IN    SRV 0        10     2380 etcd-2.okd.<YOUR HOSTNAME e.g. example.com>."
    
    local-zone: "apps.okd.<YOUR HOSTNAME e.g. example.com>" redirect
    local-data: "apps.okd.<YOUR HOSTNAME e.g. example.com> 60 IN A <IP ADDRESS OF YOUR INGRESS ROUTER/DOMAIN NAME>"
    

    I had to add the last two entries for apps.okd because during the installation some OKD services communicate with apps.okd.... addresses, and I had network hairpinning problems that I couldn't resolve otherwise.
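
    To sanity-check these records (assuming example.com as your domain), query PFSense from a host inside the OKD network:

    dig +short _etcd-server-ssl._tcp.okd.example.com SRV
    dig +short foo.apps.okd.example.com A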

  7. Download the FCOS image for QEMU from https://builds.coreos.fedoraproject.org/prod/streams/testing/builds/31.20191217.2.0/x86_64/fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2.xz (or a newer build) and rename it to fedora-coreos.qcow2, for example:
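
    A sketch, assuming the scripts and images live in /root/install-config as stated above:

    cd /root/install-config
    curl -LO https://builds.coreos.fedoraproject.org/prod/streams/testing/builds/31.20191217.2.0/x86_64/fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2.xz
    unxz fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2.xz
    mv fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2 fedora-coreos.qcow2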

  8. Scripts for bootstrap and master machines. Run them on the Proxmox host:

    create-cluster.sh

    ./create-bootstrap.sh 109 bootstrap <some unique MAC address #0 for static IP through DHCP e.g.: 12:34:56:78:90:12>
    ./create-masters.sh
    

    create-bootstrap.sh

    ID=$1
    NAME=$2
    MACADDR=$3
    
    # Remove any previous VM with this ID
    qm stop $ID
    sleep 10
    
    qm destroy $ID
    sleep 10
    
    # Create the VM and import the FCOS image as its boot disk
    qm create $ID --name $NAME --memory 2048 --net0 virtio,bridge=vmbr1,macaddr=$MACADDR,mtu=1400
    qm importdisk $ID fedora-coreos.qcow2 local
    qm set $ID --scsihw virtio-scsi-pci --scsi0 local:$ID/vm-$ID-disk-0.raw
    qm set $ID --boot c --bootdisk scsi0
    qm set $ID --serial0 socket --vga serial0
    
    # Pass the ignition config to FCOS via QEMU's fw_cfg device
    echo "args: -fw_cfg name=opt/com.coreos/config,file=/root/install-config/bootstrap.ign" >> /etc/pve/qemu-server/$ID.conf
    
    qm start $ID
    

    create-masters.sh

    ./create-master.sh 110 master0 <some unique MAC address #1 for static IP through DHCP e.g.: 12:34:56:78:90:12>
    ./create-master.sh 111 master1 <some unique MAC address #2 for static IP through DHCP e.g.: 12:34:56:78:90:12>
    ./create-master.sh 112 master2 <some unique MAC address #3 for static IP through DHCP e.g.: 12:34:56:78:90:12>
    

    create-master.sh

    ID=$1
    NAME=$2
    MACADDR=$3
    
    qm stop $ID
    sleep 10
    
    qm destroy $ID
    sleep 10
    
    # !!! Important: minimum 2 cores and 4 GB RAM, otherwise the SDN won't start !!!
    qm create $ID --name $NAME --cores 3 --memory 12000 --net0 virtio,bridge=vmbr1,macaddr=$MACADDR,mtu=1400
    qm importdisk $ID fedora-coreos.qcow2 local
    qm set $ID --scsihw virtio-scsi-pci --scsi0 local:$ID/vm-$ID-disk-0.raw
    qm set $ID --boot c --bootdisk scsi0
    qm set $ID --serial0 socket --vga serial0
    
    echo "args: -fw_cfg name=opt/com.coreos/config,file=/root/install-config/master.ign" >> /etc/pve/qemu-server/$ID.conf
    
    qm start $ID
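
    Once the bootstrap and master VMs are up, bootstrap progress can be watched with the installer (a sketch; run it in the directory where the ignition files were generated, since openshift-install reads the assets it created there):

    openshift-install wait-for bootstrap-complete --log-level=info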
    

I hope that's enough to get you started. The installation and configuration of PFSense is worth an article of its own :-)

Have fun.

Greetings,

Josef

@vrutkovs
Member

Looks interesting! Could you convert this into a pull request which adds an examples/install_on_proxmox.md file with this info?

I'm not quite following the static IP part - since it's being set in the DHCP server, it's in fact a "dynamic IP", as the host follows the DHCP server's rules. "Static IP" would mean patching Ignition for each particular master to set the IP when deploying the infra. See the vSphere UPI example.
Did I understand that correctly?

@jomeier
Contributor Author

jomeier commented Dec 27, 2019

@vrutkovs
I'm not sure if the installer sets hostnames for the VMs (bootstrap, masters, workers). Does it set them? If yes, which names are used?

@vrutkovs
Member

The installer doesn't set these, yeah - it has to be done separately, either via DHCP or by running hostnamectl / writing /etc/hostname
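
For the /etc/hostname route, a minimal Ignition fragment could look like this (a sketch assuming Ignition spec 3.0.0, which FCOS used at the time; "master0" is a placeholder, and the fragment would have to be merged into the generated .ign per host):

{
  "ignition": { "version": "3.0.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "overwrite": true,
        "contents": { "source": "data:,master0" }
      }
    ]
  }
}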

@vrutkovs
Member

Please convert it into a PR

@livelikealegend

Maybe create an Ansible playbook for this?

@vrutkovs
Member

Closing - this needs to be a PR for guides

@jwklijnsma

Maybe create an Ansible playbook for this?

I have a working playbook for deploying the VM + bootstrap.

@Bilal-io

@jwklijnsma do you mind sharing it?

@jwklijnsma

@jwklijnsma do you mind sharing it?

Not yet - I need to move more things into vars, and cloud-init is broken with CoreOS on Proxmox VE.

@nate-duke

@jwklijnsma do you mind sharing it?

Not yet - I need to move more things into vars, and cloud-init is broken with CoreOS on Proxmox VE.

If it helps, you can embed the ignition config in the VM (AFAICT cloud-init is a no-go in CoreOS for good) using the "args" argument to qm create when you create the VM. E.g.:

# mk-fcos.sh
VMID=$(pvesh get /cluster/nextid)

qm create ${VMID} \
  --name fcos-${VMID} \
  --pool coreos \
  --bios ovmf \
  --storage vm-disk-images \
  --scsihw virtio-scsi-pci \
  --scsi0 vm-disk-images:${VMID}/vm-${VMID}-disk-1.qcow2 \
  --efidisk0 vm-disk-images:1,format=qcow2 \
  --net0 virtio,bridge=vmbr0 \
  --memory 8192 \
  --serial0 socket \
  --args '-fw_cfg name=opt/com.coreos/config,file=/mnt/pve/vm-disk-images/images/ignition/fcos.ign' && \
qm importdisk ${VMID} /mnt/pve/vm-disk-images/images/dist-images/fedora-coreos-31.20200420.3.0-qemu.qcow2 vm-disk-images --format=qcow2 && \
qm showcmd ${VMID}
qm start ${VMID}

@jwklijnsma

If it helps, you can embed the ignition config in the VM using the "args" argument to qm create when you create the VM. (full script above)

I use the .ign file to bootstrap OKD 4.4, so I need to have a look at it.

@jwklijnsma

Hi, I have a working playbook, but some parts need to be done by hand. OKD is still in beta.

Maybe some people would like to work together on the project?

@vrutkovs
Member

vrutkovs commented Jul 2, 2020

Hi, I have a working playbook, but some parts need to be done by hand

Feel free to submit a PR with a description of your steps to the guides/ folder.

Another idea worth exploring - Proxmox has a libvirt socket, so we could use the libvirt IPI instructions to create machines automatically. The OKD installer doesn't have it enabled out of the box, but a separate build is available at registry.svc.ci.openshift.org/origin/4.5:libvirt-installer. An external LB and DNS are still required, though.

@jomeier
Contributor Author

jomeier commented Aug 4, 2020

There is a Terraform provider for Proxmox:

https://github.com/Telmate/terraform-provider-proxmox

I'll try out whether the "--args" parameter is already implemented. With that it would be possible to pass ignition files to FCOS images :-)
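
If it is, usage could look roughly like this (a hypothetical sketch - the args attribute and the exact schema of the Telmate provider are assumptions I haven't verified yet):

resource "proxmox_vm_qemu" "master0" {
  name        = "master0"
  target_node = "pve"   # assumed node name
  cores       = 3
  memory      = 12000

  # assumed raw QEMU args pass-through, mirroring the fw_cfg trick above
  args = "-fw_cfg name=opt/com.coreos/config,file=/root/install-config/master.ign"
}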
