Enhancement: Support of OSA for zVM HCP (#230)
Support of OSA for zVM HCP

- Added support for OSA network mode for zVM HCP.
- Renamed the variable from `osa` to `interface` to keep it generic.
- Updated the documentation accordingly.

---------

Signed-off-by: root <root@m42lp53.lnxero1.boe>
Signed-off-by: veera-damisetti <damisetti.veerabhadrarao@ibm.com>
Signed-off-by: Klaus Smolin <smolin@de.ibm.com>
Signed-off-by: Mohammed Zeeshan Ahmed <mohammed.zee1000@gmail.com>
Co-authored-by: root <root@m42lp53.lnxero1.boe>
Co-authored-by: Klaus Smolin <88041391+smolin-de@users.noreply.github.com>
Co-authored-by: Amadeuds Podvratnik <pod@de.ibm.com>
Co-authored-by: Mohammed Ahmed <mohammed.zee1000@gmail.com>
5 people committed Feb 25, 2024
1 parent 9c6f371 commit 51849e4
Showing 13 changed files with 66 additions and 32 deletions.
14 changes: 13 additions & 1 deletion docs/set-variables-group-vars.md
@@ -253,14 +253,26 @@
**hypershift.agents_parms.ram** | RAM for agents | 16384
**hypershift.agents_parms.vcpus** | vCPUs for agents | 4
**hypershift.agents_parms.nameserver** | Nameserver to be used for agents | 192.168.10.1
**hypershift.agents_parms.zvm_parameters.network_mode** | Network mode for zvm nodes <br /> Supported modes: vswitch | vswitch
**hypershift.agents_parms.zvm_parameters.network_mode** | Network mode for zvm nodes <br /> Supported modes: vswitch,osa | vswitch
**hypershift.agents_parms.zvm_parameters.disk_type** | Disk type for zvm nodes <br /> Supported disk types: fcp, dasd | dasd
**hypershift.agents_parms.zvm_parameters.vcpus** | CPUs for each zvm node | 4
**hypershift.agents_parms.zvm_parameters.memory** | RAM for each zvm node | 16384
**hypershift.agents_parms.zvm_parameters.nameserver** | Nameserver for compute nodes | 192.168.10.1
**hypershift.agents_parms.zvm_parameters.subnetmask** | Subnet mask for compute nodes | 255.255.255.0
**hypershift.agents_parms.zvm_parameters.gateway** | Gateway for compute nodes | 192.168.10.1
**hypershift.agents_parms.zvm_parameters.nodes** | Set of parameters for zvm nodes <br /> Provide the details of each zvm node here |
**hypershift.agents_parms.zvm_parameters.nodes.name** | Name of the zVM guest | m1317002
**hypershift.agents_parms.zvm_parameters.nodes.host** | Host name of the zVM guest, <br /> used to log in to the 3270 console | boem1317
**hypershift.agents_parms.zvm_parameters.nodes.user** | Username for logging in to the zVM guest | m1317002
**hypershift.agents_parms.zvm_parameters.nodes.password** | Password for logging in to the zVM guest | password
**hypershift.agents_parms.zvm_parameters.nodes.interface.ifname** | Network interface name for the zVM guest | encbdf0
**hypershift.agents_parms.zvm_parameters.nodes.interface.nettype** | Network type of the zVM guest interface | qeth
**hypershift.agents_parms.zvm_parameters.nodes.interface.subchannels** | Subchannels of the zVM guest interface | 0.0.bdf0,0.0.bdf1,0.0.bdf2
**hypershift.agents_parms.zvm_parameters.nodes.interface.options** | Configuration options for the interface | layer2=1
**hypershift.agents_parms.zvm_parameters.nodes.interface.ip** | IP address to be used for the zVM node | 192.168.10.1
**hypershift.agents_parms.zvm_parameters.nodes.dasd.disk_id** | Disk ID of the dasd disk to be used for the zVM node | 4404
**hypershift.agents_parms.zvm_parameters.nodes.lun** | Details of the fcp disk to be used for the zVM node | 4404
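Taken together, the `interface` keys feed the `rd.znet=` dracut argument built later in the kernel parm file. A minimal Python sketch of that composition, using the hypothetical example values from the table above:

```python
# Example interface values from the docs table (hypothetical, for illustration).
node_interface = {
    "ifname": "encbdf0",
    "nettype": "qeth",
    "subchannels": "0.0.bdf0,0.0.bdf1,0.0.bdf2",
    "options": "layer2=1",
    "ip": "192.168.10.1",
}

def rd_znet(iface: dict) -> str:
    """Compose the rd.znet= kernel argument: nettype,subchannels,options."""
    return f"rd.znet={iface['nettype']},{iface['subchannels']},{iface['options']}"

print(rd_znet(node_interface))
# rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1
```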


## 17 - (Optional) Disconnected cluster setup
**Variable Name** | **Description** | **Example**
34 changes: 20 additions & 14 deletions inventories/default/group_vars/all.yaml.template
@@ -275,6 +275,7 @@ hypershift:
#Hosted Control Plane Parameters

hcp:
high_availabiliy: true
clusters_namespace:
hosted_cluster_name:
basedomain:
@@ -320,13 +321,14 @@
ram: 16384
vcpus: 4
nameserver:

storage:
pool_path: "/var/lib/libvirt/images/"


# zVM specific parameters - s390x

zvm_parameters:
network_mode: vswitch # Supported modes: vswitch
network_mode: vswitch # Supported modes: vswitch,osa
disk_type: # Supported modes: fcp , dasd
vcpus: 4
memory: 16384
@@ -335,13 +337,15 @@
gateway:

nodes:
- name:
host:
user:
password:
osa:
- name:
host:
user:
password:
interface:
ifname: encbdf0
id: 0.0.bdf0,0.0.bdf1,0.0.bdf2
nettype: qeth
subchannels: 0.0.bdf0,0.0.bdf1,0.0.bdf2
options: layer2=1
ip:

# Required if disk_type is dasd
@@ -355,13 +359,15 @@
- wwpn:
fcp:

- name:
host:
user:
password:
osa:
- name:
host:
user:
password:
interface:
ifname: encbdf0
id: 0.0.bdf0,0.0.bdf1,0.0.bdf2
nettype: qeth
subchannels: 0.0.bdf0,0.0.bdf1,0.0.bdf2
options: layer2=1
ip:

dasd:
2 changes: 1 addition & 1 deletion roles/add_hc_workers_to_haproxy_hypershift/tasks/main.yaml
@@ -1,7 +1,7 @@
---

- name: Get the IPs of Hosted Cluster Workers
shell: oc get no -o wide --kubeconfig=/root/ansible_workdir/{{ hypershift.hcp.hosted_cluster_name }}-kubeconfig --no-headers|grep -i worker| awk '{print $6}'
shell: oc get no -o wide --kubeconfig=/root/ansible_workdir/hcp-kubeconfig --no-headers|grep -i worker| awk '{print $6}'
register: hosted_workers

- name: Configuring HAproxy for Hosted Cluster
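The shell pipeline in the task above filters `oc get no -o wide` output for worker rows and prints the sixth column, the node's internal IP. An equivalent Python sketch on illustrative output (the node names and addresses are made up):

```python
# Illustrative `oc get no -o wide --no-headers` output (hypothetical nodes).
sample = """\
compute-0.hcp.example.com   Ready   worker   10m   v1.27.3   192.168.10.21   <none>   RHCOS   5.14   cri-o://1.27
compute-1.hcp.example.com   Ready   worker   10m   v1.27.3   192.168.10.22   <none>   RHCOS   5.14   cri-o://1.27
"""

def worker_ips(no_wide_output: str) -> list:
    """Python equivalent of: grep -i worker | awk '{print $6}'."""
    ips = []
    for line in no_wide_output.splitlines():
        if "worker" in line.lower():
            ips.append(line.split()[5])  # awk's $6 is index 5
    return ips

print(worker_ips(sample))
# ['192.168.10.21', '192.168.10.22']
```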
4 changes: 2 additions & 2 deletions roles/boot_agents_hypershift/tasks/main.yaml
@@ -1,6 +1,6 @@
---
- name: Create qemu image for agents
command: "qemu-img create -f qcow2 /home/libvirt/images/{{ hypershift.hcp.hosted_cluster_name }}-agent{{ item }}.qcow2 {{ hypershift.agents_parms.disk_size }}"
command: "qemu-img create -f qcow2 {{ hypershift.agents_parms.storage.pool_path }}{{ hypershift.hcp.hosted_cluster_name }}-agent{{ item }}.qcow2 {{ hypershift.agents_parms.disk_size }}"
loop: "{{ range(hypershift.agents_parms.agents_count|int) | list }}"

- name: Boot Agents
@@ -19,7 +19,7 @@
--cpu host \
--vcpus="{{ hypershift.agents_parms.vcpus }}" \
--location "/var/lib/libvirt/images/pxeboot/,kernel=kernel.img,initrd=initrd.img" \
--disk /home/libvirt/images/{{ hypershift.hcp.hosted_cluster_name }}-agent{{ item }}.qcow2 \
--disk {{ hypershift.agents_parms.storage.pool_path }}{{ hypershift.hcp.hosted_cluster_name }}-agent{{ item }}.qcow2 \
--network network:{{ env.bridge_name }},mac=$mac_address \
--graphics none \
--noautoconsole \
11 changes: 9 additions & 2 deletions roles/boot_zvm_nodes_hypershift/tasks/main.yaml
@@ -21,7 +21,8 @@
--memory "{{ hypershift.agents_parms.zvm_parameters.memory }}" \
--kernel 'file:///var/lib/libvirt/images/pxeboot/kernel.img' \
--initrd 'file:///var/lib/libvirt/images/pxeboot/initrd.img' \
--cmdline "$(cat /root/ansible_workdir/agent-{{ item }}.parm)"
--cmdline "$(cat /root/ansible_workdir/agent-{{ item }}.parm)" \
--network "{{ hypershift.agents_parms.zvm_parameters.network_mode }}"
- name: Attaching dasd disk
shell: vmcp attach {{ hypershift.agents_parms.zvm_parameters.nodes[item].dasd.disk_id }} to {{ hypershift.agents_parms.zvm_parameters.nodes[item].name }}
@@ -43,5 +44,11 @@
register: agent_name

- name: Approve agents
shell: oc -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} patch agent {{ agent_name.stdout.split(' ')[0] }} -p '{"spec":{"approved":true,"hostname":"compute-{{item}}.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}","installerArgs":"[\"--append-karg\",\"rd.neednet=1\", \"--append-karg\", \"ip={{ hypershift.agents_parms.zvm_parameters.nodes[item].osa.ip }}::{{ hypershift.agents_parms.zvm_parameters.gateway }}:{{ hypershift.agents_parms.zvm_parameters.subnetmask }}:compute-{{ item }}.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}:{{ hypershift.agents_parms.zvm_parameters.nodes[item].osa.ifname }}:none\", \"--append-karg\", \"nameserver={{ hypershift.agents_parms.zvm_parameters.nameserver }}\", \"--append-karg\",\"rd.znet=qeth,{{ hypershift.agents_parms.zvm_parameters.nodes[item].osa.id }},layer2=1\",\"--append-karg\", {% if hypershift.agents_parms.zvm_parameters.disk_type | lower != 'fcp' %}\"rd.dasd=0.0.{{ hypershift.agents_parms.zvm_parameters.nodes[item].dasd.disk_id }}\"{% else %}\"rd.zfcp={{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].fcp}},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].wwpn }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].id }}\"{% endif %}]"}}' --type merge
shell: oc -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} patch agent {{ agent_name.stdout.split(' ')[0] }} -p '{"spec":{"approved":true,"hostname":"compute-{{ item }}.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}"}}' --type merge
when: "{{ hypershift.mce.version != '2.4' }}"

- name: Approve agents and patch installer args
shell: oc -n {{ hypershift.hcp.clusters_namespace }}-{{ hypershift.hcp.hosted_cluster_name }} patch agent {{ agent_name.stdout.split(' ')[0] }} -p '{"spec":{"approved":true,"hostname":"compute-{{item}}.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}","installerArgs":"[\"--append-karg\",\"rd.neednet=1\", \"--append-karg\", \"ip={{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.ip }}::{{ hypershift.agents_parms.zvm_parameters.gateway }}:{{ hypershift.agents_parms.zvm_parameters.subnetmask }}:compute-{{ item }}.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}:{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.ifname }}:none\", \"--append-karg\", \"nameserver={{ hypershift.agents_parms.zvm_parameters.nameserver }}\", \"--append-karg\",\"rd.znet={{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.nettype }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.subchannels }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.options }}\",\"--append-karg\", {% if hypershift.agents_parms.zvm_parameters.disk_type | lower != 'fcp' %}\"rd.dasd=0.0.{{ hypershift.agents_parms.zvm_parameters.nodes[item].dasd.disk_id }}\"{% else %}\"rd.zfcp={{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].fcp}},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].wwpn }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].id }}\"{% endif %}]"}}' --type merge
when: "{{ hypershift.mce.version == '2.4' }}"
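The long `installerArgs` patch above is mostly string assembly. A hedged Python sketch of how the dracut `ip=` argument inside it is composed, with hypothetical addresses and hostname standing in for the templated values:

```python
def ip_karg(ip, gateway, netmask, hostname, ifname):
    """Dracut ip= argument as assembled in the Approve agents task:
    ip=<ip>::<gateway>:<netmask>:<hostname>:<ifname>:none"""
    return f"ip={ip}::{gateway}:{netmask}:{hostname}:{ifname}:none"

# Hypothetical values mirroring the templated variables.
arg = ip_karg("192.168.10.21", "192.168.10.1", "255.255.255.0",
              "compute-0.hcp.example.com", "encbdf0")
print(arg)
# ip=192.168.10.21::192.168.10.1:255.255.255.0:compute-0.hcp.example.com:encbdf0:none
```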


7 changes: 6 additions & 1 deletion roles/boot_zvm_nodes_hypershift/templates/boot_nodes.py
@@ -13,17 +13,22 @@
parser.add_argument("--kernel", type=str, help="kernel URI", required=True, default='')
parser.add_argument("--cmdline", type=str, help="kernel cmdline", required=True, default='')
parser.add_argument("--initrd", type=str, help="Initrd URI", required=True, default='')
parser.add_argument("--network", type=str, help="Network mode for zvm nodes Supported modes: OSA, vswitch ", required=True)

args = parser.parse_args()

parameters = {
'transfer-buffer-size': 8000
}

interfaces=[]
if args.network.lower() == 'osa':
interfaces=[{ "type": "osa", "id": "{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.subchannels.split(',') | map('regex_replace', '0.0.', '') | join(',') }}"}]

guest_parameters = {
"boot_method": "network",
"storage_volumes" : [],
"ifaces" : [],
"ifaces" : interfaces,
"netboot": {
"cmdline": args.cmdline,
"kernel_uri": args.kernel,
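The Jinja filter chain in `boot_nodes.py` strips the `0.0.` bus prefix from each subchannel before handing the device IDs to the OSA interface definition. A rough Python equivalent (note the template uses `regex_replace`, where `.` is a wildcard; plain string replacement matches the intent for these values):

```python
def osa_iface_id(subchannels: str) -> str:
    """Approximate the Jinja chain:
    subchannels.split(',') | map('regex_replace', '0.0.', '') | join(',')"""
    return ",".join(ch.replace("0.0.", "") for ch in subchannels.split(","))

print(osa_iface_id("0.0.bdf0,0.0.bdf1,0.0.bdf2"))
# bdf0,bdf1,bdf2
```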
4 changes: 3 additions & 1 deletion roles/create_agentserviceconfig_hypershift/tasks/main.yaml
@@ -22,7 +22,9 @@
- name: Wait for Agent Service Deployment to be Succeeded
shell: oc get AgentServiceConfig agent -o json | jq -r '.status|.conditions[]|.status' | grep False | wc -l
register: asc
until: asc.stdout == '0'
until:
- asc.stdout == '0'
- asc.stderr == ''
retries: 60
delay: 20

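The `until`/`retries`/`delay` combination re-runs the check task until both conditions hold. Its semantics can be sketched in Python; the `asc_ready` stub below stands in for the real `oc`/`jq` pipeline and its canned results are hypothetical:

```python
import time

def wait_until(check, retries=60, delay=20):
    """Sketch of Ansible's until/retries/delay: re-run `check`
    up to `retries` times, sleeping `delay` seconds between tries."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False

# Stub: "ready" once stdout is '0' (no False conditions) and stderr is empty.
results = iter([("1", ""), ("0", "")])
def asc_ready():
    stdout, stderr = next(results)
    return stdout == "0" and stderr == ""

print(wait_until(asc_ready, retries=5, delay=0))
# True
```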
4 changes: 2 additions & 2 deletions roles/create_bastion_hypershift/tasks/main.yaml
@@ -48,15 +48,15 @@
chmod 0600 /root/.ssh/authorized_keys
- name: Create qemu image for bastion
command: qemu-img create -f qcow2 /home/libvirt/images/{{ hypershift.hcp.hosted_cluster_name }}-bastion.qcow2 100G
command: qemu-img create -f qcow2 {{ hypershift.agents_parms.storage.pool_path }}{{ hypershift.hcp.hosted_cluster_name }}-bastion.qcow2 100G

- name: Create bastion
shell: |
virt-install \
--name {{ hypershift.hcp.hosted_cluster_name }}-bastion \
--memory 4096 \
--vcpus sockets=1,cores=4,threads=1 \
--disk /home/libvirt/images/{{ hypershift.hcp.hosted_cluster_name }}-bastion.qcow2,format=qcow2,bus=virtio,cache=none \
--disk {{ hypershift.agents_parms.storage.pool_path }}{{ hypershift.hcp.hosted_cluster_name }}-bastion.qcow2,format=qcow2,bus=virtio,cache=none \
--os-variant "rhel{{hypershift.bastion_parms.os_variant}}" \
--network network:{{ env.bridge_name }} \
--location '{{ env.file_server.protocol }}://{{ env.file_server.user + ':' + env.file_server.pass + '@' if env.file_server.protocol == 'ftp' else '' }}{{ env.file_server.ip }}{{ ':' + env.file_server.port if env.file_server.port | default('') | length > 0 else '' }}/{{ env.file_server.iso_mount_dir }}/' \
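The `--location` expression embeds FTP credentials and an optional port via inline Jinja conditionals. A Python sketch of the same URL assembly, with all server values hypothetical:

```python
def location_url(protocol, ip, iso_mount_dir, user="", password="", port=""):
    """Mirror the virt-install --location Jinja expression: credentials
    are embedded only for ftp, the port only when one is set."""
    cred = f"{user}:{password}@" if protocol == "ftp" else ""
    port_part = f":{port}" if port else ""
    return f"{protocol}://{cred}{ip}{port_part}/{iso_mount_dir}/"

print(location_url("ftp", "192.168.10.5", "rhel-iso",
                   user="ftpuser", password="secret", port="21"))
# ftp://ftpuser:secret@192.168.10.5:21/rhel-iso/
```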
2 changes: 2 additions & 0 deletions roles/create_hcp_InfraEnv_hypershift/tasks/main.yaml
@@ -60,7 +60,9 @@
--base-domain={{ hypershift.hcp.basedomain }}
--api-server-address=api.{{ hypershift.hcp.hosted_cluster_name }}.{{ hypershift.hcp.basedomain }}
--ssh-key ~/.ssh/{{ env.ansible_key_name }}.pub
{% if hypershift.hcp.high_availabiliy == false %}
--control-plane-availability-policy "SingleReplica"
{% endif %}
--infra-availability-policy "SingleReplica"
--release-image=quay.io/openshift-release-dev/ocp-release:{{ hypershift.hcp.ocp_release }}
{% set release_image = lookup('env', 'HCP_RELEASE_IMAGE') %}
2 changes: 1 addition & 1 deletion roles/delete_resources_bastion_hypershift/tasks/main.yaml
@@ -10,7 +10,7 @@
k8s_info:
api_version: v1
kind: Node
kubeconfig: "/root/ansible_workdir/{{ hypershift.hcp.hosted_cluster_name }}-kubeconfig"
kubeconfig: "/root/ansible_workdir/hcp-kubeconfig"
register: nodes
until: nodes.resources | length == 0
retries: 30
@@ -53,20 +53,20 @@
delay: 10

- name: Create Kubeconfig for Hosted Cluster
shell: hcp create kubeconfig --namespace {{ hypershift.hcp.clusters_namespace }} --name {{ hypershift.hcp.hosted_cluster_name }} > /root/ansible_workdir/{{ hypershift.hcp.hosted_cluster_name }}-kubeconfig
shell: hcp create kubeconfig --namespace {{ hypershift.hcp.clusters_namespace }} --name {{ hypershift.hcp.hosted_cluster_name }} > /root/ansible_workdir/hcp-kubeconfig

- name: Wait for Worker Nodes to Join
k8s_info:
api_version: v1
kind: Node
kubeconfig: "/root/ansible_workdir/{{ hypershift.hcp.hosted_cluster_name }}-kubeconfig"
kubeconfig: "/root/ansible_workdir/hcp-kubeconfig"
register: nodes
until: nodes.resources | length == {{ hypershift.agents_parms.agents_count }}
retries: 300
delay: 10

- name: Wait for Worker nodes to be Ready
shell: oc get no --kubeconfig=/root/ansible_workdir/{{ hypershift.hcp.hosted_cluster_name }}-kubeconfig --no-headers | grep -i 'NotReady' | wc -l
shell: oc get no --kubeconfig=/root/ansible_workdir/hcp-kubeconfig --no-headers | grep -i 'NotReady' | wc -l
register: node_status
until: node_status.stdout == '0'
retries: 50
@@ -1 +1 @@
rd.neednet=1 console=ttysclp0 coreos.live.rootfs_url=http://{{ hypershift.bastion_hypershift }}:8080/rootfs.img ip={{ hypershift.agents_parms.zvm_parameters.nodes[item].osa.ip }}::{{ hypershift.agents_parms.zvm_parameters.gateway }}:{{ hypershift.agents_parms.zvm_parameters.subnetmask }}::{{ hypershift.agents_parms.zvm_parameters.nodes[item].osa.ifname }}:none nameserver={{ hypershift.agents_parms.zvm_parameters.nameserver }} zfcp.allow_lun_scan=0 rd.znet=qeth,{{ hypershift.agents_parms.zvm_parameters.nodes[item].osa.id }},layer2=1 {% if hypershift.agents_parms.zvm_parameters.disk_type | lower != 'fcp' %}rd.dasd=0.0.{{ hypershift.agents_parms.zvm_parameters.nodes[item].dasd.disk_id }}{% else %}rd.zfcp={{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].fcp}},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].wwpn }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].id }} {% endif %} random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8"
rd.neednet=1 console=ttysclp0 coreos.live.rootfs_url=http://{{ hypershift.bastion_hypershift }}:8080/rootfs.img ip={{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.ip }}::{{ hypershift.agents_parms.zvm_parameters.gateway }}:{{ hypershift.agents_parms.zvm_parameters.subnetmask }}::{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.ifname }}:none nameserver={{ hypershift.agents_parms.zvm_parameters.nameserver }} zfcp.allow_lun_scan=0 rd.znet={{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.nettype }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.subchannels }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].interface.options }} {% if hypershift.agents_parms.zvm_parameters.disk_type | lower != 'fcp' %}rd.dasd=0.0.{{ hypershift.agents_parms.zvm_parameters.nodes[item].dasd.disk_id }}{% else %}rd.zfcp={{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].fcp}},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].paths[0].wwpn }},{{ hypershift.agents_parms.zvm_parameters.nodes[item].lun[0].id }} {% endif %} random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8"
6 changes: 3 additions & 3 deletions roles/wait_for_hc_to_complete_hypershift/tasks/main.yaml
@@ -1,7 +1,7 @@
---

- name: Wait for All Cluster Operators to be available
shell: oc get co --kubeconfig=/root/ansible_workdir/{{ hypershift.hcp.hosted_cluster_name }}-kubeconfig --no-headers| awk '$3 != "True" {print $1}' | wc -l
shell: oc get co --kubeconfig=/root/ansible_workdir/hcp-kubeconfig --no-headers| awk '$3 != "True" {print $1}' | wc -l
register: co
until: co.stdout == '0'
retries: 60
@@ -15,7 +15,7 @@
delay: 15

- name: Get URL for Webconsole of Hosted Cluster
shell: oc whoami --show-console --kubeconfig=/root/ansible_workdir/{{ hypershift.hcp.hosted_cluster_name }}-kubeconfig
shell: oc whoami --show-console --kubeconfig=/root/ansible_workdir/hcp-kubeconfig
register: console_url

- name: Get Password for Hosted Cluster
@@ -32,7 +32,7 @@
dest: /root/ansible_workdir/kubeadmin-password

- name: Get api server of Hosted Cluster
shell: "cat /root/ansible_workdir/{{ hypershift.hcp.hosted_cluster_name }}-kubeconfig | grep -i server:"
shell: "cat /root/ansible_workdir/hcp-kubeconfig | grep -i server:"
register: api_server

- name: Display Login Credentials