Switch to using cluster-hosted Ironic for worker deployments #659
Changes from all commits
@@ -3,7 +3,7 @@
 set -ex

 source logging.sh
-#source common.sh
+source common.sh
 eval "$(go env)"

 # Get the latest bits for baremetal-operator
@@ -12,10 +12,36 @@ export BMOPATH="$GOPATH/src/github.com/metal3-io/baremetal-operator"
# Make a local copy of the baremetal-operator code to make changes
cp -r $BMOPATH/deploy ocp/.
sed -i 's/namespace: .*/namespace: openshift-machine-api/g' ocp/deploy/role_binding.yaml
cp $SCRIPTDIR/operator_ironic.yaml ocp/deploy
cp $SCRIPTDIR/ironic_bmo_configmap.yaml ocp/deploy
sed -i "s#__RHCOS_IMAGE_URL__#${RHCOS_IMAGE_URL}#" ocp/deploy/ironic_bmo_configmap.yaml

# Kill the dnsmasq container on the host since it is performing DHCP and doesn't
# allow our pod in openshift to take over.
for name in dnsmasq ironic-inspector ; do
    sudo podman ps | grep -w "$name$" && sudo podman stop $name
done

# Start deploying on the new cluster
oc --config ocp/auth/kubeconfig apply -f ocp/deploy/service_account.yaml --namespace=openshift-machine-api
oc --config ocp/auth/kubeconfig apply -f ocp/deploy/role.yaml --namespace=openshift-machine-api
oc --config ocp/auth/kubeconfig apply -f ocp/deploy/role_binding.yaml
oc --config ocp/auth/kubeconfig apply -f ocp/deploy/crds/metal3_v1alpha1_baremetalhost_crd.yaml
oc --config ocp/auth/kubeconfig apply -f ocp/deploy/operator.yaml --namespace=openshift-machine-api

oc --config ocp/auth/kubeconfig apply -f ocp/deploy/ironic_bmo_configmap.yaml --namespace=openshift-machine-api
# I'm leaving this as is for debugging, but we could easily generate a random password here.
oc --config ocp/auth/kubeconfig delete secret mariadb-password --namespace=openshift-machine-api || true
oc --config ocp/auth/kubeconfig create secret generic mariadb-password --from-literal password=password --namespace=openshift-machine-api

oc --config ocp/auth/kubeconfig adm --as system:admin policy add-scc-to-user privileged system:serviceaccount:openshift-machine-api:baremetal-operator
oc --config ocp/auth/kubeconfig apply -f ocp/deploy/operator_ironic.yaml -n openshift-machine-api

# Sadly I don't see a way to get this from the json..
POD_NAME=$(oc --config ocp/auth/kubeconfig get pods -n openshift-machine-api | grep metal3-baremetal-operator | cut -f 1 -d ' ')

# Make sure our pod is running.
echo "Waiting for baremetal-operator pod to become ready"
while [ $(oc --config ocp/auth/kubeconfig get pod $POD_NAME -n openshift-machine-api -o json | jq .status.phase) != '"Running"' ]
do
    sleep 5
done

Review comments on the readiness wait:

- Why do we need to wait? It shouldn't be required to block here.
- Yeah, it's probably optional. I haven't tested the whole thing without the wait yet, though; I'll give it a go. On the other hand, we could use this to catch errors early.
- Fair enough.
- FWIW, in my testing this was helpful when one of the containers went into CrashLoopBackOff: it meant I could start investigating once it became clear the pod was wedged and not starting correctly. I don't have a strong opinion, but for development having some verbose monitoring of the pod startup is probably no bad thing.
- Yeah, I'm 50/50 on it. I do kinda like how it can catch an issue with the pod, but for CI it's probably not useful.
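The inline comments above leave a few things open: generating a random mariadb password, getting the pod name without grep/cut, and whether the sleep loop is the right way to wait. A rough sketch of how those could look is below; it assumes the oc binary in use supports kubectl's jsonpath output and "oc wait", and it relies on the name=metal3-baremetal-operator label set in operator_ironic.yaml. None of this is part of the PR.

# Sketch only, not part of this PR.

# Generate a random mariadb password instead of the hard-coded one:
oc --config ocp/auth/kubeconfig create secret generic mariadb-password \
    --from-literal password="$(openssl rand -hex 16)" --namespace=openshift-machine-api

# Get the pod name from structured output instead of grep/cut, using the
# name=metal3-baremetal-operator label from the deployment template:
POD_NAME=$(oc --config ocp/auth/kubeconfig get pods -n openshift-machine-api \
    -l name=metal3-baremetal-operator -o jsonpath='{.items[0].metadata.name}')

# Replace the sleep loop with a bounded wait that fails fast, which may suit CI better:
oc --config ocp/auth/kubeconfig wait pod/$POD_NAME -n openshift-machine-api \
    --for=condition=Ready --timeout=300s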
ironic_bmo_configmap.yaml (new file):

@@ -0,0 +1,15 @@
kind: ConfigMap
apiVersion: v1
metadata:
  name: ironic-bmo-configmap
data:
  http_port: "6180"
  provisioning_interface: "ens3"
  provisioning_ip: "172.22.0.3/24"
  dhcp_range: "172.22.0.10,172.22.0.100"
  deploy_kernel_url: "http://172.22.0.3:6180/images/ironic-python-agent.kernel"
  deploy_ramdisk_url: "http://172.22.0.3:6180/images/ironic-python-agent.initramfs"
  ironic_endpoint: "http://172.22.0.3:6385/v1/"
  ironic_inspector_endpoint: "http://172.22.0.3:5050/v1/"
  cache_url: "http://172.22.0.1/images"
  rhcos_image_url: __RHCOS_IMAGE_URL__
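Since __RHCOS_IMAGE_URL__ is filled in by the sed in the script above before this ConfigMap is applied, one quick way to double-check the rendered values on the cluster (a sketch, not part of the PR) is:

oc --config ocp/auth/kubeconfig get configmap ironic-bmo-configmap \
    -n openshift-machine-api -o yaml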
operator_ironic.yaml (new file):

@@ -0,0 +1,288 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metal3-baremetal-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: metal3-baremetal-operator
  template:
    metadata:
      labels:
        name: metal3-baremetal-operator
    spec:
      serviceAccountName: metal3-baremetal-operator
      hostNetwork: true
      initContainers:
        - name: ipa-downloader
          image: quay.io/metal3-io/ironic-ipa-downloader:master
          command:
            - /usr/local/bin/get-resource.sh
          imagePullPolicy: Always
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /shared
              name: ironic-data-volume
          env:
            - name: CACHEURL
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: cache_url
        - name: rhcos-downloader
          image: quay.io/openshift-metal3/rhcos-downloader:master
          command:
            - /usr/local/bin/get-resource.sh
          imagePullPolicy: Always
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /shared
              name: ironic-data-volume
          env:
            - name: RHCOS_IMAGE_URL
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: rhcos_image_url
            - name: CACHEURL
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: cache_url
        - name: static-ip-set
          image: quay.io/metal3-io/static-ip-manager:latest
          command:
            - /set-static-ip
          imagePullPolicy: Always
          securityContext:
            privileged: true
          env:
            - name: PROVISIONING_IP
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: provisioning_ip
            - name: PROVISIONING_INTERFACE
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: provisioning_interface
      containers:
        - name: baremetal-operator
          image: quay.io/metal3-io/baremetal-operator:master
          ports:
            - containerPort: 60000
              name: metrics
          command:
            - /baremetal-operator
          imagePullPolicy: Always
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "baremetal-operator"
            - name: DEPLOY_KERNEL_URL
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: deploy_kernel_url
            - name: DEPLOY_RAMDISK_URL
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: deploy_ramdisk_url
            - name: IRONIC_ENDPOINT
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: ironic_endpoint
            - name: IRONIC_INSPECTOR_ENDPOINT
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: ironic_inspector_endpoint
        - name: ironic-dnsmasq
          image: quay.io/metal3-io/ironic:master
          imagePullPolicy: Always
          securityContext:
            privileged: true
          command:
            - /bin/rundnsmasq
          volumeMounts:
            - mountPath: /shared
              name: ironic-data-volume
          env:
            - name: HTTP_PORT
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: http_port
            - name: PROVISIONING_INTERFACE
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: provisioning_interface
            - name: DHCP_RANGE
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: dhcp_range
        - name: mariadb
          image: quay.io/metal3-io/ironic:master
          imagePullPolicy: Always
          securityContext:
            privileged: true
          command:
            - /bin/runmariadb
          volumeMounts:
            - mountPath: /shared
              name: ironic-data-volume
          env:
            - name: MARIADB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-password
                  key: password
        - name: ironic-httpd
          image: quay.io/metal3-io/ironic:master
          imagePullPolicy: Always
          securityContext:
            privileged: true
          command:
            - /bin/runhttpd
          volumeMounts:
            - mountPath: /shared
              name: ironic-data-volume
          env:
            - name: HTTP_PORT
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: http_port
            - name: PROVISIONING_INTERFACE
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: provisioning_interface
        - name: ironic-conductor
          image: quay.io/metal3-io/ironic:master
          imagePullPolicy: Always
          securityContext:
            privileged: true
          command:
            - /bin/runironic-conductor
          volumeMounts:
            - mountPath: /shared
              name: ironic-data-volume
          env:
            - name: MARIADB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-password
                  key: password
            - name: HTTP_PORT
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: http_port
            - name: PROVISIONING_INTERFACE
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: provisioning_interface
        - name: ironic-api
          image: quay.io/metal3-io/ironic:master
          imagePullPolicy: Always
          securityContext:
            privileged: true
          command:
            - /bin/runironic-api
          volumeMounts:
            - mountPath: /shared
              name: ironic-data-volume
          env:
            - name: MARIADB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-password
                  key: password
            - name: HTTP_PORT
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: http_port
            - name: PROVISIONING_INTERFACE
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: provisioning_interface
        - name: ironic-exporter
          image: quay.io/metal3-io/ironic:master
          imagePullPolicy: Always
          securityContext:
            privileged: true
          command:
            - /bin/runironic-exporter
          volumeMounts:
            - mountPath: /shared
              name: ironic-data-volume
          env:
            - name: MARIADB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-password
                  key: password
            - name: HTTP_PORT
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: http_port
            - name: PROVISIONING_INTERFACE
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: provisioning_interface
        - name: ironic-inspector
          image: quay.io/metal3-io/ironic-inspector:master
          imagePullPolicy: Always
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /shared
              name: ironic-data-volume
          env:
            - name: PROVISIONING_INTERFACE
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: provisioning_interface
        - name: static-ip-refresh
          image: quay.io/metal3-io/static-ip-manager:latest
          command:
            - /refresh-static-ip
          imagePullPolicy: Always
          securityContext:
            privileged: true
          env:
            - name: PROVISIONING_IP
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: provisioning_ip
            - name: PROVISIONING_INTERFACE
              valueFrom:
                configMapKeyRef:
                  name: ironic-bmo-configmap
                  key: provisioning_interface
      volumes:
        - name: ironic-data-volume
          emptyDir: {}
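Because this one pod hosts the operator plus all of the ironic containers on the host network, it can be handy to see which container is failing when the pod wedges (e.g. the CrashLoopBackOff case mentioned in the review above). A sketch, not part of the PR, assuming the name=metal3-baremetal-operator label from the template and jsonpath support in oc:

oc --config ocp/auth/kubeconfig get pods -n openshift-machine-api \
    -l name=metal3-baremetal-operator \
    -o jsonpath='{range .items[0].status.containerStatuses[*]}{.name}{"\t"}{.ready}{"\n"}{end}'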
Review comments on the host container cleanup:

- What about the rest of the ironic containers?
- If we kill all the containers we lose the ironic database, and cleanup will not work; e.g. 'make clean' in dev-scripts will fail to remove the ironic-managed masters.
- OK, expanding the comment would help. I imagine all of this is going away shortly anyway once the bootstrap VM ironic work lands.
- Yeah, that's correct, but then we'll also need to modify 'make clean' to not rely on the terraform cleanup (until we figure out how to reimplement destroy).
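Following the suggestion to expand the comment, one way it could read (a sketch based on the discussion above, not what was merged):

# Stop only the host containers that conflict with the pod we are about to
# deploy: dnsmasq (it is serving DHCP on the provisioning network and would
# prevent the in-cluster dnsmasq from taking over) and ironic-inspector.
# The remaining host ironic containers are left running on purpose: killing
# them would lose the ironic database, and 'make clean' in dev-scripts needs
# it to remove the ironic-managed masters.
for name in dnsmasq ironic-inspector ; do
    sudo podman ps | grep -w "$name$" && sudo podman stop $name
done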