
Starting with version 1.8.0 linchpin destroy fails on libvirt target #1301

Closed
mcornea opened this issue Aug 23, 2019 · 5 comments · Fixed by #1304
Comments

mcornea (Contributor) commented Aug 23, 2019

Describe the bug

(rhhi-qe-venv) [root@sealusa2 linchpin_workspace]# linchpin -v --template-data @hooks/ansible/rhhi-setup/extravars.yaml destroy libvirt-new
 [WARNING]: Unable to parse /home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-core/s2i/linchpin-provisioner/linchpin_workspace/localhost as an inventory source

 [WARNING]: No inventory was parsed, only implicit localhost is available


PLAY [schema check and Pre Provisioning Activities on topology_file] *************************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [common : assign async value] ***********************************************************************************************************************************************************************************************************
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [common : declare async_types array] ****************************************************************************************************************************************************************************************************
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [common : output vars] ******************************************************************************************************************************************************************************************************************
ok: [localhost] => {"ansible_facts": {"topology_outputs": {}}, "changed": false}

PLAY [Provisioning libvirt resources] ********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [libvirt : Gather facts] ****************************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [libvirt : declaring output vars] *******************************************************************************************************************************************************************************************************
ok: [localhost] => {"ansible_facts": {"topology_outputs_libvirt_nodes": []}, "changed": false}

TASK [libvirt : Initiating libvirt resource group] *******************************************************************************************************************************************************************************************
included: /home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/provision/roles/libvirt/tasks/provision_resource_group.yml for localhost

TASK [libvirt : gather resource definitions of current group] ********************************************************************************************************************************************************************************
included: /home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/provision/roles/libvirt/tasks/provision_res_defs.yml for localhost

TASK [libvirt : Get host from uri] ***********************************************************************************************************************************************************************************************************
ok: [localhost] => {"ansible_facts": {"uri_hostname": "localhost"}, "changed": false}

TASK [libvirt : set resource_type] ***********************************************************************************************************************************************************************************************************
ok: [localhost] => {"ansible_facts": {"resource_type": "libvirt_node"}, "changed": false}

TASK [libvirt : provision libvirt network] ***************************************************************************************************************************************************************************************************
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [libvirt : teardown libvirt network] ****************************************************************************************************************************************************************************************************
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [libvirt : Set the resource node name] **************************************************************************************************************************************************************************************************
ok: [localhost] => {"ansible_facts": {"libvirt_resource_name": "rhhi-node-master"}, "changed": false}

TASK [libvirt : Create name using uhash value] ***********************************************************************************************************************************************************************************************
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [libvirt : provision libvirt node] ******************************************************************************************************************************************************************************************************
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [libvirt : teardown libvirt node] *******************************************************************************************************************************************************************************************************
included: /home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/provision/roles/libvirt/tasks/teardown_libvirt_node.yml for localhost

TASK [libvirt : set_fact] ********************************************************************************************************************************************************************************************************************
 [WARNING]: The loop variable 'item' is already in use. You should set the `loop_var` value in the `loop_control` option for the task to something else to avoid variable collisions and unexpected behavior.

ok: [localhost] => (item=0) => {"ansible_facts": {"res_count": [0]}, "ansible_loop_var": "item", "changed": false, "item": "0"}
ok: [localhost] => (item=1) => {"ansible_facts": {"res_count": [0, 1]}, "ansible_loop_var": "item", "changed": false, "item": "1"}
ok: [localhost] => (item=2) => {"ansible_facts": {"res_count": [0, 1, 2]}, "ansible_loop_var": "item", "changed": false, "item": "2"}

TASK [libvirt : halt node] *******************************************************************************************************************************************************************************************************************
skipping: [localhost] => (item=[{u'count': 3, u'name': u'rhhi-node-master', u'name_separator': u'-', u'storage': [{u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd0', u'device': u'sdb', u'cache': u'unsafe', u'size': 8}, {u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd1', u'device': u'sdc', u'cache': u'unsafe', u'size': 8}, {u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd2', u'device': u'sdd', u'cache': u'unsafe', u'size': 8}, {u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd3', u'device': u'sde', u'cache': u'unsafe', u'size': 8}], u'uri': u'qemu:///system', u'ssh_key': u'id_rsa', u'cpu_mode': u'host-passthrough', u'vcpus': 24, u'additional_storage': u'42G', u'role': u'libvirt_node', u'memory': 36864, u'disk_type': u'virtio_scsi', u'arch': u'x86_64', u'networks': [{u'name': u'baremetal'}, {u'dhcp': False, u'name': u'provisioning'}], u'image_src': u'https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2'}, 0, u'-'])  => {"ansible_loop_var": "instance", "changed": false, "instance": [{"additional_storage": "42G", "arch": "x86_64", "count": 3, "cpu_mode": "host-passthrough", "disk_type": "virtio_scsi", "image_src": "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2", "memory": 36864, "name": "rhhi-node-master", "name_separator": "-", "networks": [{"name": "baremetal"}, {"dhcp": false, "name": "provisioning"}], "role": "libvirt_node", "ssh_key": "id_rsa", "storage": [{"cache": "unsafe", "device": "sdb", "disk_type": "virtio_scsi", "name": "osd0", "size": 8, "units": "G"}, {"cache": "unsafe", "device": "sdc", "disk_type": "virtio_scsi", "name": "osd1", "size": 8, "units": "G"}, {"cache": "unsafe", "device": "sdd", "disk_type": "virtio_scsi", "name": "osd2", "size": 8, "units": "G"}, {"cache": "unsafe", "device": "sde", "disk_type": "virtio_scsi", "name": "osd3", "size": 8, "units": "G"}], "uri": "qemu:///system", "vcpus": 24}, 0, "-"], "skip_reason": "Conditional result was False"}
skipping: [localhost] => (item=[{u'count': 3, u'name': u'rhhi-node-master', u'name_separator': u'-', u'storage': [{u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd0', u'device': u'sdb', u'cache': u'unsafe', u'size': 8}, {u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd1', u'device': u'sdc', u'cache': u'unsafe', u'size': 8}, {u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd2', u'device': u'sdd', u'cache': u'unsafe', u'size': 8}, {u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd3', u'device': u'sde', u'cache': u'unsafe', u'size': 8}], u'uri': u'qemu:///system', u'ssh_key': u'id_rsa', u'cpu_mode': u'host-passthrough', u'vcpus': 24, u'additional_storage': u'42G', u'role': u'libvirt_node', u'memory': 36864, u'disk_type': u'virtio_scsi', u'arch': u'x86_64', u'networks': [{u'name': u'baremetal'}, {u'dhcp': False, u'name': u'provisioning'}], u'image_src': u'https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2'}, 1, u'-'])  => {"ansible_loop_var": "instance", "changed": false, "instance": [{"additional_storage": "42G", "arch": "x86_64", "count": 3, "cpu_mode": "host-passthrough", "disk_type": "virtio_scsi", "image_src": "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2", "memory": 36864, "name": "rhhi-node-master", "name_separator": "-", "networks": [{"name": "baremetal"}, {"dhcp": false, "name": "provisioning"}], "role": "libvirt_node", "ssh_key": "id_rsa", "storage": [{"cache": "unsafe", "device": "sdb", "disk_type": "virtio_scsi", "name": "osd0", "size": 8, "units": "G"}, {"cache": "unsafe", "device": "sdc", "disk_type": "virtio_scsi", "name": "osd1", "size": 8, "units": "G"}, {"cache": "unsafe", "device": "sdd", "disk_type": "virtio_scsi", "name": "osd2", "size": 8, "units": "G"}, {"cache": "unsafe", "device": "sde", "disk_type": "virtio_scsi", "name": "osd3", "size": 8, "units": "G"}], "uri": "qemu:///system", "vcpus": 24}, 1, "-"], "skip_reason": "Conditional result was False"}
skipping: [localhost] => (item=[{u'count': 3, u'name': u'rhhi-node-master', u'name_separator': u'-', u'storage': [{u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd0', u'device': u'sdb', u'cache': u'unsafe', u'size': 8}, {u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd1', u'device': u'sdc', u'cache': u'unsafe', u'size': 8}, {u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd2', u'device': u'sdd', u'cache': u'unsafe', u'size': 8}, {u'units': u'G', u'disk_type': u'virtio_scsi', u'name': u'osd3', u'device': u'sde', u'cache': u'unsafe', u'size': 8}], u'uri': u'qemu:///system', u'ssh_key': u'id_rsa', u'cpu_mode': u'host-passthrough', u'vcpus': 24, u'additional_storage': u'42G', u'role': u'libvirt_node', u'memory': 36864, u'disk_type': u'virtio_scsi', u'arch': u'x86_64', u'networks': [{u'name': u'baremetal'}, {u'dhcp': False, u'name': u'provisioning'}], u'image_src': u'https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2'}, 2, u'-'])  => {"ansible_loop_var": "instance", "changed": false, "instance": [{"additional_storage": "42G", "arch": "x86_64", "count": 3, "cpu_mode": "host-passthrough", "disk_type": "virtio_scsi", "image_src": "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2", "memory": 36864, "name": "rhhi-node-master", "name_separator": "-", "networks": [{"name": "baremetal"}, {"dhcp": false, "name": "provisioning"}], "role": "libvirt_node", "ssh_key": "id_rsa", "storage": [{"cache": "unsafe", "device": "sdb", "disk_type": "virtio_scsi", "name": "osd0", "size": 8, "units": "G"}, {"cache": "unsafe", "device": "sdc", "disk_type": "virtio_scsi", "name": "osd1", "size": 8, "units": "G"}, {"cache": "unsafe", "device": "sdd", "disk_type": "virtio_scsi", "name": "osd2", "size": 8, "units": "G"}, {"cache": "unsafe", "device": "sde", "disk_type": "virtio_scsi", "name": "osd3", "size": 8, "units": "G"}], "uri": "qemu:///system", "vcpus": 24}, 2, "-"], "skip_reason": "Conditional result was False"}

TASK [libvirt : get XML definition of vm] ****************************************************************************************************************************************************************************************************
skipping: [localhost] => (item=[u'rhhi-node-master', u'qemu:///system'])  => {"ansible_loop_var": "instance", "changed": false, "instance": ["rhhi-node-master", "qemu:///system"], "skip_reason": "Conditional result was False"}

TASK [libvirt : undefine node] ***************************************************************************************************************************************************************************************************************
skipping: [localhost] => (item=[u'count', u'name', u'name_separator', u'storage', u'uri', u'ssh_key', u'cpu_mode', u'vcpus', u'additional_storage', u'role', u'memory', u'disk_type', u'arch', u'networks', u'image_src'])  => {"ansible_loop_var": "instance", "changed": false, "instance": ["count", "name", "name_separator", "storage", "uri", "ssh_key", "cpu_mode", "vcpus", "additional_storage", "role", "memory", "disk_type", "arch", "networks", "image_src"], "skip_reason": "Conditional result was False"}
Traceback (most recent call last):
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/bin/linchpin", line 10, in <module>
    sys.exit(runcli())
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/click/decorators.py", line 64, in new_func
    return ctx.invoke(f, obj, *args, **kwargs)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/shell/__init__.py", line 387, in destroy
    env_vars=env_vars)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/cli/__init__.py", line 518, in lp_destroy
    tx_id=tx_id)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/cli/__init__.py", line 585, in _execute_action
    run_id=run_id)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/cli/__init__.py", line 760, in _execute
    tx_id=tx_id)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/__init__.py", line 631, in do_action
    console=ansible_console)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/__init__.py", line 938, in _invoke_playbooks
    console=console)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/__init__.py", line 898, in _find_n_run_pb
    use_shell=use_shell)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/ansible_runner.py", line 285, in ansible_runner
    return_code = pbex.run()
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/ansible/executor/playbook_executor.py", line 169, in run
    result = self._tqm.run(play=play)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py", line 249, in run
    play_return = strategy.run(iterator, play_context)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/ansible/plugins/strategy/linear.py", line 278, in run
    task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/ansible/vars/manager.py", line 418, in get_vars
    all_vars['ansible_delegated_vars'], all_vars['_ansible_loop_cache'] = self._get_delegated_vars(play, task, all_vars)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/ansible/vars/manager.py", line 509, in _get_delegated_vars
    loader=self._loader, fail_on_undefined=True, convert_bare=False)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/ansible/utils/listify.py", line 33, in listify_lookup_plugin_terms
    terms = templar.template(terms.strip(), convert_bare=convert_bare, fail_on_undefined=fail_on_undefined)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/ansible/template/__init__.py", line 539, in template
    disable_lookups=disable_lookups,
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/ansible/template/__init__.py", line 804, in do_template
    res = j2_concat(rf)
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/ansible/template/native_helpers.py", line 29, in ansible_native_concat
    head = list(islice(nodes, 2))
  File "<template>", line 12, in root
  File "/home/rhhi-ci/jenkins/workspace/rhhi.next-virt-customized/rhhi-qe-venv/lib/python2.7/site-packages/linchpin/FilterUtils/FilterUtils.py", line 172, in get_libvirt_files
    if len(result['stdout']) > 0:
KeyError: 'stdout'
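
The KeyError comes from `get_libvirt_files` in linchpin's FilterUtils.py: it indexes `result['stdout']` on a registered task result, but every teardown task above was skipped, and a skipped task's result dict carries `skipped: true` and no `stdout` key at all. A minimal sketch of a defensive check, for illustration only (the actual fix is whatever landed in #1304):

```python
def has_stdout(result):
    """Return True only when a registered Ansible result produced stdout.

    Skipped tasks register a dict like {'skipped': True, ...} with no
    'stdout' key, so result['stdout'] raises the KeyError seen above.
    """
    if result.get('skipped'):
        return False
    # .get() with a default avoids indexing a missing key
    return len(result.get('stdout', '')) > 0
```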


(rhhi-qe-venv) [root@sealusa2 linchpin_workspace]# cat PinFile 
#jinja2:lstrip_blocks: True
---
libvirt-network:
  topology: libvirt-network.yml
  hooks:
    predestroy:
      - name: rhhi-setup
        type: ansible
        context: True
        actions:
          - playbook: requirements.yaml

cfgs:
  libvirt:
    __IP__: name
    __ADDRESS__: ip

libvirt-new:
  topology: libvirt-new.yml
  layout: libvirt-new.yml
  hooks:
    postdestroy:
      - name: rhhi-setup
        type: ansible
        context: True
        actions:
          - playbook: vbmc.yaml
            extra_vars: { "action": "cleanup" }
    postup:
      - name: rhhi-setup
        type: ansible
        context: True
        actions:
          - playbook: dns_update.yaml
            vars: extravars.yaml
          - playbook: nic_adjust.yaml
            vars: extravars.yaml
          - playbook: client_ssh_key.yaml
          - playbook: vbmc.yaml
            extra_vars: { "action": "install" }

beaker-specific-host:
  topology: beaker-specific-host.yml
  layout: beaker-layout.yml
  hooks:
    postup:
      - name: rhhi-setup
        type: ansible
        context: True
        actions:
          - playbook: dev-scripts-bm.yaml
            vars: extravars.yaml

beaker-qe-dedicated:
  topology: beaker-qe-dedicated.yml
  layout: beaker-layout.yml
  hooks:
    postup:
      - name: rhhi-setup
        type: ansible
        context: True
        actions:
          - playbook: dev-scripts-bm.yaml
            vars: extravars.yaml

beaker-ci-pool:
  topology: beaker-ci-pool.yml
  layout: beaker-layout.yml


(rhhi-qe-venv) [root@sealusa2 linchpin_workspace]# cat topologies/libvirt-new.yml 
---
topology_name: rhhi
resource_groups:
  - resource_group_name: rhhi-node-master
    resource_group_type: libvirt
    resource_definitions:
      - role: libvirt_node
        name: rhhi-node-master
        uri: qemu:///system
        count: 3
        image_src: https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
        cpu_mode: host-passthrough
        memory: {{ node_memory }}
        vcpus: {{ node_vcpus }}
        arch: x86_64
        ssh_key: id_rsa
        additional_storage: 42G
        disk_type: virtio_scsi
        name_separator: '-'
        networks:
          - name: baremetal
          - name: provisioning
            dhcp: false
        storage:
          - name: osd0
            size: {{ osd_disk_size }}
            cache: unsafe
            units: G
            disk_type: virtio_scsi
            device: sdb
          - name: osd1
            size: {{ osd_disk_size }}
            cache: unsafe
            units: G
            disk_type: virtio_scsi
            device: sdc
          - name: osd2
            size: {{ osd_disk_size }}
            cache: unsafe
            units: G
            disk_type: virtio_scsi
            device: sdd
          - name: osd3
            size: {{ osd_disk_size }}
            cache: unsafe
            units: G
            disk_type: virtio_scsi
            device: sde
  - resource_group_name: rhhi-node-worker
    resource_group_type: libvirt
    resource_definitions:
      - role: libvirt_node
        name: rhhi-node-worker
        uri: qemu:///system
        count: {{ node_count|int - 3 }} # If this renders <= 0, linchpin up will likely fail (see the note after this file)
        memory: {{ node_memory }}
        vcpus: {{ node_vcpus }}
        image_src: https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
        cpu_mode: host-passthrough
        arch: x86_64
        ssh_key: id_rsa
        additional_storage: 42G
        disk_type: virtio_scsi
        name_separator: '-'
        networks:
          - name: baremetal
          - name: provisioning
            dhcp: false
        storage:
          - name: osd0
            size: {{ osd_disk_size }}
            cache: unsafe
            units: G
            disk_type: virtio_scsi
            device: sdb
          - name: osd1
            size: {{ osd_disk_size }}
            cache: unsafe
            units: G
            disk_type: virtio_scsi
            device: sdc
          - name: osd2
            size: {{ osd_disk_size }}
            cache: unsafe
            units: G
            disk_type: virtio_scsi
            device: sdd
          - name: osd3
            size: {{ osd_disk_size }}
            cache: unsafe
            units: G
            disk_type: virtio_scsi
            device: sde
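
A hypothetical guard for the worker count flagged in the inline comment above: Jinja2's `max` filter can clamp the expression so it never renders negative (whether linchpin then accepts `count: 0` is untested):

```
count: {{ [node_count|int - 3, 0] | max }}
```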

(rhhi-qe-venv) [root@sealusa2 linchpin_workspace]# cat layouts/libvirt-new.yml 
---
inventory_layout:
  vars:
    hostname: __IP__
    ansible_ssh_host: __ADDRESS__
    ansible_ssh_user: {{ provisionhost_user }}
    ansible_python_interpreter: '/usr/libexec/platform-python'
    ansible_ssh_common_args: '"-o StrictHostKeyChecking=no"'
  hosts:
    master:
      count: 3
      host_groups:
        - master
        - openshift
    worker:
      count: {{ node_count|int - 3 }}
      host_groups:
        - worker
        - openshift
    openshift:
      count: {{ node_count }}
      host_groups:
        - openshift

To Reproduce
Steps to reproduce the behavior:

  1. Run linchpin destroy with version 1.8.0 and the PinFile above; the exact command is shown below.
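
A minimal reproduction sketch, taken from the command at the top of this report; the `pip` pin is an assumption about how the virtualenv was prepared:

```
pip install 'linchpin==1.8.0'
linchpin -v --template-data @hooks/ansible/rhhi-setup/extravars.yaml destroy libvirt-new
```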

Expected behavior
linchpin destroy completes without errors, as it does with earlier releases.

Additional context
The same command works with version 1.7.6.2.
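
Until a fixed release is available, pinning back to the last known-good version should work around the failure (an assumption based on this report, not verified here):

```
pip install 'linchpin==1.7.6.2'
```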

mcornea changed the title from "Starting with 1.8.0 linchpin destroy fails on libvirt target" to "Starting with version 1.8.0 linchpin destroy fails on libvirt target" on Aug 23, 2019
mcornea (Contributor, Author) commented Aug 23, 2019

/cc @14rcole: this is a regression introduced by the 1.8.0 release.

14rcole (Contributor) commented Aug 23, 2019

@mcornea I'll take a look

14rcole (Contributor) commented Aug 23, 2019

@mcornea I identified the problem code, but I need to ask someone a question before I make changes to it. He's on PTO today. If we push a hotfix release on Monday will that work for you?

mcornea (Contributor, Author) commented Aug 23, 2019

> @mcornea I identified the problem code, but I need to ask someone a question before I make changes to it. He's on PTO today. If we push a hotfix release on Monday will that work for you?

Sure, np

14rcole (Contributor) commented Aug 26, 2019

@mcornea We just sent out a release with a fix that should resolve your issue.
