delegate_to with hosts from with_items not working correctly #14166

Closed
abadger opened this issue Jan 27, 2016 · 8 comments
Labels: bug (This issue/PR relates to a bug.)

Comments

@abadger
Contributor

abadger commented Jan 27, 2016

Fresh checkout:

$ ansible-playbook --version                                    (12:39:21)
ansible-playbook 2.1.0 (devel 6bf2f45ff5) last updated 2016/01/27 11:43:50 (GMT -700)

The following playbook:

- hosts: localhost
  vars:
    mhosts:
      - 192.168.122.160
      - 192.168.122.222
  gather_facts: False
  tasks:
    - command: hostname
      delegate_to: "{{ item }}"
      with_items: "{{ mhosts }}"

The following output is generated:

$ ansible-playbook -i ',' 2832.yml -v                                  (12:42:50)
Using /etc/ansible/ansible.cfg as config file
 [WARNING]: provided hosts list is empty, only localhost is available


PLAY ***************************************************************************

TASK [command] *****************************************************************
changed: [localhost -> 192.168.122.160] => (item=192.168.122.160) => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.001582", "end": "2016-01-27 12:43:11.328723", "item": "192.168.122.160", "rc": 0, "start": "2016-01-27 12:43:11.327141", "stderr": "", "stdout": "rhel6.lan", "stdout_lines": ["rhel6.lan"], "warnings": []}
changed: [localhost -> 192.168.122.222] => (item=192.168.122.222) => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.001696", "end": "2016-01-27 12:43:11.446695", "item": "192.168.122.222", "rc": 0, "start": "2016-01-27 12:43:11.444999", "stderr": "", "stdout": "rhel6.lan", "stdout_lines": ["rhel6.lan"], "warnings": []}

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0

The item metadata says the task was delegated to each of the hosts listed in the mhosts variable, but the stdout ("rhel6.lan" in both results) shows the command actually ran twice on the same host.

This seems related to #13880 but that issue was fixed.
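
For reference, one way to sanity-check which machine actually ran each iteration (a sketch, not part of the original report; it assumes both addresses are reachable over SSH and are passed in the inventory, e.g. -i '192.168.122.160,192.168.122.222,') is to target the hosts directly instead of delegating from localhost:

- hosts: all
  gather_facts: False
  tasks:
    - command: hostname

If this prints a different hostname for each address while the delegated loop above prints the same one twice, the delegation step is what is broken.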

@dflock
Contributor

dflock commented Mar 10, 2016

I'm seeing the same thing, but with connection: docker, with some docker containers that I added using add_host. I have a task with this on it:

  with_items:
    - 'nginx'
    - 'central'

which spits out this with -vvv:

TASK [configure : Check for dev hosts entry images (Development)] ******
task path: /home/duncan/dev/tools/cluster_provisioning/ansible/roles/central_configure/tasks/main.yml:248
ESTABLISH DOCKER CONNECTION FOR USER: ubuntu
<nginx> EXEC ['/usr/bin/docker', 'exec', '-i', u'nginx', '/bin/sh', '-c', '/bin/sh -c \'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1457583328.87-181028449536712 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1457583328.87-181028449536712 `" )\'']
<nginx> PUT /tmp/tmpbNZLSl TO /root/.ansible/tmp/ansible-tmp-1457583328.87-181028449536712/command
<nginx> EXEC ['/usr/bin/docker', 'exec', '-i', u'nginx', '/bin/sh', '-c', u'/bin/sh -c \'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1457583328.87-181028449536712/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1457583328.87-181028449536712/" > /dev/null 2>&1\'']
<nginx> EXEC ['/usr/bin/docker', 'exec', '-i', u'nginx', '/bin/sh', '-c', '/bin/sh -c \'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1457583329.52-108499712108377 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1457583329.52-108499712108377 `" )\'']
<nginx> PUT /tmp/tmpULvPv_ TO /root/.ansible/tmp/ansible-tmp-1457583329.52-108499712108377/command
<nginx> EXEC ['/usr/bin/docker', 'exec', '-i', u'nginx', '/bin/sh', '-c', u'/bin/sh -c \'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1457583329.52-108499712108377/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1457583329.52-108499712108377/" > /dev/null 2>&1\'']

So, it's essentially doing docker exec -i nginx every time, even though it then claims that it touched both containers:

changed: [127.0.0.1 -> nginx] => (item=nginx) => {"changed": true, "cmd": "grep -c '^10.129.1.9\\s\\+dev' /etc/hosts", "delta": "0:00:00.028816", "end": "2016-03-10 04:15:29.468877", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "grep -c '^10.129.1.9\\s\\+dev' /etc/hosts", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "item": "nginx", "rc": 1, "start": "2016-03-10 04:15:29.440061", "stderr": "", "stdout": "0", "stdout_lines": ["0"], "warnings": []}
changed: [127.0.0.1 -> central] => (item=central) => {"changed": true, "cmd": "grep -c '^10.129.1.9\\s\\+dev' /etc/hosts", "delta": "0:00:00.025137", "end": "2016-03-10 04:15:30.033923", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "grep -c '^10.129.1.9\\s\\+dev' /etc/hosts", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "item": "central", "rc": 1, "start": "2016-03-10 04:15:30.008786", "stderr": "", "stdout": "0", "stdout_lines": ["0"], "warnings": []}

If I flip the with_items around, so they look like this:

  with_items:
    - 'central'
    - 'nginx'

then it does the same, but it uses the central container for everything instead of the nginx one.
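
For context, the containers were added with add_host; the exact tasks are not shown in the comment, so the sketch below is only an assumption of what such a setup typically looks like (ansible_user: ubuntu matches the "ESTABLISH DOCKER CONNECTION FOR USER: ubuntu" line above):

- add_host:
    name: "{{ item }}"
    ansible_connection: docker
    ansible_user: ubuntu
  with_items:
    - 'nginx'
    - 'central'

- shell: "grep -c '^10.129.1.9\\s\\+dev' /etc/hosts"
  delegate_to: "{{ item }}"
  failed_when: false
  with_items:
    - 'nginx'
    - 'central'

With the bug present, the second delegated iteration reuses the connection from the first item, so both grep commands end up running inside the nginx container.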

@kilburn
Contributor

kilburn commented Mar 11, 2016

I'm also hitting this issue.

This should be a high-priority bug since it affects core functionality advertised by Ansible. For instance, the delegate_to + with_items combination is a recommended practice in the rolling upgrade guide, as sketched below.
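
The pattern referenced there looks roughly like this (a paraphrased sketch; the haproxy backend name and the lbservers group are illustrative, not taken from this issue):

- haproxy:
    state: disabled
    backend: myapplb
    host: "{{ inventory_hostname }}"
  delegate_to: "{{ item }}"
  with_items: "{{ groups.lbservers }}"

When the bug triggers, every iteration is delegated to the first load balancer, so the remaining load balancers never see the change.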

@bcoca
Member

bcoca commented Mar 17, 2016

Not sure whether #15024 fixes this, but it is probably related to the same part of the code (delegated vars not being recalculated per item?).

@zhangcheng

At least v2.0.2.0-0.2.rc2 has it fixed.

alexxa added a commit to RedHatQE/rhui3-automation that referenced this issue Jun 3, 2016
If you use RHUI3 ansible automation, please make sure to update Ansible to 2.2.0 because of this BZ [1].

[1] ansible/ansible#14166
@abadger
Contributor Author

abadger commented Jul 26, 2016

Confirmed fixed. (checked on ansible 2.1.0.0)

@abadger abadger closed this as completed Jul 26, 2016
@Jorge-Rodriguez
Contributor

Works on ansible 2.1.1.0, does NOT work on ansible 2.2.0 (devel da4c3eb)

@paulRbr
Contributor

paulRbr commented Oct 31, 2017

@abadger @bcoca I am seeing the same problem stated in this issue with the latest devel (ansible-playbook 2.5.0 (devel 710d1f074e) last updated 2017/10/31 20:27:00 (GMT +200)). Should we reopen it, or do you want me to create a new issue?

@ansibot ansibot added bug This issue/PR relates to a bug. and removed bug_report labels Mar 7, 2018
@ghost

ghost commented Jul 30, 2018

@abadger @bcoca
I am seeing the same behavior described in this issue with Ansible 2.6.2, and it is causing some issues for us:

Hosts:

[HA_Head_Nodes]
172.31.32.37
172.31.33.15

[kha:children]
HA_Head_Nodes
HA_Worker_Nodes

Play:

- hosts: kha
  tasks:
    - debug:
        msg: "This ran on this host: {{ item }}"
      delegate_to: "{{ item }}"
      loop: "{{ groups.HA_Head_Nodes }}"

Result:

TASK [kineticaHA : debug] *********************************************************************************************************************************************************************************************************************************************************************************************************************************************
ok: [172.31.32.37 -> 172.31.32.37] => (item=172.31.32.37) => {
    "msg": "This ran on this host: 172.31.32.37"
}
ok: [172.31.32.37 -> 172.31.33.15] => (item=172.31.33.15) => {
    "msg": "This ran on this host: 172.31.33.15"
}
ok: [172.31.33.15 -> 172.31.32.37] => (item=172.31.32.37) => {
    "msg": "This ran on this host: 172.31.32.37"
}
ok: [172.31.33.15 -> 172.31.33.15] => (item=172.31.33.15) => {
    "msg": "This ran on this host: 172.31.33.15"
}
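
One caveat (an observation about the reproduction, not from the comment itself): because the play targets the whole kha group, every host in kha evaluates the loop, so two play hosts times two items produces the four task results above. If the intent was for the delegated loop to run only once rather than once per host in kha, adding run_once would restrict it (a sketch under that assumption):

- hosts: kha
  tasks:
    - debug:
        msg: "This ran on this host: {{ item }}"
      delegate_to: "{{ item }}"
      loop: "{{ groups.HA_Head_Nodes }}"
      run_once: true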

@ansible ansible locked and limited conversation to collaborators Apr 25, 2019