[v2] delegate_to runs task on local machine instead of Vagrant VM #12817

Closed
mgedmin opened this Issue Oct 19, 2015 · 6 comments

mgedmin commented Oct 19, 2015

Issue Type: Bug Report
Ansible Version:

ansible 2.0.0 (devel 1280e22) last updated 2015/10/19 08:38:39 (GMT +300)
lib/ansible/modules/core: (detached HEAD 5da7cf6) last updated 2015/10/19 08:39:02 (GMT +300)
lib/ansible/modules/extras: (detached HEAD 632de52) last updated 2015/10/19 08:39:02 (GMT +300)

Ansible Configuration:

[defaults]
inventory = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
remote_user = vagrant
private_key_file = ~/.vagrant.d/insecure_private_key
host_key_checking = false
gathering = smart
fact_caching = jsonfile
fact_caching_connection = .cache/facts/
fact_caching_timeout = 86400

[privilege_escalation]
become = true

[ssh_connection]
ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null

and the inventory file has

trusty ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201
precise ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200

Summary:

I have a role that sets up SSH-authenticated backup pushing between two hosts. One of its tasks creates a dedicated user:

- name: user for accepting pushed backups on the backup buddy
  user: name="{{ backup_user }}" state=present
  delegate_to: "{{ backup_buddy }}"
  when: backup_buddy != ""
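
For context, a minimal variable setup driving this task might look like the following (hypothetical names and placement; the issue doesn't show where the role's vars are defined):

# host_vars/trusty (hypothetical; adjust to wherever the role's vars actually live)
backup_buddy: precise
backup_user: backup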

I'm testing this with two Vagrant virtual machines, trusty and precise: trusty is the target, and precise is the value of {{ backup_buddy }}. Here's what Ansible v2 does:

TASK [backup-pusher : user for accepting pushed backups on the backup buddy] ***
ESTABLISH LOCAL CONNECTION FOR USER: vagrant
127.0.0.1 EXEC (umask 22 && mkdir -p "$(echo $HOME/.ansible/tmp/ansible-tmp-1445235090.23-187473370409589)" && echo "$(echo $HOME/.ansible/tmp/ansible-tmp-1445235090.23-187473370409589)")
127.0.0.1 PUT /tmp/tmp5c2bRG TO /home/mg/.ansible/tmp/ansible-tmp-1445235090.23-187473370409589/user
127.0.0.1 EXEC /bin/sh -c 'sudo -H -n -S -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-lpsktbokipyfwgtigsbpkqadldelsutb; LANG=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/mg/.ansible/tmp/ansible-tmp-1445235090.23-187473370409589/user; rm -rf "/home/mg/.ansible/tmp/ansible-tmp-1445235090.23-187473370409589/" > /dev/null 2>&1'"'"''
fatal: [trusty -> precise]: FAILED! => {"changed": false, "failed": true, "msg": "sudo: a password is required\n", "parsed": false}

Note how it's using a local connection and attempting to change stuff on my laptop, instead of SSHing into the vagrant VM. This fails because sudo requires a password (thank you sudo!), unlike in Vagrant.

mgedmin commented Oct 19, 2015

Steps to Reproduce:

  • Get a Vagrant VM running (e.g. vagrant init ubuntu/trusty64 && vagrant up)
  • Create an inventory file called hosts, e.g.

    vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200

  • Create a simple playbook test.yml:

    ---
    - hosts: localhost
      gather_facts: no
      tasks:
        - command: hostname
          delegate_to: vagrant

  • Run ansible-playbook -i hosts test.yml -vvv

Expected Results:

(The SSH failure below is expected, because I didn't bother setting up SSH keys for successful Vagrant auth; the point is that Ansible at least tries to SSH to the VM.)

$ ansible-playbook test.yml -vvv

PLAY [localhost] ************************************************************** 

TASK: [command hostname] ****************************************************** 
<127.0.0.1> ESTABLISH CONNECTION FOR USER: mg
<127.0.0.1> REMOTE_MODULE command hostname
<127.0.0.1> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/mg/.ansible/cp/ansible-ssh-%h-%p-%r" -o Port=2200 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 127.0.0.1 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1445236160.47-241359585522330 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1445236160.47-241359585522330 && echo $HOME/.ansible/tmp/ansible-tmp-1445236160.47-241359585522330'
The authenticity of host '[127.0.0.1]:2200 ([127.0.0.1]:2200)' can't be established.
ECDSA key fingerprint is 51:56:fb:c9:66:05:4f:1e:54:e0:ba:bb:c4:00:24:e9.
Are you sure you want to continue connecting (yes/no)? no 
fatal: [localhost -> vagrant] => SSH Error: Host key verification failed.
    while connecting to 127.0.0.1:2200
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/home/mg/test.retry

localhost                  : ok=0    changed=0    unreachable=1    failed=0   

Actual Results:

1 plays in test.yml

PLAY ***************************************************************************

TASK [command] *****************************************************************
ESTABLISH LOCAL CONNECTION FOR USER: mg
127.0.0.1 EXEC (umask 22 && mkdir -p "$(echo $HOME/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791)" && echo "$(echo $HOME/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791)")
127.0.0.1 PUT /tmp/tmp1cYUgW TO /home/mg/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791/command
127.0.0.1 EXEC LANG=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/mg/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791/command; rm -rf "/home/mg/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791/" > /dev/null 2>&1
changed: [localhost -> localhost] => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.010268", "end": "2015-10-19 09:30:41.436348", "rc": 0, "start": "2015-10-19 09:30:41.426080", "stderr": "", "stdout": "platonas", "stdout_lines": ["platonas"], "warnings": []}

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0   

(platonas is the hostname of my laptop)

jimi-c added this to the v2 milestone Oct 19, 2015

jimi-c commented Oct 20, 2015

@mgedmin this is happening because we see the host is localhost, and therefore reset the connection to local. If you add ansible_connection=ssh to the inventory vars for the vagrant host, things work as expected:

TASK [command] *****************************************************************
changed: [localhost] => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.002595", "end": "2015-10-20 02:20:00.874443", "rc": 0, "start": "2015-10-20 02:20:00.871848", "stderr": "", "stdout": "jimi", "stdout_lines": ["jimi"], "warnings": []}
TASK [command] *****************************************************************
changed: [localhost -> vagrant] => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.001528", "end": "2015-10-20 06:20:01.094318", "rc": 0, "start": "2015-10-20 06:20:01.092790", "stderr": "", "stdout": "precise64", "stdout_lines": ["precise64"], "warnings": []}

The first task runs without delegate_to; the second is exactly as you have it above, just to show the difference.
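
For reference, applying that workaround to the generated inventory from the report would mean appending ansible_connection=ssh to each line, e.g.:

trusty ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_connection=ssh
precise ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_connection=ssh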

Really, I believe this behavior (always using the local connection method for localhost) is more correct than 1.x, where Ansible would sometimes try to SSH to localhost (which typically failed).

jimi-c added P3 pending_action and removed P2 labels Oct 20, 2015

mgedmin commented Oct 20, 2015

Note: the inventory file is generated dynamically by Vagrant's Ansible provisioner, since the port numbers change all the time. This makes it hard to apply the workaround (adding ansible_connection=ssh to the inventory file). It also widens the scope of the issue: anyone using Vagrant's Ansible provisioner is affected.
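
One possible way around the generated inventory, sketched here on the assumption that Ansible's standard group_vars lookup next to the playbook applies (untested against the Vagrant-generated layout):

# group_vars/all (picked up from the playbook directory; applies to every inventory host,
# so delegation to any Vagrant VM would use SSH instead of the local connection)
ansible_connection: ssh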

mgedmin commented Oct 20, 2015

BTW this issue only affects delegation: when a Vagrant host is used as a regular target, Ansible uses SSH. This inconsistency bugs me.
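
A quick way to see the inconsistency, assuming the repro files from the steps above (hosts and test.yml):

$ ansible vagrant -i hosts -m command -a hostname   # direct target: Ansible SSHes into the VM
$ ansible-playbook -i hosts test.yml                # delegate_to the same host: Ansible runs locally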

halberom commented Oct 20, 2015

I think if the logic is going to change, it would be nice if it took the port into account: a delegate_to host defined as IP plus port is pretty obviously not a local connection. This change will affect all multi-host Vagrant setups that use the NAT port for access.

jimi-c commented Oct 20, 2015

Per discussion, I think if any ansible_<connection>_* variable is set, we can safely assume that <connection> is what's wanted rather than local. I'll look at doing it that way, rather than the method used in #12834, which does not take inventory variables into account.
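
Under that heuristic, the reporter's generated inventory would already behave correctly, since its lines set ansible_ssh_* variables (an illustration of the proposed logic, not the final patch):

trusty ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201
# ansible_ssh_host and ansible_ssh_port are ansible_ssh_* vars, so assume ssh rather than local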

jimi-c removed the pending_action label Oct 20, 2015

jimi-c closed this in b46ce47 Oct 20, 2015

ansibot added bug and removed bug_report labels Mar 7, 2018
