Skipping Host Key Checking fails on changed target host key #9442

Closed
danielsiwiec opened this Issue Oct 29, 2014 · 21 comments

Comments

danielsiwiec commented Oct 29, 2014

Issue Type:

Bugfix Pull Request

Ansible Version:

1.7.2

Environment:

N/A

Summary:

When skipping "Host Key Checking", two options need to be passed to ssh in order to allow a connection to a host whose host key has changed: StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null. Currently only the first one is passed.
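
For illustration, a minimal sketch of an ssh invocation carrying both options (user and hostname are placeholders):

  # StrictHostKeyChecking=no stops ssh from prompting about unknown keys;
  # UserKnownHostsFile=/dev/null means no key is ever read or recorded, so a
  # changed key can never conflict with a stored one.
  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null user@hostname.example.com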

Steps To Reproduce:
  1. export ANSIBLE_HOST_KEY_CHECKING=False
  2. ansible all -m ping -i "hostname.example.com,"
  3. Change the host key (recreate the VM or change the DNS entry to point to a different IP) for the target
  4. ansible all -m ping -i "hostname.example.com,"
Expected Results:

The ping should pass.

Actual Results:
debug3: load_hostkeys: loading entries for host "hostname.example.com" from file "/Users/dsiwiec/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /Users/dsiwiec/.ssh/known_hosts:49
debug3: load_hostkeys: loaded 1 keys
debug3: load_hostkeys: loading entries for host "168.61.73.29" from file "/Users/dsiwiec/.ssh/known_hosts"
debug3: load_hostkeys: loaded 0 keys
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@       WARNING: POSSIBLE DNS SPOOFING DETECTED!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
The RSA host key for hostname.example.com has changed,
and the key for the corresponding IP address 168.61.73.29
is unknown. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
d2:d3:92:8b:52:aa:4f:9b:cb:a6:f8:f1:50:04:b3:da.
Please contact your system administrator.
mpdehaan (Contributor) commented Nov 3, 2014

In queue for investigation (not saying I agree just yet :))

danielsiwiec commented Nov 5, 2014

Cool, ping me if you have problems reproducing it.

bcoca (Member) commented Nov 14, 2014

I cannot reproduce the problem; once I "export ANSIBLE_HOST_KEY_CHECKING=False" I don't get any errors.

danielsiwiec commented Nov 21, 2014

How did you attempt to replicate it? One easy way is to edit the known_hosts file and substitute the target host's key with a different host's key, as sketched below. The problem will not occur if it's a new host that doesn't have an entry in known_hosts yet; in that case, setting ANSIBLE_HOST_KEY_CHECKING to False does solve the problem.
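
A hedged sketch of that reproduction using standard OpenSSH tools (hostnames are placeholders):

  # Remove the target's current entry, then record another host's key under
  # the target's name, so the stored key no longer matches what it presents.
  ssh-keygen -R hostname.example.com
  ssh-keyscan other.example.com 2>/dev/null | sed 's/^[^ ]*/hostname.example.com/' >> ~/.ssh/known_hosts
  # With host key checking "disabled", this is the step that still failed:
  ANSIBLE_HOST_KEY_CHECKING=False ansible all -m ping -i "hostname.example.com,"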

gildegoma (Contributor) commented Nov 21, 2014

@danielsiwiec This is a known issue (also discussed in #3694).

At the moment there is no ad-hoc Ansible option to control UserKnownHostsFile=/dev/null, but you can set this option via ssh_args in ansible.cfg (or the ANSIBLE_SSH_ARGS environment variable).
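
For example, a minimal sketch of that workaround in ansible.cfg (note that setting ssh_args replaces Ansible's defaults, so the connection-multiplexing options are restated here rather than inherited):

  [ssh_connection]
  ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null

or equivalently, as an environment variable:

  export ANSIBLE_SSH_ARGS="-o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null"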

danielsiwiec commented Nov 22, 2014

Thanks for pointing that out. According to the documentation:

If a host is reinstalled and has a different key in ‘known_hosts’, this will result in an error message until corrected (...) You might not want this.

If you wish to disable this behavior and understand the implications, you can do so (...) by an environment variable:

$ export ANSIBLE_HOST_KEY_CHECKING=False

This functionality currently does not work unless the UserKnownHostsFile=/dev/null property is also set in the SSH arguments, which is somewhat confusing. It bit me and took some time to figure out, so that's what I'm addressing with this PR.

bcoca (Member) commented Nov 25, 2014

@danielsiwiec I tested this by running against the same host and regenerating keys on that host between runs. It worked in all 3 cases:

  • initial host run (not in known_hosts)
  • subsequent host run (while in known_hosts)
  • run after regeneration of keys (known_hosts with different signature)
gildegoma (Contributor) commented Nov 25, 2014

@bcoca In this gist, @glenjamin described exactly how to reproduce a similar problem, where the solution consists in using the UserKnownHostsFile=/dev/null ssh option.

gildegoma added a commit to hashicorp/vagrant that referenced this issue Nov 30, 2014

provisioners/ansible: don't read/write known_hosts
Like Vagrant's default SSH behaviors (e.g. the ssh or ssh-config commands),
the Ansible provisioner should by default not modify or read the user
known hosts file (e.g. ~/.ssh/known_hosts).

Given that the `UserKnownHostsFile=/dev/null` SSH option is usually combined
with `StrictHostKeyChecking=no`, it seems quite reasonable to bind the
activation/deactivation of both options to the `host_key_checking`
provisioner attribute.

For the record, a discussion held on the Ansible-Development mailing list
clearly confirmed that there is no short-term plan to adapt Ansible to
offer an extra option or change the behavior of
ANSIBLE_HOST_KEY_CHECKING. For this reason, the current implementation
seems reasonable and should be stable in the long run.

Close #3900

Related References:

- https://groups.google.com/forum/#!msg/ansible-devel/iuoZs1oImNs/6xrj5oa1CmoJ
- ansible/ansible#9442
amenonsen (Contributor) commented Jul 25, 2015

I also cannot reproduce this problem with devel in any of the various reported problem cases. I think this PR should be closed.

bcoca (Member) commented Jul 25, 2015

Closing the ticket as per the comments above.

yakhira commented Jul 31, 2015

# uncomment this to disable SSH key host checking
host_key_checking = False
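
That appears to quote the stock ansible.cfg; for context, the line belongs under the [defaults] section:

  [defaults]
  # uncomment this to disable SSH key host checking
  host_key_checking = False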
erickeller commented Oct 16, 2015

I can also confirm that export ANSIBLE_HOST_KEY_CHECKING=False does not work in any of the use cases cited by @bcoca. Can we reopen this issue, or should we open a new one for fixing the documentation?

tuxinaut commented Nov 10, 2015

@erickeller Same here with Ansible 1.9.4.

haasn commented Feb 14, 2016

Still seeing this issue with Ansible 2.1.0 (devel 4b953c4).

Steps I'm using to reproduce:

  1. clear the contents of ~/.ssh/known_hosts
  2. ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook site.yml
  3. check the contents of ~/.ssh/known_hosts

It should be empty, but instead Ansible records all of the known host entries.

The proper fix is to disable the user known hosts file, which is evidently still not being done.
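
A minimal shell sketch of that check (site.yml stands in for whatever playbook you normally run):

  : > ~/.ssh/known_hosts                           # start from an empty file
  ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook site.yml
  wc -l ~/.ssh/known_hosts                         # expected 0, but with
  # StrictHostKeyChecking=no alone, every contacted host gets appended here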

fabianvf (Contributor) commented May 25, 2016

I am also seeing this. If I run an Ansible playbook against a VM host, then destroy and recreate that host and rerun the playbook, it fails with host key verification errors, whether or not host key checking is set to false.

sivel (Member) commented May 25, 2016

Setting host key checking to false does not mean that it will not check the host key; in fact, it means something closer to "trust on first use".

It correlates to the OpenSSH option StrictHostKeyChecking:

             If this flag is set to ``yes'', ssh(1) will never automatically add host keys to the ~/.ssh/known_hosts file, and refuses to connect to hosts whose
             host key has changed.  This provides maximum protection against trojan horse attacks, though it can be annoying when the /etc/ssh/ssh_known_hosts
             file is poorly maintained or when connections to new hosts are frequently made.  This option forces the user to manually add all new hosts.  If
             this flag is set to ``no'', ssh will automatically add new host keys to the user known hosts files.  If this flag is set to ``ask'', new host keys
             will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to
             hosts whose host key has changed.  The host keys of known hosts will be verified automatically in all cases.  The argument must be ``yes'', ``no'',
             or ``ask''.  The default is ``ask''.
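
In other words, a rough sketch of what the setting maps to on the ssh command line (user@host is a placeholder):

  # host_key_checking = False corresponds roughly to:
  ssh -o StrictHostKeyChecking=no user@host
  # New host keys are added to ~/.ssh/known_hosts automatically, but a
  # connection to a host whose recorded key has changed is still refused.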
fabianvf (Contributor) commented May 25, 2016

That makes sense. For anyone else who would prefer not to deal with host key handling manually when messing with Ansible, putting this in my ansible.cfg fixed it:

[defaults]
host_key_checking = False

[paramiko_connection]
record_host_keys = False

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null
hackermd commented Apr 4, 2017

I would like to report an edge case that might help other users ending up here. In my use case, I connect to target machines via a bastion host. Simply setting -o StrictHostKeyChecking=no via Ansible doesn't have an effect as long as the default SSH settings in /etc/ssh/ssh_config are:

Host *
     # StrictHostKeyChecking ask

Including -o StrictHostKeyChecking=no in the ProxyCommand solved the problem for me:

ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i {key_file} -W %h:%p -q {user}@{host}"'

Overriding the default SSH settings in ~/.ssh/config also did the trick (they get forwarded to the bastion host):

Host *
     StrictHostKeyChecking no

This is on Ubuntu 16.04 with Ansible 2.2.2.0.
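
For context, a sketch of where such a variable is typically set, e.g. in group_vars/all.yml (the bastion address, key path, and user are hypothetical):

  # group_vars/all.yml
  ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i ~/.ssh/bastion_key -W %h:%p -q deploy@bastion.example.com"'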

yfried pushed a commit to redhat-openstack/infrared that referenced this issue Oct 9, 2017

avoids potential ssh key conflicts
Provides a better default ssh config by using the
workaround from
ansible/ansible#9442 (comment)
- low enough ControlPersist to avoid hijacking
- avoid use of locally cached keys to avoid conflicts
- avoids polluting local host keys

RHOSINFRA-60

Change-Id: I3c36daf3b5c11da250d8f525ce197d95226adeba
sandeepduhan92 commented Dec 12, 2017

Hi,

Is it possible to pass multiple remote users as a variable in a single playbook?

Scenario: I have multiple instances and a different user on each server. I want to create a playbook that tries the remote users one by one until one successfully logs in, then performs the task.

Can anyone please help me with this?

Thanks & Regards
Sandeep Kumar

justinsousa commented Jan 23, 2018

Using Vagrant VMs, loading my GitHub key into the ssh agent on the Mac, with forward agent set in the Vagrantfile and transport set to smart, I couldn't seem to get this to work. The problem was definitely Ansible, though, as the forwarding was working via Vagrant (if I ssh'd to the Vagrant VM and ran the test ssh -T git@github.com, I was authenticated).

The settings that enabled auth with GitHub to work on the managed hosts were:

[paramiko_connection]
record_host_keys = False

along with adding -o UserKnownHostsFile=/dev/null to the ssh args, and host_key_checking = False, which basically adds StrictHostKeyChecking=no to the ssh args for you.

So basically the response a few comments above. Thanks @fabianvf.

What's odd is that even when I added the correct host keys, verification of those keys was not working, and that includes without the root user/become used on any tasks.

th31nitiate commented Feb 2, 2018

I think this is fine as is. It somewhat protects users from letting a testing-only shortcut persist beyond simple testing. The best approach is to make the mentioned additions to ansible.cfg and then manage those separately for different environments.

I am strongly against the idea of making this controllable via ANSIBLE_* environment variables.

ansibot added the bug label and removed the bug_report label Mar 6, 2018
