Skipping Host Key Checking fails on changed target host key #9442

danielsiwiec opened this Issue Oct 29, 2014 · 17 comments


Issue Type:

Bugfix Pull Request

Ansible Version:





When skipping "Host Key Checking" two flags need to be passed to ssh in order to allow connection to a host with a changed host key: StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null. Currently only the first one is passed.
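The pair of options described above can be checked with a plain ssh invocation (a sketch; "target-host" is a placeholder hostname, not taken from the report):

```shell
# StrictHostKeyChecking=no: don't refuse or prompt on unknown/changed keys.
# UserKnownHostsFile=/dev/null: never read or record known_hosts entries,
# so a stale entry for a rebuilt host can't cause a mismatch.
# ssh -G prints the effective client configuration without connecting.
ssh -G \
    -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null \
    target-host | grep -Ei 'stricthostkeychecking|userknownhostsfile'
```

Running this with only the first option set shows that UserKnownHostsFile still points at ~/.ssh/known_hosts, which is exactly the gap this issue describes.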

Steps To Reproduce:
  1. Disable host key checking (export ANSIBLE_HOST_KEY_CHECKING=False)
  2. ansible all -m ping -i ","
  3. Change the host key for the target (recreate the VM, or change the DNS entry to point to a different IP)
  4. ansible all -m ping -i ","
Expected Results:

The ping should pass.

Actual Results:
debug3: load_hostkeys: loading entries for host "" from file "/Users/dsiwiec/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /Users/dsiwiec/.ssh/known_hosts:49
debug3: load_hostkeys: loaded 1 keys
debug3: load_hostkeys: loading entries for host "" from file "/Users/dsiwiec/.ssh/known_hosts"
debug3: load_hostkeys: loaded 0 keys
The RSA host key for has changed,
and the key for the corresponding IP address
is unknown. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
Please contact your system administrator.
mpdehaan commented Nov 3, 2014

In queue for investigation (not saying I agree just yet :))


Cool, ping me if you have problems reproducing it.

bcoca commented Nov 14, 2014

I cannot reproduce the problem; once I "export ANSIBLE_HOST_KEY_CHECKING=False" I don't get any errors.


How did you attempt to replicate it? One easy way is to edit the known_hosts file and substitute the target host's key with a different host's key. The problem will not occur if it's a new host that doesn't have an entry in known_hosts yet - in that case, setting ANSIBLE_HOST_KEY_CHECKING to False does solve the problem.
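The key-substitution trick above can be scripted, avoiding a VM rebuild (a sketch; "example-target" is a placeholder, and the snippet assumes known_hosts contains at least one other unhashed entry to borrow a key from):

```shell
KH="$HOME/.ssh/known_hosts"
# keep a backup, then drop the target's current entry
sed -i.bak '/^example-target[ ,]/d' "$KH"
# re-add the hostname paired with some other host's public key,
# producing a deliberate mismatch on the next connection
OTHER_KEY=$(awk '$1 != "example-target" { print $2, $3; exit }' "$KH")
echo "example-target $OTHER_KEY" >> "$KH"
```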


@danielsiwiec This is a known issue (also discussed in #3694).

At the moment there is no ad-hoc Ansible option to control UserKnownHostsFile=/dev/null, but you can set this option via ssh_args in ansible.cfg (or the ANSIBLE_SSH_ARGS environment variable).
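For example (a hypothetical ansible.cfg fragment; note that overriding ssh_args replaces Ansible's default ControlMaster options, so they are restated here):

```ini
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null
```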


Thanks for pointing that out. According to documentation:

If a host is reinstalled and has a different key in ‘known_hosts’, this will result in an error message until corrected (...) You might not want this.

If you wish to disable this behavior and understand the implications, you can do so (...) by an environment variable:


This functionality currently does not work, unless the UserKnownHostsFile=/dev/null property is set in SSH arguments, which is somewhat confusing. It bit me and took some time to figure out, so that's what I'm addressing with this PR.

bcoca commented Nov 25, 2014

@danielsiwiec I tested this by running against the same host and regenerating keys on that host between runs. It worked in all 3 cases:

  • initial host run (not in known_hosts)
  • subsequent host run (while in known_hosts)
  • run after regeneration of keys (known_hosts with different signature)

@bcoca In this gist @glenjamin describes exactly how to reproduce a similar problem, where the solution is to use the UserKnownHostsFile=/dev/null ssh option.

@rahulsundaram rahulsundaram referenced this issue in geerlingguy/ansible-for-devops Nov 30, 2014

digitalocean provisioning doesn't work #3

@gildegoma gildegoma added a commit to mitchellh/vagrant that referenced this issue Nov 30, 2014
@gildegoma gildegoma provisioners/ansible: don't read/write known_hosts
Like Vagrant's default SSH behaviors (e.g. ssh or ssh-config commands),
the Ansible provisioner should by default not modify or read the user
known host file (e.g. ~/.ssh/known_hosts).

Given that the `UserKnownHostsFile=/dev/null` SSH option is usually combined
with `StrictHostKeyChecking=no`, it seems quite reasonable to bind the
activation/deactivation of both options to the `host_key_checking`
provisioner attribute.

For the record, a discussion held on the Ansible-Development mailing list
clearly confirmed that there is no short-term plan to adapt Ansible to
offer an extra option or change the behavior of
ANSIBLE_HOST_KEY_CHECKING. For this reason, the current implementation
seems reasonable and should be stable in the long run.

Close #3900

Related References:

- ansible/ansible#9442

I can also not reproduce this problem with devel in any of the various reported problem cases. I think this PR should be closed.

bcoca commented Jul 25, 2015

closing the ticket as per comments above

@bcoca bcoca closed this Jul 25, 2015
yakhira commented Jul 31, 2015
# uncomment this to disable SSH key host checking
host_key_checking = False

I can also confirm that:
export ANSIBLE_HOST_KEY_CHECKING=False does not work in any of the use cases cited by @bcoca ... can we reopen this issue, or should we open a new one to fix the documentation?


@erickeller same here ansible 1.9.4

@danielsiwiec danielsiwiec referenced this issue in ansible/ansible-modules-core Jan 14, 2016

Synchronize fails when target host key changed #270

haasn commented Feb 14, 2016

Still seeing this issue with ansible 2.1.0 (devel 4b953c4).

Steps I'm using to reproduce:

  1. clear the contents of ~/.ssh/known_hosts
  2. ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook site.yml
  3. check contents of ~/.ssh/known_hosts

It should be empty, but instead ansible records all of the known host entries.

The proper fix is to disable the user known host file, which is evidently still not being done.
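Until that is addressed, a per-run workaround (a sketch, using the environment-variable approach mentioned earlier in the thread) is:

```shell
# Disable strict checking and neutralize known_hosts for this shell session.
# ANSIBLE_SSH_ARGS replaces Ansible's default ssh options, so the
# ControlMaster settings are restated here.
export ANSIBLE_HOST_KEY_CHECKING=False
export ANSIBLE_SSH_ARGS='-o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null'
```

Then run ansible-playbook as usual in the same shell.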


I am also seeing this. If I run an Ansible playbook against a VM host, then destroy and recreate that host and rerun the playbook, it fails with host key verification errors, whether or not host key checking is set to false.

sivel commented May 25, 2016

Setting host key checking to false does not mean that the host key will not be checked; it means something closer to "trust on first use".

It corresponds to the OpenSSH option StrictHostKeyChecking:

             If this flag is set to ``yes'', ssh(1) will never automatically add host keys to the ~/.ssh/known_hosts file, and refuses to connect to hosts whose
             host key has changed.  This provides maximum protection against trojan horse attacks, though it can be annoying when the /etc/ssh/ssh_known_hosts
             file is poorly maintained or when connections to new hosts are frequently made.  This option forces the user to manually add all new hosts.  If
             this flag is set to ``no'', ssh will automatically add new host keys to the user known hosts files.  If this flag is set to ``ask'', new host keys
             will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to
             hosts whose host key has changed.  The host keys of known hosts will be verified automatically in all cases.  The argument must be ``yes'', ``no'',
             or ``ask''.  The default is ``ask''.

That makes sense. For anyone else who would prefer not to deal with host keys manually while experimenting with Ansible, putting this in my ansible.cfg fixed it (note that the options live in different sections):

[defaults]
host_key_checking = False

[paramiko_connection]
record_host_keys = False

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null