Skipping Host Key Checking fails on changed target host key #9442

danielsiwiec opened this Issue Oct 29, 2014 · 21 comments


danielsiwiec commented Oct 29, 2014

Issue Type:

Bugfix Pull Request

Ansible Version:





When skipping "Host Key Checking" two flags need to be passed to ssh in order to allow connection to a host with a changed host key: StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null. Currently only the first one is passed.
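The combination described can be sketched as an ansible.cfg fragment (the section and option names are standard Ansible configuration; treat this as an illustration of the report, not an official fix):

```ini
# ansible.cfg — sketch: pass BOTH OpenSSH options explicitly, since
# StrictHostKeyChecking=no alone does not neutralize a stale known_hosts entry.
[ssh_connection]
ssh_args = -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
```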

Steps To Reproduce:
  2. ansible all -m ping -i ","
  3. Change the host key (recreate the VM or change the DNS entry to point to a different IP) for the target
  4. ansible all -m ping -i ","
Expected Results:

The ping should pass.

Actual Results:
debug3: load_hostkeys: loading entries for host "" from file "/Users/dsiwiec/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /Users/dsiwiec/.ssh/known_hosts:49
debug3: load_hostkeys: loaded 1 keys
debug3: load_hostkeys: loading entries for host "" from file "/Users/dsiwiec/.ssh/known_hosts"
debug3: load_hostkeys: loaded 0 keys
The RSA host key for has changed,
and the key for the corresponding IP address
is unknown. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
Please contact your system administrator.

mpdehaan commented Nov 3, 2014

In queue for investigation (not saying I agree just yet :))


danielsiwiec commented Nov 5, 2014

Cool, ping me if you have problems reproducing it.


bcoca commented Nov 14, 2014

I cannot reproduce the problem; once I "export ANSIBLE_HOST_KEY_CHECKING=False" I don't get any errors.


danielsiwiec commented Nov 21, 2014

How did you attempt to replicate it? One easy way is to edit the known_hosts file and substitute the target host's key with a different host's key. The problem does not occur if the target is a new host that has no entry in known_hosts yet; in that case, setting ANSIBLE_HOST_KEY_CHECKING to False does solve the problem.
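That substitution can be sketched with a throwaway known_hosts file (the host name and key material below are placeholders, not real values):

```shell
# Create a scratch known_hosts that records a deliberately WRONG key for the
# target, simulating a recreated VM whose key no longer matches the record.
KH=$(mktemp)
printf 'test-host ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDplaceholder\n' > "$KH"
# Pointing ssh at this file reproduces the mismatch described above:
#   ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile="$KH" test-host
grep -c '^test-host ' "$KH"
```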


gildegoma commented Nov 21, 2014

@danielsiwiec This is a known issue (also discussed in #3694).

At the moment there is no ad-hoc Ansible option to control UserKnownHostsFile=/dev/null, but you can set this option via ssh_args in ansible.cfg (or the ANSIBLE_SSH_ARGS environment variable).
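A sketch of that workaround via environment variables (the ControlMaster/ControlPersist values are assumptions based on Ansible's stock ssh_args, restated because ANSIBLE_SSH_ARGS replaces the defaults rather than appending to them):

```shell
# Disable host key checking AND discard the known_hosts cache for this session.
export ANSIBLE_HOST_KEY_CHECKING=False
export ANSIBLE_SSH_ARGS='-o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null'
# then run, e.g.:  ansible all -m ping
```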


danielsiwiec commented Nov 22, 2014

Thanks for pointing that out. According to documentation:

If a host is reinstalled and has a different key in ‘known_hosts’, this will result in an error message until corrected (...) You might not want this.

If you wish to disable this behavior and understand the implications, you can do so (...) by an environment variable:


This functionality currently does not work unless the UserKnownHostsFile=/dev/null property is also set in the SSH arguments, which is somewhat confusing. It bit me and took some time to figure out, so that's what I'm addressing with this PR.


bcoca commented Nov 25, 2014

@danielsiwiec I tested this by running against the same host and regenerating keys on that host between runs. It worked in all 3 cases:

  • initial host run (not in known_hosts)
  • subsequent host run (while in known_hosts)
  • run after regeneration of keys (known_hosts with different signature)

gildegoma commented Nov 25, 2014

@bcoca In this gist @glenjamin describes exactly how to reproduce a similar problem, where the solution is to use the UserKnownHostsFile=/dev/null ssh option.

gildegoma added a commit to hashicorp/vagrant that referenced this issue Nov 30, 2014

provisioners/ansible: don't read/write known_hosts
Like Vagrant's default SSH behaviors (e.g. the ssh or ssh-config commands),
the Ansible provisioner should by default not modify or read the user
known hosts file (e.g. ~/.ssh/known_hosts).

Given that the `UserKnownHostsFile=/dev/null` SSH option is usually combined
with `StrictHostKeyChecking=no`, it seems quite reasonable to bind the
activation/deactivation of both options to the `host_key_checking`
provisioner attribute.

For the record, a discussion held on the Ansible-Development mailing list
clearly confirmed that there is no short-term plan to adapt Ansible to
offer an extra option or change the behavior of
ANSIBLE_HOST_KEY_CHECKING. For this reason, the current implementation
seems reasonable and should be stable in the long run.

Close #3900

Related References:

- ansible/ansible#9442

amenonsen commented Jul 25, 2015

I also cannot reproduce this problem with devel in any of the various reported problem cases. I think this PR should be closed.


bcoca commented Jul 25, 2015

Closing the ticket as per the comments above.


yakhira commented Jul 31, 2015

# uncomment this to disable SSH key host checking
host_key_checking = False

erickeller commented Oct 16, 2015

I can also confirm that export ANSIBLE_HOST_KEY_CHECKING=False does not work in any of the use cases cited by @bcoca. Can we reopen this issue, or should we open a new one for fixing the documentation?


tuxinaut commented Nov 10, 2015

@erickeller same here ansible 1.9.4


haasn commented Feb 14, 2016

Still seeing this issue with ansible 2.1.0 (devel 4b953c4).

Steps I'm using to reproduce:

  1. clear the contents of ~/.ssh/known_hosts
  2. ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook site.yml
  3. check contents of ~/.ssh/known_hosts

It should be empty, but instead ansible records all of the known host entries.

The proper fix is to disable the user known host file, which is evidently still not being done.


fabianvf commented May 25, 2016

I am also seeing this. If I run an Ansible playbook against a VM, then destroy and recreate that VM and rerun the playbook, it fails with host key verification errors, whether or not host key checking is set to false.


sivel commented May 25, 2016

Setting host key checking to false does not mean that it will not check the host key; in fact it means something closer to "trust on first use".

It correlates to the OpenSSH option StrictHostKeyChecking:

             If this flag is set to ``yes'', ssh(1) will never automatically add host keys to the ~/.ssh/known_hosts file, and refuses to connect to hosts whose
             host key has changed.  This provides maximum protection against trojan horse attacks, though it can be annoying when the /etc/ssh/ssh_known_hosts
             file is poorly maintained or when connections to new hosts are frequently made.  This option forces the user to manually add all new hosts.  If
             this flag is set to ``no'', ssh will automatically add new host keys to the user known hosts files.  If this flag is set to ``ask'', new host keys
             will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to
             hosts whose host key has changed.  The host keys of known hosts will be verified automatically in all cases.  The argument must be ``yes'', ``no'',
             or ``ask''.  The default is ``ask''.
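As a hedged aside (not stated in this thread): an ssh_config sketch of the behavior described above, noting that OpenSSH 7.6+ also accepts a third value, accept-new, which adds new keys automatically but still rejects changed ones:

```
# ~/.ssh/config sketch — the "Host *" scope is an assumption; narrow as needed.
Host *
    # auto-add new hosts, but a CHANGED key still triggers the mismatch warning
    StrictHostKeyChecking no
```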

fabianvf commented May 25, 2016

That makes sense. For anyone else who would rather not deal with host key handling manually when experimenting with Ansible, putting this in my ansible.cfg fixed it:

host_key_checking = False

record_host_keys = False

ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null

hackermd commented Apr 4, 2017

I would like to report an edge case that might help other users who end up here. In my use case, I connect to target machines via a bastion host. Simply setting -o StrictHostKeyChecking=no via Ansible has no effect as long as the default SSH settings in /etc/ssh/ssh_config are:

Host *
     # StrictHostKeyChecking ask

Including -o StrictHostKeyChecking=no in the ProxyCommand solved the problem for me:

ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i {key_file} -W %h:%p -q {user}@{host}"'

Overriding the default SSH settings in ~/.ssh/config also did the trick (they get forwarded to the bastion host):

Host *
     StrictHostKeyChecking no

This is on Ubuntu 16.04 with Ansible

yfried pushed a commit to redhat-openstack/infrared that referenced this issue Oct 9, 2017

avoids potential ssh key conflicts
Provides a better default ssh config by using the
workaround from
ansible/ansible#9442 (comment)
- low enough ControlPersist to avoid hijacking
- avoids use of locally cached keys to avoid conflicts
- avoids polluting local host keys


Change-Id: I3c36daf3b5c11da250d8f525ce197d95226adeba

sandeepduhan92 commented Dec 12, 2017


Is it possible to pass multiple remote users as a variable in a single playbook?

Scenario: I have multiple instances and different users on each server. I want to create a playbook that tries the remote users one by one until login succeeds. Once logged in, it will perform the task.

Can anyone please help me with this?

Thanks & Regards
Sandeep Kumar


justinsousa commented Jan 23, 2018

Using Vagrant VMs, with my GitHub key loaded into the ssh agent on the Mac, agent forwarding set in the Vagrantfile, and transport set to smart, I couldn't get this to work. The problem was definitely Ansible, as the forwarding itself worked via Vagrant (if I ssh'd to the Vagrant VM and ran the test ssh -T, I was authenticated).

The settings that enabled auth with GitHub to work on the managed hosts were

record_host_keys = False

along with adding -o UserKnownHostsFile=/dev/null to the ssh args, and host_key_checking = False, which basically adds StrictHostKeyChecking=no to the ssh args for you.

So basically the response a few comments above. Thanks @fabianvf.

What's odd is that even when I added the correct host keys, verification of those keys was not working, including without the root user/become on any tasks.


th31nitiate commented Feb 2, 2018

I think this is fine as is. It somewhat protects users from creating a problem that persists beyond simple testing. The best approach is to make the mentioned additions to ansible.cfg and then manage those separately for different environments.

I am strongly against the idea of making this controllable via ANSIBLE_* environment variables.

@ansibot ansibot added bug and removed bug_report labels Mar 6, 2018
