Skipping Host Key Checking fails on changed target host key #9442


Closed
danielsiwiec opened this issue Oct 29, 2014 · 21 comments
Labels
bug This issue/PR relates to a bug. P2 Priority 2 - Issue Blocks Release

Comments

@danielsiwiec

Issue Type:

Bugfix Pull Request

Ansible Version:

1.7.2

Environment:

N/A

Summary:

When skipping "Host Key Checking", two flags need to be passed to ssh to allow connecting to a host whose host key has changed: StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null. Currently only the first one is passed.
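To confirm that both options are actually in effect, a reasonably recent OpenSSH client can print its resolved configuration without making a connection (`ssh -G`); a quick sketch, where hostname.example.com is a placeholder:

```shell
# Print the effective OpenSSH client configuration for the host without
# connecting; both options must appear for key checking to be fully skipped.
ssh -G \
    -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null \
    hostname.example.com | grep -Ei 'stricthostkeychecking|userknownhostsfile'
```

On a client that supports `-G`, this should report `stricthostkeychecking no` and `userknownhostsfile /dev/null`.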

Steps To Reproduce:
  1. export ANSIBLE_HOST_KEY_CHECKING=False
  2. ansible all -m ping -i "hostname.example.com,"
  3. Change the host key (recreate the VM or change the DNS entry to point to a different IP) for the target
  4. ansible all -m ping -i "hostname.example.com,"
Expected Results:

The ping should pass.

Actual Results:
debug3: load_hostkeys: loading entries for host "hostname.example.com" from file "/Users/dsiwiec/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /Users/dsiwiec/.ssh/known_hosts:49
debug3: load_hostkeys: loaded 1 keys
debug3: load_hostkeys: loading entries for host "168.61.73.29" from file "/Users/dsiwiec/.ssh/known_hosts"
debug3: load_hostkeys: loaded 0 keys
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@       WARNING: POSSIBLE DNS SPOOFING DETECTED!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
The RSA host key for hostname.example.com has changed,
and the key for the corresponding IP address 168.61.73.29
is unknown. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
d2:d3:92:8b:52:aa:4f:9b:cb:a6:f8:f1:50:04:b3:da.
Please contact your system administrator.
@mpdehaan
Contributor

mpdehaan commented Nov 3, 2014

In queue for investigation (not saying I agree just yet :))

@mpdehaan mpdehaan added P2 Priority 2 - Issue Blocks Release bug_report labels Nov 3, 2014
@danielsiwiec
Author

Cool, ping me if you have problems reproducing it.

@bcoca
Member

bcoca commented Nov 14, 2014

I cannot reproduce the problem; once I "export ANSIBLE_HOST_KEY_CHECKING=False" I don't get any errors.

@danielsiwiec
Author

How did you attempt to replicate it? One easy way is to edit the known_hosts file and substitute the target host's key with a different host's key. The problem will not occur if it's a new host that doesn't have an entry in known_hosts yet - in this case setting ANSIBLE_HOST_KEY_CHECKING to False does solve the problem.
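One way to stage that state reproducibly, without recreating a VM, is to plant a freshly generated (and therefore wrong) public key for the target in a scratch known_hosts file; a sketch, where the host name and the /tmp paths are placeholders:

```shell
#!/bin/sh
set -e
# Work on scratch files so the real ~/.ssh/known_hosts is untouched.
KNOWN_HOSTS=/tmp/demo_known_hosts
rm -f "$KNOWN_HOSTS" /tmp/demo_wrong_key /tmp/demo_wrong_key.pub

# Generate a throwaway key pair; its public half plays the role of the
# stale entry left behind by a destroyed/recreated host.
ssh-keygen -t rsa -b 2048 -N '' -q -f /tmp/demo_wrong_key

# Record the wrong key against the target host name.
awk '{print "hostname.example.com", $1, $2}' /tmp/demo_wrong_key.pub > "$KNOWN_HOSTS"
cat "$KNOWN_HOSTS"
```

Pointing ssh at this file (e.g. with `-o UserKnownHostsFile=/tmp/demo_known_hosts`) should then trigger the "REMOTE HOST IDENTIFICATION HAS CHANGED" warning on the next real connection to that host.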

@gildegoma
Contributor

@danielsiwiec This is a known issue (also discussed in #3694).

At the moment there is no ad-hoc Ansible option to control UserKnownHostsFile=/dev/null, but you can set this option via ssh_args in ansible.cfg (or the ANSIBLE_SSH_ARGS environment variable).
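Concretely, that workaround can be sketched in ansible.cfg like this (the ControlMaster/ControlPersist values are assumptions mirroring commonly used defaults, not something mandated by the thread):

```ini
[ssh_connection]
# Keep multiplexing enabled and additionally discard any recorded host keys.
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null
```

The same argument string can be supplied via the ANSIBLE_SSH_ARGS environment variable instead.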

@danielsiwiec
Author

Thanks for pointing that out. According to the documentation:

If a host is reinstalled and has a different key in ‘known_hosts’, this will result in an error message until corrected (...) You might not want this.

If you wish to disable this behavior and understand the implications, you can do so (...) by an environment variable:

$ export ANSIBLE_HOST_KEY_CHECKING=False

This functionality currently does not work unless the UserKnownHostsFile=/dev/null option is set in the SSH arguments, which is somewhat confusing. It bit me and took some time to figure out, so that's what I'm addressing with this PR.

@bcoca
Member

bcoca commented Nov 25, 2014

@danielsiwiec I tested this by running against the same host and regenerating keys on that host between runs. It worked in all 3 cases:

  • initial host run (not in known_hosts)
  • subsequent host run (while in known_hosts)
  • run after regeneration of keys (known_hosts with different signature)

@gildegoma
Contributor

@bcoca In this gist @glenjamin described exactly how to reproduce a similar problem, where the solution consists in using the UserKnownHostsFile=/dev/null ssh option.

gildegoma added a commit to hashicorp/vagrant that referenced this issue Nov 30, 2014
Like Vagrant's default SSH behaviors (e.g. ssh or ssh-config commands),
the Ansible provisioner should by default not modify or read the user
known hosts file (e.g. ~/.ssh/known_hosts).

Given that the `UserKnownHostsFile=/dev/null` SSH option is usually combined
with `StrictHostKeyChecking=no`, it seems quite reasonable to bind the
activation/deactivation of both options to the `host_key_checking`
provisioner attribute.

For the record, a discussion held on the Ansible-Development mailing list
clearly confirmed that there is no short-term plan to adapt Ansible to
offer an extra option or change the behavior of
ANSIBLE_HOST_KEY_CHECKING. For this reason, the current implementation
seems reasonable and should be stable in the long run.

Close #3900

Related References:

- https://groups.google.com/forum/#!msg/ansible-devel/iuoZs1oImNs/6xrj5oa1CmoJ
- ansible/ansible#9442
@amenonsen
Contributor

I also cannot reproduce this problem with devel in any of the various reported problem cases. I think this PR should be closed.

@bcoca
Member

bcoca commented Jul 25, 2015

Closing the ticket as per the comments above.

@yakhira

yakhira commented Jul 31, 2015

# uncomment this to disable SSH key host checking
host_key_checking = False

@erickeller

Also confirming that:
export ANSIBLE_HOST_KEY_CHECKING=False does not work in any of the use cases cited by @bcoca ... can we reopen this issue, or should we open a new one for fixing the documentation?

@tuxinaut

@erickeller Same here with ansible 1.9.4.

@haasn

haasn commented Feb 14, 2016

Still seeing this issue with ansible 2.1.0 (devel 4b953c4).

Steps I'm using to reproduce:

  1. clear the contents of ~/.ssh/known_hosts
  2. ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook site.yml
  3. check contents of ~/.ssh/known_hosts

It should be empty, but instead ansible records all of the known host entries.

The proper fix is to disable the user known host file, which is evidently still not being done.
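As a stopgap, the recording behaviour described above can be suppressed per shell session by also pointing ssh at an empty known-hosts file; a minimal sketch (the playbook name and inventory string are placeholders):

```shell
# Disable both the strict check and the recording of host keys,
# for this shell session only.
export ANSIBLE_HOST_KEY_CHECKING=False
export ANSIBLE_SSH_ARGS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'

# Then run the play as usual, e.g.:
#   ansible-playbook -i "hostname.example.com," site.yml
echo "$ANSIBLE_SSH_ARGS"
```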

@fabianvf
Contributor

I am also seeing this: if I run an ansible playbook against a VM host, then destroy and recreate that host and rerun the playbook, it fails with host key verification errors, whether or not host key checking is set to false.

@sivel
Member

sivel commented May 25, 2016

Setting host key checking to false does not mean that it will not check the host key; in fact it means something closer to "trust on first use".

It corresponds to the OpenSSH option StrictHostKeyChecking:

             If this flag is set to ``yes'', ssh(1) will never automatically add host keys to the ~/.ssh/known_hosts file, and refuses to connect to hosts whose
             host key has changed.  This provides maximum protection against trojan horse attacks, though it can be annoying when the /etc/ssh/ssh_known_hosts
             file is poorly maintained or when connections to new hosts are frequently made.  This option forces the user to manually add all new hosts.  If
             this flag is set to ``no'', ssh will automatically add new host keys to the user known hosts files.  If this flag is set to ``ask'', new host keys
             will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to
             hosts whose host key has changed.  The host keys of known hosts will be verified automatically in all cases.  The argument must be ``yes'', ``no'',
             or ``ask''.  The default is ``ask''.

@fabianvf
Contributor

That makes sense. For anyone else who would prefer not to deal with host key management manually when experimenting with Ansible, putting this in my ansible.cfg fixed it:

[defaults]
host_key_checking = False

[paramiko_connection]
record_host_keys = False

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null

@hackermd

hackermd commented Apr 4, 2017

I would like to report an edge case that might help other users ending up here. In my use case, I connect to target machines via a bastion host. Simply setting -o StrictHostKeyChecking=no via Ansible has no effect as long as the default SSH settings in /etc/ssh/ssh_config are:

Host *
     # StrictHostKeyChecking ask

Including -o StrictHostKeyChecking=no in the ProxyCommand solved the problem for me:

ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i {key_file} -W %h:%p -q {user}@{host}"'

Overriding the default SSH settings in ~/.ssh/config also did the trick (they get forwarded to the bastion host):

Host *
     StrictHostKeyChecking no

This is on Ubuntu 16.04 with Ansible 2.2.2.0.

yfried pushed a commit to redhat-openstack/infrared that referenced this issue Oct 9, 2017
Provides a better default ssh config by using the
workaround from
ansible/ansible#9442 (comment)
- ControlPersist low enough to avoid hijacking
- avoids use of locally cached keys, to avoid conflicts
- avoids polluting local host keys

RHOSINFRA-60

Change-Id: I3c36daf3b5c11da250d8f525ce197d95226adeba
@sandeepduhan92

Hi,

Is it possible to pass multiple remote users as a variable in a single playbook?

Scenario: I have multiple instances and a different user on each server. I want to create a playbook that tries the remote users one by one until login succeeds, and once logged in, performs the task.

Can anyone please help me with this?

Looking for help.

Thanks & Regards
Sandeep Kumar

@justinsousa

Using Vagrant VMs, with my GitHub key loaded into the ssh agent on the Mac, agent forwarding enabled in the Vagrantfile, and transport set to smart, I couldn't get this to work. The problem was definitely Ansible, though, as the forwarding was working via Vagrant (if I ssh'd to the Vagrant VM and ran the test ssh -T git@github.com, I was authenticated).

The settings that enabled auth with GitHub to work on the managed hosts were

[paramiko_connection]
record_host_keys = False

along with adding -o UserKnownHostsFile=/dev/null to the ssh args, and host_key_checking = False, which basically adds StrictHostKeyChecking=no to the ssh args for you.

So basically the response a few comments above. Thanks @fabianvf.

What's odd is that even when I added the correct host keys, verification of those keys was not working, and that includes without the root user/become used on any tasks.

@th31nitiate

th31nitiate commented Feb 2, 2018

I think this is fine as is. It somewhat protects users from letting a shortcut meant only for testing persist beyond that point. The best approach is to make the additions mentioned above in ansible.cfg and then manage those files separately for different environments.

I am strongly against the idea of making this controllable via ANSIBLE_* environment variables.

@ansibot ansibot added bug This issue/PR relates to a bug. and removed bug_report labels Mar 6, 2018
opnfv-github pushed a commit to opnfv/opnfvdocs that referenced this issue Nov 28, 2018
* Update docs/submodules/releng from branch 'master'
  - CPERF: Fixes issue with known hosts
    
    Exporting the global var to disable ansible host key checking doesn't
    ignore known hosts in the file. To fix this, this patch sets the known
    hosts file to /dev/null.
    
    Reference: ansible/ansible#9442
    
    Jobs currently failing due to known hosts:
    https://build.opnfv.org/ci/job/cperf-apex-csit-master/320/console
    
    Change-Id: Ic3470b368a056b3a3981f9555160a44018f97ebd
    Signed-off-by: Tim Rozet <trozet@redhat.com>
opnfv-github pushed a commit to opnfv/releng that referenced this issue Nov 28, 2018
Exporting the global var to disable ansible host key checking doesn't
ignore known hosts in the file. To fix this, this patch sets the known
hosts file to /dev/null.

Reference: ansible/ansible#9442

Jobs currently failing due to known hosts:
https://build.opnfv.org/ci/job/cperf-apex-csit-master/320/console

Change-Id: Ic3470b368a056b3a3981f9555160a44018f97ebd
Signed-off-by: Tim Rozet <trozet@redhat.com>
@ansible ansible locked and limited conversation to collaborators Apr 25, 2019