Skipping Host Key Checking fails on changed target host key #9442
Comments
In queue for investigation (not saying I agree just yet :))
Cool, ping me if you have problems reproducing it.
I cannot reproduce the problem; once I "export ANSIBLE_HOST_KEY_CHECKING=False" I don't get any errors.
How did you attempt to replicate it? One easy way (sketched below) is to edit the known_hosts file and substitute the target host's key with a different host's key. The problem will not occur if it's a new host that doesn't have an entry in known_hosts yet; in that case setting ANSIBLE_HOST_KEY_CHECKING to False does solve the problem.
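For anyone trying to replicate this, a minimal sketch of that key swap might look like the following (the hostnames and inventory path are illustrative, not from the original report):

```sh
# Hypothetical reproduction: give the target a known_hosts entry that
# actually belongs to a different machine, then run with checking "disabled".
ssh-keygen -R target.example.com                       # drop the real entry
ssh-keyscan otherhost.example.com 2>/dev/null \
  | sed 's/^otherhost\.example\.com/target.example.com/' >> ~/.ssh/known_hosts

export ANSIBLE_HOST_KEY_CHECKING=False
ansible target.example.com -m ping -i inventory        # still fails on the mismatched key
```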
@danielsiwiec This is a known issue (also discussed in #3694). At the moment there is no ad-hoc Ansible option to control the UserKnownHostsFile SSH option.
Thanks for pointing that out. According to documentation:
This functionality currently does not work unless the UserKnownHostsFile=/dev/null property is set in the SSH arguments, which is somewhat confusing. It bit me and took some time to figure out, so that's what I'm addressing with this PR.
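Until such an option exists, the workaround this thread converges on is to pass the extra flag through the SSH arguments yourself. A minimal sketch, assuming the defaults are otherwise acceptable (note that overriding ssh_args also replaces Ansible's default ControlMaster/ControlPersist settings):

```sh
export ANSIBLE_HOST_KEY_CHECKING=False
export ANSIBLE_SSH_ARGS="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
ansible all -m ping -i inventory
```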
@danielsiwiec I tested this by running against the same host and regenerating keys on that host between runs. It worked in all 3 cases.
@bcoca In this gist @glenjamin described exactly how to reproduce a similar problem, where the solution is to use the UserKnownHostsFile=/dev/null SSH option.
Like Vagrant's default SSH behaviors (e.g. the ssh or ssh-config commands), the Ansible provisioner should by default not modify or read the user known hosts file (e.g. ~/.ssh/known_hosts). Given that the `UserKnownHostsFile=/dev/null` SSH option is usually combined with `StrictHostKeyChecking=no`, it seems quite reasonable to bind the activation/deactivation of both options to the `host_key_checking` provisioner attribute. For the record, a discussion held on the Ansible-Development mailing list clearly confirmed that there is no short-term plan to adapt Ansible to offer an extra option or change the behavior of ANSIBLE_HOST_KEY_CHECKING. For this reason, the current implementation seems reasonable and should be stable in the long run. Close #3900 Related References: - https://groups.google.com/forum/#!msg/ansible-devel/iuoZs1oImNs/6xrj5oa1CmoJ - ansible/ansible#9442
I also cannot reproduce this problem with devel in any of the various reported problem cases. I think this PR should be closed.
Closing the ticket as per the comments above.
I can also confirm this.
@erickeller same here
Still seeing this issue with ansible 2.1.0 (devel 4b953c4). Steps I'm using to reproduce (a rough sketch follows below):
It should be empty, but instead ansible records all of the known host entries. The proper fix is to disable the user known hosts file, which is evidently still not being done.
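A rough sketch of that check, using a throwaway known_hosts so any recorded entries are easy to see (paths and hostname are illustrative):

```sh
mv ~/.ssh/known_hosts ~/.ssh/known_hosts.bak   # start from an empty slate
export ANSIBLE_HOST_KEY_CHECKING=False
ansible target.example.com -m ping -i inventory
cat ~/.ssh/known_hosts   # expected empty/absent; any entries show the file is still being written
```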
I am also seeing this: if I run an ansible playbook against a VM host, then destroy and recreate that host and re-run the playbook, it fails with host key verification errors, whether or not host key checking is set to false.
Setting host key checking to false does not mean that it will not check the host key; in fact it means something closer to "trust on first use". It correlates to the openssh option StrictHostKeyChecking=no.
That makes sense. For anyone else who would prefer not to deal with host key issues manually when messing with ansible, putting this in my ansible.cfg fixed it:
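A minimal ansible.cfg along these lines is the workaround usually quoted from this thread; the ControlMaster/ControlPersist values are assumptions added only to keep connection sharing enabled once ssh_args is overridden:

```ini
[defaults]
host_key_checking = False

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
```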
I would like to report an edge case that might help other users ending up here. In my use case, I connect to the target machines via a bastion host. Simply setting the host key checking options for the target was not enough: I also had to include ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i {key_file} -W %h:%p -q {user}@{host}"', overriding the default SSH settings for the proxy connection. This is on Ubuntu 16.04 with Ansible 2.2.2.0.
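For the bastion case, a sketch of how that variable might be set in the inventory group variables (host names, key path, and user are placeholders; the same two options are applied to both hops):

```yaml
# group_vars/all.yml (illustrative)
ansible_ssh_common_args: >-
  -o UserKnownHostsFile=/dev/null
  -o StrictHostKeyChecking=no
  -o ProxyCommand="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
  -i ~/.ssh/bastion_key -W %h:%p -q jumpuser@bastion.example.com"
```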
Provides better default ssh config by using the workaround from ansible/ansible#9442 (comment) - ControlPersist low enough to avoid hijacking - avoids use of locally cached keys to avoid conflicts - avoids polluting local host keys RHOSINFRA-60 Change-Id: I3c36daf3b5c11da250d8f525ce197d95226adeba
Hi, is it possible to pass multiple remote users as a variable in a single playbook? Scenario: I have multiple instances with a different user on each server. I want to create a playbook that tries the remote users one by one until one successfully logs in; once logged in, it performs the tasks. Can anyone please help me with this? Thanks & regards.
Using Vagrant VMs, loading my GitHub key into the ssh agent on the Mac, with forward agent set in the Vagrantfile and transport set to smart, I couldn't seem to get this to work. The problem was definitely ansible, though, as the forwarding was working via Vagrant (if I ssh'd to the Vagrant VM and ran the test ssh -T git@github.com, I was authenticated). The settings that enabled GitHub auth to work on the managed hosts were, along with the additions mentioned earlier, basically the response a few comments above. Thanks @fabianvf. What's odd is that even when I added the correct host keys, verification of those keys was not working, and that includes without the root user/become used on any tasks.
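The poster's exact settings weren't quoted; a plausible sketch, assuming the fix was to enable agent forwarding in ssh_args alongside the options above (an assumption, not the poster's verbatim config):

```ini
[ssh_connection]
ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
```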
I think this is fine as is. It somewhat protects users from creating a configuration that persists past the point of simply testing. The best approach is to make the mentioned additions to ansible.cfg and then manage those separately for different environments. I am strongly against the idea of making this controllable via ANSIBLE_* environment variables.
* Update docs/submodules/releng from branch 'master' - CPERF: Fixes issue with known hosts Exporting the global var to disable ansible host key checking doesn't ignore known hosts in the file. To fix this, this patch sets the known hosts file to /dev/null. Reference: ansible/ansible#9442 Jobs currently failing due to known hosts: https://build.opnfv.org/ci/job/cperf-apex-csit-master/320/console Change-Id: Ic3470b368a056b3a3981f9555160a44018f97ebd Signed-off-by: Tim Rozet <trozet@redhat.com>
Issue Type:
Bugfix Pull Request
Ansible Version:
1.7.2
Environment:
N/A
Summary:
When skipping "Host Key Checking", two flags need to be passed to ssh in order to allow connection to a host with a changed host key: StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null. Currently only the first one is passed.
Steps To Reproduce:
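A minimal reconstruction of the scenario described in the summary (hostname and inventory path are illustrative, not the reporter's originals):

```sh
# 1. Connect once so the host's key is recorded in ~/.ssh/known_hosts.
ansible testhost -m ping -i inventory
# 2. Regenerate the host keys on the target (or rebuild the VM) so its key changes.
# 3. Run again with host key checking disabled; only StrictHostKeyChecking=no is
#    passed, so the stale known_hosts entry still causes the connection to fail.
export ANSIBLE_HOST_KEY_CHECKING=False
ansible testhost -m ping -i inventory
```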
Expected Results:
The ping should pass.
Actual Results: