Playbook fails when ssh host key changes #452
Comments
I'm also facing this. WORKAROUND:
|
We hit the same thing; we did something like this in a bash script to fix it:
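A rough sketch of that kind of script (the INI inventory format and first-token parsing here are assumptions, not the commenter's original):

```bash
#!/usr/bin/env bash
# Remove stale known_hosts entries for every host in an INI inventory.
set -euo pipefail

INVENTORY="${1:-hosts.ini}"

# Skip group headers, comments, and blank lines; take the first token
# on each line (hosts may have inline variables after them).
grep -vE '^\s*(\[|#|$)' "$INVENTORY" | awk '{print $1}' | while read -r host; do
  ssh-keygen -R "$host" >/dev/null 2>&1 || true
  echo "Cleared known_hosts entry for $host"
done
```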
It'd be nice if there was a way to invoke this locally against an inventory. |
disable GCE inventory caching w/ a .ini file
Giant pain in Ansible Tower. Anytime an IP is recycled, we've got to manually clear it from the known_hosts. This is my +1 to hopefully help get the pull request merged here. |
Maybe I'm missing something... I don't see a PR? |
Is there one from ryanpetrello in 452? I could be totally misreading GitHub here... matburt@4510cd1 |
Unrelated PR tagged into this issue due to naive GitHub matching of the number "452".
Dang, alright, well my moral support is provided! A little more background: I (we) use OpenStack, and when we terminate a server and then recycle the IP, this issue hits. How frequently it hits depends on what we're doing. We try to create Ansible playbooks for all new server components (expand disks, install certs, etc.) that run nightly to make sure things are up to date. |
Also experiencing this in a VMware private cloud. Does Tower not take into account the project's ansible.cfg? I ask because I have the following in it:
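Presumably the standard setting, something like:

```ini
[defaults]
# Don't prompt or fail on unknown/changed host keys.
host_key_checking = False
```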
I was under the impression |
I think it should be combined with another option:
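Likely a combination along these lines, though this exact pairing is an assumption:

```ini
[ssh_connection]
# Never record or consult host keys at all. Note that overriding ssh_args
# also replaces Ansible's default ControlPersist settings.
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
```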
As a workaround, I mount /dev/null like this in my docker-compose for the container:
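Presumably something like this fragment (the service name and home directory are assumptions):

```yaml
# docker-compose.yml (fragment)
services:
  awx_task:
    volumes:
      # Bind-mounting /dev/null over known_hosts means host keys are never
      # persisted, so a recycled IP can't trigger a key-changed failure.
      - /dev/null:/root/.ssh/known_hosts
```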
|
ok will try adding |
I face the same issue. I have the following in ansible.cfg on the awx_task container:
And it correctly translates into the ssh connection parameters Ansible uses with the target host, and yet I get the host key changed error:
|
+1 on this issue. AWX doesn't seem to be respecting the ansible.cfg or the environment variable |
I worked around this by setting |
+1 on this issue
Thanks to @sudomateo, your workaround did the job! |
It should at least remove the ssh fingerprint from the known_hosts file when removing the host from the GUI. |
You probably don't want to disable the host key checking "tower wide".

```yaml
---
defaults:
  vars:
    host_key_checking: false
```
|
None of the suggestions above worked for me on AWX 9.2 with Ansible 2.9. The ansible.cfg is not being ignored: running the job in -vvv verbose mode, I saw that StrictHostKeyChecking=no was set, but I still got the SSH key being changed error. So that didn't work. I had to add the following to the inventory file under [all:vars], and it worked (see the sketch after this comment). I got this solution from |
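A commonly used inventory-level setting matching that description (an assumption; the commenter's exact variable may differ):

```ini
[all:vars]
# Pass host-key options straight through to the ssh command line for every host.
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
```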
Not the most efficient, but I've created a playbook that I can execute from AWX: it reads through my inventory group and, using the shell module, removes the matching entries from /root/.ssh/known_hosts (a sketch follows). Again, it's not efficient: if you have a ton of hosts in your inventory group, it runs in linear time.
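A minimal sketch of such a playbook (the group name and known_hosts path are assumptions):

```yaml
---
- name: Clear stale host keys for every host in a group
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Remove each host's entry from known_hosts
      ansible.builtin.shell: >
        ssh-keygen -f /root/.ssh/known_hosts -R {{ item }}
      loop: "{{ groups['target_group'] | default([]) }}"
      # ssh-keygen -R exits non-zero if the file or entry is missing;
      # don't fail the whole run over that.
      failed_when: false
```

The same thing could be done without shelling out, using the ansible.builtin.known_hosts module with state: absent.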
I'd welcome suggestions and feedback on how this can be improved! |
I believe this issue is no longer relevant under the new Execution Environment model. Each run is a new container, so the |
While we're not running a new enough AWX version to have EEs, I managed to run an ad-hoc job against |
How do you clean known_hosts in the k8s (MicroK8s v1.26.1, revision 4595) pod awx-task? |
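One possible answer to that question (the namespace, container name, and known_hosts path are all assumptions about the deployment):

```bash
# Exec into the awx-task container and remove the stale entry for one host.
kubectl exec -n awx deploy/awx-task -c awx-task -- \
  ssh-keygen -f /var/lib/awx/.ssh/known_hosts -R <hostname-or-ip>
```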
ISSUE TYPE
COMPONENT NAME
SUMMARY
When I ran a Job a second time against a set of hosts I'd just rebuilt with terraform, it failed because the host keys were different, suggesting a possible spoofing attack.
ENVIRONMENT
STEPS TO REPRODUCE
EXPECTED RESULTS
As stated in issue #387, host keys are ignored, so the job execution should not fail for such a reason.
ACTUAL RESULTS
At the second execution, it fails with this output: