
Can't gather facts when running seperate Task. #109

Closed
vandlol opened this issue Mar 7, 2018 · 26 comments

@vandlol commented Mar 7, 2018

TASK [_gather_facts : Gather facts] ******************************************************************************
fatal: [tbi_client01]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (setup) module: _ansible_shell_executable Supported parameters include: fact_path,filter,gather_subset,gather_timeout"}

task looks like this:

- name: Gather facts
  action: setup

while gather_facts: no is set in my playbook

@dw (Member) commented Mar 7, 2018

OK, I can see some interesting stuff in the log that I didn't notice when reading via e-mail. It looks like it might be passing too many variables into setup. Let me have a play

@vandlol (Author) commented Mar 7, 2018

Target:
NAME="openSUSE Leap"
VERSION="42.3"
ID=opensuse
ID_LIKE="suse"
VERSION_ID="42.3"
PRETTY_NAME="openSUSE Leap 42.3"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:42.3"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"

Machine running ansible:
NAME="openSUSE Tumbleweed"
VERSION="20171010"
ID=opensuse
ID_LIKE="suse"
VERSION_ID="20171010"
PRETTY_NAME="openSUSE Tumbleweed"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:tumbleweed:20171010"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"

ansible --version
ansible 2.4.2.0
config file = /siam/ansible/ansible.cfg
configured module search path = [u'/siam/ansible/plugins/modules', u'/usr/share/ansible', u'/usr/lib/python2.7/site-packages/ara/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.14 (default, Oct 12 2017, 15:50:02) [GCC]

@dw (Member) commented Mar 7, 2018

Can you share the command line you used to run it? Any ansible*_executable variables are handled specially by the modules; there is another code path where they aren't, but I can't figure out how it gets triggered :)

@vandlol (Author) commented Mar 7, 2018

Since Ansible can't handle SSH keys from variables yet, I had to come up with some sort of workaround for my problem.
I decrypt my SSH key from my GPG-encrypted passwordstore and store it at runtime in a temporary file, which is removed after my run has finished.

Playbook: test.yml

- hosts: ToBeInstalled
  roles:
    - testing
  gather_facts: no

Role testing:

- name: Decrypt SSH-Keys
  include_role:
    name: _add_ssh_key

- name: Gather Facts
  include_role:
    name: _gather_facts

Role _decrypt_ssh_key:

- name: decrypt password to local key
  copy:
    content: "{{ lookup('passwordstore', 'ssh-keys/' + inventory_hostname + '_rsa returnall=true') }}"
    dest: "{{ playbook_dir }}/tmp/ssh/{{ inventory_hostname }}_rsa"
    mode: 0400
  delegate_to: localhost

While in my host_vars:

ansible_ssh_private_key_file: /siam/ansible/playbooks/tmp/ssh/tbi_client01_rsa

is set

ANSIBLE_STRATEGY=mitogen ansible-playbook playbooks/test.yml

@vandlol closed this Mar 7, 2018

@vandlol (Author) commented Mar 7, 2018

Also, I pressed the wrong button while commenting.

@vandlol reopened this Mar 7, 2018

@dw (Member) commented Mar 7, 2018

At what point does the setup module get called in this structure? Is it from within another role?

@dw (Member) commented Mar 7, 2018

Whoops, now I understand :)

dw added a commit that referenced this issue Mar 7, 2018
@dw (Member) commented Mar 7, 2018

I'm going to have to read the code line by line later to figure this out; I've reproduced your playbook structure and cannot trigger the problem with your exact version of Ansible. We're still missing something. Do you have anything magical in ansible.cfg?

@vandlol (Author) commented Mar 7, 2018

OK, I stripped my config down to only your strategy plugin plus the directory information:

  • no difference

From what I can tell, the module_args get lost as soon as I run with Mitogen; when Mitogen is not enabled, the default module_args are passed to the setup task. With Mitogen:

fatal: [tbi_client01]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {}
    },

Without Mitogen:
"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}

@vandlol (Author) commented Mar 7, 2018

Also, I noticed these beauties:
15:17:10 W mitogen: get_module_source('time'): cannot find source
15:17:10 W mitogen: get_module_source('operator'): cannot find source
15:17:10 W mitogen: get_module_source('_locale'): cannot find source
15:17:10 W mitogen: get_module_source('grp'): cannot find source
15:17:10 W mitogen: get_module_source('datetime'): cannot find source
15:17:10 W mitogen: get_module_source('syslog'): cannot find source
15:17:10 W mitogen: get_module_source('itertools'): cannot find source
15:17:10 W mitogen: get_module_source('select'): cannot find source
15:17:10 W mitogen: get_module_source('_random'): cannot find source
15:17:10 W mitogen: get_module_source('binascii'): cannot find source
15:17:10 W mitogen: get_module_source('math'): cannot find source
15:17:10 W mitogen: get_module_source('fcntl'): cannot find source
15:17:10 W mitogen: get_module_source('cStringIO'): cannot find source
15:17:10 W mitogen: get_module_source('cPickle'): cannot find source
15:17:10 W mitogen: get_module_source('_collections'): cannot find source
15:17:10 W mitogen: get_module_source('zlib'): cannot find source
15:17:10 W mitogen: get_module_source('bz2'): cannot find source
15:17:10 W mitogen: get_module_source('_hashlib'): cannot find source
15:17:10 W mitogen: get_module_source('_json'): cannot find source
15:17:10 W mitogen: get_module_source('_io'): cannot find source
15:17:10 W mitogen: get_module_source('strop'): cannot find source
15:17:10 W mitogen: get_module_source('_functools'): cannot find source
15:17:10 W mitogen: get_module_source('_heapq'): cannot find source
15:17:10 W mitogen: get_module_source('_struct'): cannot find source

Any correlation?

These appear as soon as the first task is called.
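
A plausible reading of these warnings, sketched under the assumption that Mitogen forwards Python module source to the target: names like zlib, math and cStringIO are built-in or C-extension modules, so there is no .py source to forward, and the warning is expected rather than fatal. importlib (Python 3 shown here, though the thread's interpreter was Python 2.7) makes the distinction visible:

```python
import importlib.util

def has_python_source(name):
    # Built-in modules report origin "built-in"; C extensions report a
    # shared-object path; only pure-Python modules end in ".py".
    spec = importlib.util.find_spec(name)
    origin = spec.origin or ""
    return origin.endswith(".py")

for name in ("math", "zlib", "json"):
    print(name, has_python_source(name))
# math False, zlib False, json True
```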

dw added a commit that referenced this issue Mar 7, 2018: "Could it be that some empty dict magically gets populated from somewhere invisible?"
@vandlol (Author) commented Mar 7, 2018

The newest version didn't fix the problem, but I tried several things and found something quite interesting.
Once I replaced my role's task:

- name: Gather facts
  action: setup

with this:

- name: Gather facts
  setup:
    filter: "*"
    gather_subset: ["all"]
    fact_path: /etc/ansible/facts.d
    gather_timeout: 11

and suddenly module_args appeared:

fatal: [tbi_client01]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "fact_path": "/etc/ansible/facts.d",
            "filter": "*",
            "gather_subset": [
                "all"
            ],
            "gather_timeout": 11
        }
    },
    "msg": "Unsupported parameters for (setup) module: _ansible_shell_executable Supported parameters include: fact_path,filter,gather_subset,gather_timeout"
}

@dw (Member) commented Mar 8, 2018

Hey, is it definitely that exact step in your playbook that triggers the setup module? Some actions internally trigger it on demand. Does the part where the failure occurs definitely have the name of your setup task appearing above it as the title?

@vandlol (Author) commented Mar 8, 2018

Since the "setup" module is even directly mentioned and I don't have any other steps in my stripped-down playbook, I'm quite sure.

If you have any idea how to get more in-depth data, I'd be glad to have a deeper look into it.

14:50:05 E p=30079 u=root | : fatal: [tbi_client01]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "fact_path": "/etc/ansible/facts.d",
            "filter": "*",
            "gather_subset": [
                "all"
            ],
            "gather_timeout": 11
        }
    },
    "msg": "Unsupported parameters for (setup) module: _ansible_shell_executable Supported parameters include: fact_path,filter,gather_subset,gather_timeout"

@dw (Member) commented Mar 8, 2018

I had not even thought of that! If you pull the latest extension and run with "-vvvv", it will produce extreme logging; the part that would be useful is any line mentioning call_async(... run_module .. setup )

Sorry, a little overworked the past 2 days ;)

@vandlol (Author) commented Mar 8, 2018

Well, that's funny: the newest version won't let me reproduce it... did you solve it by accident? :D

TASK [testing : Gather Facts] ************************************************************************************

TASK [_gather_facts : Gather facts] ******************************************************************************
ok: [tbi_client01]

@dw (Member) commented Mar 8, 2018

I fixed a bunch of important bugs yesterday, but I can't see how it could possibly be related! Grmbl.

This is actually a worse outcome than finally finding whatever was causing your issue :)

Yesterday I fixed

  • Modules were getting imported twice in children (can't see how this would fix it)
  • Tons of new logging (nope)
  • Implemented _transfer_data() for <2.4 Ansibles (nope)

And then there's a9c6c13... did your target machine have Ansible installed on it? That might somehow be an explanation.

@vandlol (Author) commented Mar 8, 2018

I can confirm Ansible was on the target machine.
I removed Ansible from the target machine, and the setup module still works.

@dw (Member) commented Mar 8, 2018

Ahah! Yes, and I bet the version of Ansible on the target was slightly younger than the one on the host :) You can keep Ansible on the target machine now, it's no problem, that's fully protected against since yesterday.
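
The failure mode described here is ordinary import shadowing; the following is an editorial illustration rather than Mitogen's actual code. If a stale copy of a package sits on the target's sys.path, a plain import finds it before any freshly delivered code, so a new controller can end up running against old module_utils. The module name shadow_demo is made up for the demo:

```python
import os
import sys
import tempfile

# Simulate a stale package installed on the target: a module that already
# exists on sys.path before any newer code is delivered.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "shadow_demo.py"), "w") as f:
    f.write("VERSION = 'stale-on-target'\n")

sys.path.insert(0, tmp)  # plays the role of the target's site-packages
import shadow_demo       # resolves to the stale on-disk copy

print(shadow_demo.VERSION)  # stale-on-target
```

Guarding against this means making sure the delivered code wins the import race, e.g. by taking precedence over on-disk copies, which matches dw's "fully protected against since yesterday".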

Feel free to close this bug. I'm happy we've found the culprit :)

/cc #114

@vandlol (Author) commented Mar 8, 2018

Since I'm able to run the whole of my playbook now, I'm thrilled to close this issue :)
Thank you for your awesome work.

I will report back as soon as I dig up something new.

@vandlol closed this Mar 8, 2018

@dw (Member) commented Mar 8, 2018

Just one final note: I am always interested in more timing information: network latency, CPUs of the target boxes, how much playbook time is spent in big apt-gets and suchlike, and finally before/after /usr/bin/time output. Thanks again

@vandlol (Author) commented Mar 8, 2018

This is just from my playbook doing nothing, since nothing has changed. I will update with the full playbook runtime.

Before:
Playbook finished: Thu Mar  8 15:47:58 2018, 87 total tasks.  0:01:55 elapsed.

real    2m30.448s
user    0m45.182s
sys     0m19.054s


After:
Playbook finished: Thu Mar  8 15:53:04 2018, 87 total tasks.  0:01:02 elapsed.

real    1m26.091s
user    0m26.613s
sys     0m4.449s
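
A quick sanity check on the figures above (editorial arithmetic from the /usr/bin/time "real" lines, not from the thread itself):

```python
# Wall-clock comparison from the "real" lines above:
# before 2m30.448s, after 1m26.091s.
before = 2 * 60 + 30.448  # 150.448 s
after = 1 * 60 + 26.091   # 86.091 s

speedup = before / after
saved_pct = (1 - after / before) * 100
print(f"{speedup:.2f}x faster, {saved_pct:.0f}% of wall-clock time saved")
# 1.75x faster, 43% of wall-clock time saved
```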

@dw (Member) commented Mar 8, 2018

Not exactly spectacular, but not bad either :) Is this on a low-latency network?

@vandlol (Author) commented Mar 9, 2018

It is; both machines currently run on a dedicated switch. I also do a lot of package installation with this, and zypper isn't particularly fast when it comes to repo communication, so I guess that's what's killing the time improvements.

dw added a commit that referenced this issue Mar 19, 2018
dw added a commit that referenced this issue Mar 19, 2018: "Could it be that some empty dict magically gets populated from somewhere invisible?"
dw added a commit that referenced this issue Apr 18, 2018
dw added a commit that referenced this issue Nov 7, 2018: "Python at some point (at least since https://bugs.python.org/issue14605) began populating sys.meta_path with its internal importer classes, meaning that interpreters no longer start with an empty sys.meta_path."
dw added a commit that referenced this issue Nov 7, 2018
dw added a commit that referenced this issue Nov 7, 2018:
  • 3.x target test job support
  • new 2.x->3.x Mitogen job
  • 3.x runner regression
  • 3.x importer regression
  • #109
  • #391