yum_repository in item loops causes repo duplication when used with mitogen #154

Closed
zswanson opened this Issue Mar 18, 2018 · 3 comments

Comments

zswanson commented Mar 18, 2018

ansible 2.4.3.0
config file = /playbooks/ansible.cfg
configured module search path = [u'/home/vagrant/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]

ansible config:
[defaults]
inventory = ./inventory
display_skipped_hosts = False
log_path=./logs/
strategy_plugins = ./plugins/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen
callback_plugins = ./plugins/callbacks:~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
stdout_callback = anstomlog
retry_files_enabled = False

Using mitogen master cloned at b1bfe58

I have a role task that calls yum_repository with with_items (also tested with_dict, same result) to populate a default list of repos plus any 'extras' provided by the calling playbook. Expected behavior is one repo file per list item, containing only that item. With mitogen disabled, that's what I get. With mitogen enabled, each successive repo file in /etc/yum.repos.d/ also contains all of the prior items in the loop. Simply enabling/disabling mitogen and re-running the playbook (deleting the created repos between each run) demonstrates the difference.

---

- hosts: localhost
  gather_facts: no
  become: true

  vars:

    repo_baseurl: "http://myurl.com"

    default_repos:
      - repo: demo-repo1
        description: Base software packages
        url: "{{repo_baseurl}}/repo1"
      - repo: demo-repo2
        description: Misc packages
        url: "{{repo_baseurl}}/repo2"

  tasks:

  - name: Create multiple yum repos
    yum_repository:
      name: '{{ item.repo }}'
      http_caching: packages
      gpgcheck: no
      description: '{{ item.description }}'
      state: present
      baseurl: '{{ item.url }}'
      enabled: yes
    with_items: '{{ default_repos }}'

Expected output:
/etc/yum.repos.d/demo-repo1.repo

[demo-repo1]
baseurl = http://myurl.com/repo1
enabled = 1
gpgcheck = 0
http_caching = packages
name = Base software packages

/etc/yum.repos.d/demo-repo2.repo

[demo-repo2]
baseurl = http://myurl.com/repo2
enabled = 1
gpgcheck = 0
http_caching = packages
name = Misc packages

Actual output:
/etc/yum.repos.d/demo-repo1.repo

[demo-repo1]
baseurl = http://myurl.com/repo1
enabled = 1
gpgcheck = 0
http_caching = packages
name = Base software packages

/etc/yum.repos.d/demo-repo2.repo

[demo-repo1]
baseurl = http://myurl.com/repo1
enabled = 1
gpgcheck = 0
http_caching = packages
name = Base software packages
[demo-repo2]
baseurl = http://myurl.com/repo2
enabled = 1
gpgcheck = 0
http_caching = packages
name = Misc packages
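For context, the accumulation shown above is what a mutable class attribute looks like when the interpreter persists between module invocations. A minimal sketch with hypothetical names (not the real module code, which keeps its state in YumRepo.repofile):

```python
# Minimal sketch (hypothetical names): a mutable class attribute acts as
# hidden global state when the Python interpreter persists between runs.

class FakeYumRepo:
    sections = []  # shared across every instance *and* every "run"

def write_repo(name):
    # Stands in for one yum_repository invocation: add this repo's
    # section, then render everything currently in the shared object.
    FakeYumRepo.sections.append(name)
    return list(FakeYumRepo.sections)

# Fresh process per task (plain Ansible): sections starts empty each time.
# Persistent interpreter (mitogen): earlier sections are still present.
first = write_repo("demo-repo1")
second = write_repo("demo-repo2")
print(first)   # ['demo-repo1']
print(second)  # ['demo-repo1', 'demo-repo2']
```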

dw commented Mar 18, 2018

Congrats, looks like you've found the first real example of a module storing global state that persists across runs :)

In this case, the global state is the YumRepo.repofile variable.

So the solution here is to start forking the child after compilation, to prevent that global state persisting. Hopefully a fix won't take too long. Thanks for reporting!
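The fork-based isolation being described can be sketched as follows (a simplified, POSIX-only illustration with invented names, not mitogen's actual implementation): run the module body in a forked child, so mutations of module globals die with the child.

```python
import os

STATE = []  # stands in for module-level state such as YumRepo.repofile

def run_forked(item):
    # Execute the "module body" in a forked child: any mutation of
    # module globals is confined to the child and cannot leak into
    # the next loop iteration.
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                          # child
        os.close(r)
        STATE.append(item)                # visible only in the child
        os.write(w, repr(STATE).encode())
        os._exit(0)
    os.close(w)                           # parent
    chunks = []
    while True:
        chunk = os.read(r, 4096)
        if not chunk:
            break
        chunks.append(chunk)
    os.close(r)
    os.waitpid(pid, 0)
    return eval(b"".join(chunks).decode())

print(run_forked("demo-repo1"))  # ['demo-repo1']
print(run_forked("demo-repo2"))  # ['demo-repo2'] -- no accumulation
assert STATE == []               # the parent's state was never touched
```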

To keep the threading/forking deadlock of #150 from reappearing in the child, some careful dancing is required to repair any locks that may be held across the fork. Python takes care of the GIL (naturally), but we must take care of the logging package. A cute workaround can be found here: celery/celery#496
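That workaround reduces to an acquire-around-fork pattern, sketched here with a stand-in lock (the real code must cover logging's module-level and per-handler locks): take the lock before fork(), so the child can never inherit it half-acquired by a thread that no longer exists after the fork.

```python
import os
import threading

# Sketch of the acquire-around-fork pattern from celery/celery#496,
# using a stand-in lock rather than logging's internal ones.
log_lock = threading.Lock()  # stands in for a logging lock

def safe_fork():
    log_lock.acquire()        # no other thread holds it at fork time
    try:
        pid = os.fork()       # both processes continue from here
    finally:
        log_lock.release()    # runs in the parent and in the child
    return pid

pid = safe_fork()
if pid == 0:
    with log_lock:            # child can take the lock: not deadlocked
        pass
    os._exit(0)
_, status = os.waitpid(pid, 0)
assert status == 0            # child exited cleanly
```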

dw added a commit that referenced this issue Mar 18, 2018

dw closed this in bcf5e3b Mar 18, 2018

dw commented Mar 18, 2018

Hi there,

I've just pushed a workaround for this specific case; the forking approach is likely to take a little longer. Note that due to how the yum_repository module works, rerunning Ansible won't magically clean up those bad repo files. You need to delete them and let it recreate them. :)

Thanks for the report!

dw referenced this issue Mar 18, 2018

Closed

ansible: implement fork-based module execution #155

13 of 13 tasks complete
zswanson commented Mar 19, 2018

Confirmed resolved, thanks!
