
forks (for an ad hoc command job) start at a high value (512) and finish at the default value of 5 #14949

Open

bskou57 opened this issue Mar 5, 2024 · 2 comments

bskou57 commented Mar 5, 2024

Please confirm the following

  • I agree to follow this project's code of conduct.
  • I have checked the current issues for duplicates.
  • I understand that AWX is open source software provided for free and that I might not receive a timely response.
  • I am NOT reporting a (potential) security vulnerability. (These should be emailed to security@ansible.com instead.)

Bug Summary

Hi

From the AWX web interface I start an ad hoc command job on my inventory (around 4,000 hosts) with forks set to 512, but after a while (near the end of the job) it decreases to the default value of 5 forks.

Can you please advise?

Thanks for your help

AWX version

23.6.0

Select the relevant components

  • UI
  • UI (tech preview)
  • API
  • Docs
  • Collection
  • CLI
  • Other

Installation method

kubernetes

Modifications

no

Ansible version

2.14.2

Operating system

Redhat 8.8

Web browser

Chrome

Steps to reproduce

Run an ad hoc command (df -h) against a large inventory.
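
For reference, the same workload can be reproduced outside AWX with a plain ad hoc run; a minimal sketch (the inventory path is a placeholder):

# illustrative only: run the same command ad hoc with a high fork count
ansible all -i inventory/hosts -m ansible.builtin.command -a "df -h" --forks 512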

Expected results

ps auxw | grep -c ansible
Prints 512 lines

Actual results

ps auxw | grep -c ansible
Prints 5 lines near the end of job processing
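
One way to sample the actual worker count over the life of the run (illustrative; the bracketed pattern keeps grep from counting itself):

watch -n 2 'ps auxw | grep -c "[a]nsible"'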

Additional information

[root@a0bb6db8e77c project]# ansible --version
ansible [core 2.15.8]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.18 (main, Sep 7 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True

@fosterseth
Member

@sivel do you have some insight on the expected behavior when forks is set to 512? Should it eventually taper off to 5 processes as the number of hosts left dwindles?

@sivel
Member

sivel commented Mar 6, 2024

512 forks is an almost obscene number of forks. 25 or 50... maybe 100 if you have a really modern CPU and something like 256 GB of RAM. I'd have to assume that at 512 forks, the load average on the machine is through the roof.

The architecture in ansible-core currently uses a single core to orchestrate and manage all forks. Not only does it track the life cycle of each fork and launch new forks, it is also processing and displaying a lot of data. Launching forks also requires a lot of variable calculation. It's not truly asynchronous, so it's generally impossible to see exactly the number of forks you have configured. The more forks you specify, the more CPU-bound the orchestration becomes, and the less able it is to create new forks fast enough to stay at the configured fork count.

Also note, as indicated above, that forks are spawned per host per task. They aren't persistent, so a new fork is spawned for each host+task combination.

But in short, toward the end of an execution you should see fewer forks as the earlier hosts finish executing, ultimately dropping to 0. There is nothing that drops it back to the default fork level; it could just be happenstance that you are seeing that.
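
For context, the lower fork counts suggested above can be set either with the CLI flag shown earlier or in ansible.cfg; a minimal sketch, assuming a value in the 25-50 range is appropriate for the control node:

# ansible.cfg (illustrative only; size the value to your control node's CPU and memory)
[defaults]
forks = 50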
