
hgfs fileserver backend spawns too many hg processes #8811

Closed
anitakrueger opened this issue Nov 25, 2013 · 6 comments · Fixed by #8838
Labels
Bug (broken, incorrect, or confusing behavior), severity-medium (3rd level: incorrect or bad functionality, confusing and lacks a workaround)

Comments

@anitakrueger
Contributor

Trying to use the hgfs fileserver backend, I found that my salt master had spawned a whopping 412 /usr/bin/hg processes:

root     25380 22749  0 14:03 ?        00:00:00 /usr/bin/python /usr/bin/hg serve --cmdserver pipe --config ui.interactive=True -R /var/cache/salt/master/hgfs/9a09adc81b45dbc4d8dbde61eb39132e
root     25381 22749  0 14:03 ?        00:00:00 /usr/bin/python /usr/bin/hg serve --cmdserver pipe --config ui.interactive=True -R /var/cache/salt/master/hgfs/5b83993f7df3e27a6ed87a404cce64f8

My hgfs configuration looks like this:

fileserver_backend:
  - hg
  - roots

hgfs_remotes:
  - http://hgserver/dev/ops/salt/environments
  - http://hgserver/dev/ops/salt/profiles-common

hgfs_root: sls

hgfs_branch_method: branches

My Salt and OS versions are below:

           Salt: 0.17.2
         Python: 2.7.3 (default, Sep 26 2013, 20:03:06)
         Jinja2: 2.7.1
       M2Crypto: 0.21.1
 msgpack-python: 0.1.10
   msgpack-pure: Not Installed
       pycrypto: 2.4.1
         PyYAML: 3.10
          PyZMQ: 13.0.0
            ZMQ: 3.2.2
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.3 LTS
Release:    12.04
Codename:   precise

Any idea what causes this? The number of processes only seems to increase. While typing out this issue, I now have 421 hg processes on my salt master.
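
For anyone wanting to confirm the leak rate on their own master, a quick standalone check (not part of Salt, and assuming pgrep is available) is to count the hg command-server processes once a minute:

# Standalone sketch: count "hg serve --cmdserver" processes once a minute
# to confirm the leak rate. Not part of Salt; pgrep is assumed to exist.
import subprocess
import time

while True:
    try:
        count = subprocess.check_output(
            ['pgrep', '-c', '-f', 'hg serve --cmdserver']
        ).strip()
    except subprocess.CalledProcessError:
        count = b'0'  # pgrep exits non-zero when nothing matches
    print(time.strftime('%H:%M:%S'), count.decode())
    time.sleep(60)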

@deftpunk

I have noticed this as well, though I have 670+ hg processes. I am seeing a rate of about 2-3 new processes every minute.

@anitakrueger
Contributor Author

I actually went back to cloning the repo and away from hgfs for now, since it was killing the performance on my machine :(

@deftpunk

So far I have been able to characterize the issue as follows:

  1. a 'stop salt-master' kills all of the 'hg serve --cmdserver ...' processes.
  2. running the salt-master in debug or not makes no difference.
  3. I was able to correlate the following debug messages with new processes:

[DEBUG ] Updating fileserver cache
[DEBUG ] MasterEvent PUB socket URI: ipc:///var/run/salt/master/master_event_pub.ipc
[DEBUG ] MasterEvent PULL socket URI: ipc:///var/run/salt/master/master_event_pull.ipc

https://github.com/saltstack/salt/blob/develop/salt/fileserver/__init__.py#L142

I am running the exact same Salt/OS configuration as anitakrueger.
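
The "Updating fileserver cache" line above comes from the master's periodic fileserver update, which calls each backend's update() on every pass. If the hgfs backend opens a new hg command server (the hg serve --cmdserver pipe processes listed above) on each pass and never closes it, one extra process is left behind per remote per interval, which matches the 2-3 per minute rate. A rough sketch of that failure pattern, assuming python-hglib (which is what spawns the command-server processes); names and paths here are illustrative, not the actual salt/fileserver/hgfs.py code:

# Illustrative sketch only -- not the actual salt/fileserver/hgfs.py code.
import hglib

CACHE_DIRS = [
    '/var/cache/salt/master/hgfs/9a09adc81b45dbc4d8dbde61eb39132e',
    '/var/cache/salt/master/hgfs/5b83993f7df3e27a6ed87a404cce64f8',
]

def update():
    # Called on every 'Updating fileserver cache' pass by the master loop.
    for cachedir in CACHE_DIRS:
        repo = hglib.open(cachedir)  # spawns 'hg serve --cmdserver pipe ...'
        repo.pull()                  # fetch new changesets from the remote
        # Bug pattern: without repo.close(), the command server keeps
        # running, so each pass leaves one extra hg process per remote.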

@basepi
Contributor

basepi commented Nov 25, 2013

Thanks for the info. hgfs is pretty young, and obviously has some kinks to be worked out. Looks like the processes that are used to update hgfs are just not exiting properly, which is absolutely not a good thing. We'll look into this.

@tmessi
Contributor

tmessi commented Nov 26, 2013

I think I see the problem: a few functions are calling repo.open() without calling repo.close() when finished with the repo. Expect a PR soon...
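
For context, the pattern being described would look roughly like the following; this is a hedged sketch assuming python-hglib (whose client owns the hg serve --cmdserver child process), not the actual PR:

# Sketch of the fix pattern: always close the hglib client when done,
# otherwise its 'hg serve --cmdserver' child process keeps running.
import hglib

def pull_remote(cachedir):       # hypothetical helper name
    repo = hglib.open(cachedir)  # starts the command server
    try:
        repo.pull()              # whatever work the backend function needs
    finally:
        repo.close()             # shuts the command server down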

@basepi
Contributor

basepi commented Nov 26, 2013

Awesome, thanks for looking into this!

basepi pushed a commit that referenced this issue Dec 5, 2013
This should resolve #8811 and possibly #8810.