
Duplicate Salt instances on Minion Upgrade #41646

Closed
luka-page opened this issue Jun 8, 2017 · 4 comments

Labels: Duplicate (Duplicate of another issue or PR - will be closed)

Comments

luka-page commented Jun 8, 2017

Description of Issue/Question

Hi,

We're in the process of upgrading our instance from 2016.11.3 to 2016.11.5, using the state below to upgrade the minions. However, once it has run, the previous version of salt-minion is still running. The only way I've managed to rectify this is by SSHing into the minion, stopping the salt-minion service, and then killing any remaining processes. That isn't ideal for a large estate like ours, and it has happened on every minion I've upgraded. Is there a suggested way to control this? Has this been confirmed as a bug at all?

install_new_minion:
  pkg.installed:
    - name: salt-minion
    - version: 2016.11.5-3.el6

salt-minion:
  service.running:
    - enable: True
    - reload: True

As you can see, I'm reloading the service in my state, but this doesn't resolve the issue: a test.version report returns both the new and the old version of the minion until all of the old processes have been killed.
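For reference, the check I'm running from the master is along these lines (the minion target is illustrative):

salt 'minion01' test.version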

Versions Report

(Provided by running salt --versions-report. Please also mention any differences in master/minion versions.)
Salt-master version report:

Salt Version:
Salt: 2016.11.3

Dependency Versions:
cffi: 1.8.2
cherrypy: 3.2.2
dateutil: 1.4.1
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.8
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: 0.20.2
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: Not Installed
pycparser: 2.14
pycrypto: 2.6.1
pygit2: Not Installed
Python: 2.6.6 (r266:84292, Jul 23 2015, 05:13:40)
python-gnupg: Not Installed
PyYAML: 3.10
PyZMQ: 14.3.1
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 3.2.5

System Versions:
dist: oracle 6.7
machine: x86_64
release: 3.8.13-98.5.2.el6uek.x86_64
system: Linux
version: Oracle Linux Server 6.7

I don't really fancy having to SSH into 100 minions to kill the processes :(

Thanks,

Luka Page

Ch3LL (Contributor) commented Jun 8, 2017

@luka-page this is a duplicate of #40011, which is fixed by #40041. I would suggest adding some logic to your state to record the PIDs before the upgrade and then kill those old PIDs after the upgrade, although I have not tested this. Another approach would be to add a state that patches your minion with the PR first and then upgrades the minion.
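Untested, but roughly something along these lines; the temp file path and the kill signal are just illustrative:

record_old_minion_pids:
  cmd.run:
    # record the PIDs of the currently running salt-minion processes
    - name: pgrep -f salt-minion > /tmp/salt-minion-pids.old || true

install_new_minion:
  pkg.installed:
    - name: salt-minion
    - version: 2016.11.5-3.el6
    - require:
      - cmd: record_old_minion_pids

kill_old_minion_pids:
  cmd.run:
    # kill only the PIDs recorded before the upgrade; note this can also
    # kill the old minion process running this job, so it may not return cleanly
    - name: xargs -r kill -9 < /tmp/salt-minion-pids.old
    - require:
      - pkg: install_new_minion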

Ch3LL closed this as completed Jun 8, 2017
Ch3LL added the Duplicate label Jun 8, 2017
Ch3LL added this to the Blocked milestone Jun 8, 2017
luka-page (Author) commented:

Hi @Ch3LL

I tried patching the minion with the PR, but no luck unfortunately; we still get the additional processes. As stated in #40148, I think the yum install ends up using the packaged version. Any other suggestions?

Regards,

Ch3LL reopened this Jun 13, 2017
Ch3LL (Contributor) commented Jun 13, 2017

What about the other workaround I mentioned, scheduling a force-kill of the PIDs that aren't shutting down properly?
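If the kill would also take out the process running the state, one option might be to queue the force-kill so it fires after the job has returned, e.g. via at (untested; assumes atd is available, and the delay and file path are illustrative):

kill_old_minion_pids_later:
  cmd.run:
    # queue the force-kill a couple of minutes out so the current job can finish
    - name: echo "xargs -r kill -9 < /tmp/salt-minion-pids.old" | at now + 2 minutes
    - require:
      - pkg: install_new_minion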

luka-page (Author) commented:

Hi @Ch3LL

Thanks for your response. I ended up writing a script outside of Salt to do the upgrade: it killed the old PIDs and then performed the upgrade. It was a bit more hassle than using Salt to do it, but at least we're upgraded.
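Something along these lines (a rough sketch of the approach rather than the exact script; the host list file, commands, and package version are illustrative):

#!/bin/bash
# Out-of-band upgrade: kill the old salt-minion processes first,
# then install the new package and start the service again.
while read -r host; do
  # -n stops ssh from swallowing the host list on stdin
  ssh -n "$host" '
    service salt-minion stop || true
    pkill -9 -f salt-minion || true
    yum -y install salt-minion-2016.11.5-3.el6
    service salt-minion start
  '
done < minion-hosts.txt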

Thanks,
