2015.8.3-1 minion hangs and never finishes connecting to salt-master #29453

Closed
jalons opened this issue Dec 4, 2015 · 13 comments

Labels
Pending-Discussion (needs more discussion before it can be closed or merged) · stale
Comments


jalons commented Dec 4, 2015

Spinning up a new salt environment on CentOS 6 with salt 2015.8.3-1. I also tried 2015.8.1-1 and had the same experience.

$ salt-minion --versions
Salt Version:
           Salt: 2015.8.3

Dependency Versions:
         Jinja2: 2.2.1
       M2Crypto: 0.20.2
           Mako: Not Installed
         PyYAML: 3.10
          PyZMQ: 2.2.0.1
         Python: 2.6.6 (r266:84292, Feb 22 2013, 00:00:18)
           RAET: Not Installed
        Tornado: 4.2.1
            ZMQ: 3.2.2
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
        libnacl: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
   python-gnupg: Not Installed
          smmap: Not Installed
        timelib: Not Installed

System Versions:
           dist: centos 6.4 Final
        machine: x86_64
        release: 2.6.32-358.el6.x86_64
         system: CentOS 6.4 Final

Stock master and minion configs, aside from setting master: localhost in the minion config. Launching the minion with salt-minion -l trace:

$ salt-minion -l trace | tee minion.log
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: MINION.domain.tld
[TRACE   ] None of the required configuration sections, 'logstash_udp_handler' and 'logstash_zmq_handler', were found the in the configuration. Not loading the Logstash logging handlers module.
[TRACE   ] The required configuration section, 'fluent_handler', was not found the in the configuration. Not loading the fluent logging handlers module.
[DEBUG   ] Configuration file path: /etc/salt/minion
[TRACE   ] Trying pysss.getgrouplist for 'root'
[TRACE   ] Trying generic group list for 'root'
[TRACE   ] Group list for user 'root': []
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO    ] Setting up the Salt Minion "MINION.domain.tld"
[DEBUG   ] Created pidfile: /var/run/salt-minion.pid
[DEBUG   ] Reading configuration from /etc/salt/minion
[TRACE   ] Loading core.hwaddr_interfaces grain
[TRACE   ] Loading core.hostname grain
[TRACE   ] Loading core.get_master grain
[TRACE   ] Loading core.pythonversion grain
[TRACE   ] Loading core.path grain
[TRACE   ] Loading core.get_server_id grain
[TRACE   ] Loading core.ip6 grain
[TRACE   ] Loading core.ip4 grain
[TRACE   ] Loading core.saltversion grain
[TRACE   ] Loading core.saltpath grain
[TRACE   ] Loading core.pythonexecutable grain
[TRACE   ] Loading core.fqdn_ip4 grain
[TRACE   ] Loading core.fqdn_ip6 grain
[TRACE   ] Loading core.ip6_interfaces grain
[TRACE   ] Loading core.ip4_interfaces grain
[TRACE   ] Loading core.append_domain grain
[TRACE   ] Loading core.os_data grain
[TRACE   ] 'systemd-detect-virt' could not be found in the following search path: ['/sbin', '/usr/sbin', '/bin', '/usr/bin', '/sbin', '/usr/sbin', '/opt/python-2.7.3/bin', '/opt/vertica/bin', '/root/bin', '/usr/local/bin']
[TRACE   ] Loading core.zmqversion grain
[TRACE   ] Loading core.saltversioninfo grain
[TRACE   ] Loading core.pythonpath grain
[TRACE   ] Loading core.id_ grain
[TRACE   ] Loading core.locale_info grain
[TRACE   ] Loading core.get_machine_id grain
[TRACE   ] Loading core.ip_interfaces grain
[TRACE   ] Device ram0 does not report itself as an SSD
[TRACE   ] Device ram1 does not report itself as an SSD
[TRACE   ] Device ram2 does not report itself as an SSD
[TRACE   ] Device ram3 does not report itself as an SSD
[TRACE   ] Device ram4 does not report itself as an SSD
[TRACE   ] Device ram5 does not report itself as an SSD
[TRACE   ] Device ram6 does not report itself as an SSD
[TRACE   ] Device ram7 does not report itself as an SSD
[TRACE   ] Device ram8 does not report itself as an SSD
[TRACE   ] Device ram9 does not report itself as an SSD
[TRACE   ] Device ram10 does not report itself as an SSD
[TRACE   ] Device ram11 does not report itself as an SSD
[TRACE   ] Device ram12 does not report itself as an SSD
[TRACE   ] Device ram13 does not report itself as an SSD
[TRACE   ] Device ram14 does not report itself as an SSD
[TRACE   ] Device ram15 does not report itself as an SSD
[TRACE   ] Device loop0 does not report itself as an SSD
[TRACE   ] Device loop1 does not report itself as an SSD
[TRACE   ] Device loop2 does not report itself as an SSD
[TRACE   ] Device loop3 does not report itself as an SSD
[TRACE   ] Device loop4 does not report itself as an SSD
[TRACE   ] Device loop5 does not report itself as an SSD
[TRACE   ] Device loop6 does not report itself as an SSD
[TRACE   ] Device loop7 does not report itself as an SSD
[TRACE   ] Device xvda reports itself as an SSD
[INFO    ] The salt minion is starting up
[INFO    ] Minion is starting as user 'root'
[DEBUG   ] AsyncEventPublisher PUB socket URI: ipc:///var/run/salt/minion/minion_event_f4fb6fd136_pub.ipc
[DEBUG   ] AsyncEventPublisher PULL socket URI: ipc:///var/run/salt/minion/minion_event_f4fb6fd136_pull.ipc
[INFO    ] Starting pub socket on ipc:///var/run/salt/minion/minion_event_f4fb6fd136_pub.ipc
[INFO    ] Starting pull socket on ipc:///var/run/salt/minion/minion_event_f4fb6fd136_pull.ipc
[DEBUG   ] Minion 'MINION.domain.tld' trying to tune in
[DEBUG   ] sync_connect_master
[DEBUG   ] Initializing new SAuth for ('/etc/salt/pki/minion', 'MINION.domain.tld', 'tcp://127.0.0.1:4506')
[DEBUG   ] Generated random reconnect delay between '1000ms' and '11000ms' (7857)
[DEBUG   ] Setting zmq_reconnect_ivl to '7857ms'
[DEBUG   ] Setting zmq_reconnect_ivl_max to '11000ms'
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'MINION.domain.tld', 'tcp://127.0.0.1:4506', 'clear')

ssgward commented Dec 4, 2015

@jalons - so are you in multi-master mode? (You referenced a ticket that mentions multi-master mode and said it was directly related.)


jalons commented Dec 4, 2015

Not multi-master, but it fails in the same way with the same behavior. @cachedout asked @syedaali to create a separate issue for this (#27152 (comment)), but I could not find one, so I created this issue since I appear to have hit the same problem.


ssgward commented Dec 4, 2015

@jalons - so simply having a minion on a master and running it in debug mode should reproduce the issue?


jalons commented Dec 4, 2015

@ssgward That's how I've managed to trigger it. On three systems now I've:

  • brought down the previous salt minion
  • removed the salt package
  • removed the salt-minion package
  • deleted /etc/salt/ and /var/cache/salt/*
  • installed salt, salt-minion, and salt-master (2015.8.3-1 and 2015.8.1-1 from http://repo.saltstack.com/yum/redhat/6/x86_64/2015.8/, along with the packages these RPMs require)
  • launched salt-master in the foreground with -l debug, completely stock config
  • launched the minion in the foreground after updating its config to point to localhost
  • experienced the hang waiting on AsyncZeroMQReqChannel initialization

Since I removed the cache and configuration directories, I originally did not include the tidbit that the previous minion version was 0.17.5.
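
For reference, here is a minimal shell sketch of those steps (a sketch only: the package names and repo URL are the ones mentioned in this thread, and the exact yum/service invocations are assumptions that may need adjusting for your environment):

service salt-minion stop                      # bring down the previous minion
yum remove -y salt salt-minion                # remove the old packages
rm -rf /etc/salt /var/cache/salt/*            # delete config and cache
yum install -y salt salt-minion salt-master   # 2015.8.x-1 from repo.saltstack.com/yum/redhat/6/x86_64/2015.8/
salt-master -l debug                          # stock config, foreground, in one terminal
echo 'master: localhost' >> /etc/salt/minion  # point the minion at localhost
salt-minion -l trace                          # in another terminal; hangs after "Initializing new AsyncZeroMQReqChannel ..."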


ssgward commented Dec 4, 2015

other questions:

  1. did you install via package or bootstrap?
  2. was this a clean install or upgrade?
  3. is your master service running? (I was able to reproduce the minion getting into a loop with the salt-master service off, which is the default after install, but with it running there were no problems.)

Many of the Salt dependencies in your --versions-report output are outdated and may be contributing to the problem. With a clean install via package, the correct dependencies get installed.


ssgward commented Dec 4, 2015

Oh, I see your latest comments. We must have posted at about the same time. Most of my questions were answered.


jalons commented Dec 4, 2015

@ssgward Good call on the versions being outdated. I'll try to start walking the versions up and get a pull request in against the .spec to depend on the correct versions.


ssgward commented Dec 4, 2015

Here's output from my --versions-report:

Salt Version:
           Salt: 2015.8.3

Dependency Versions:
         Jinja2: unknown
       M2Crypto: Not Installed
           Mako: Not Installed
         PyYAML: 3.11
          PyZMQ: 14.5.0
         Python: 2.6.6 (r266:84292, Jul 23 2015, 15:22:56)
           RAET: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.5
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
        libnacl: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
   python-gnupg: Not Installed
          smmap: Not Installed
        timelib: Not Installed

System Versions:
           dist: centos 6.7 Final
        machine: x86_64
        release: 2.6.32-431.el6.x86_64
         system: CentOS 6.7 Final

Again, a clean install, not an upgrade. Slightly different distro version than yours, but I don't think that should be a significant factor.


ssgward commented Dec 4, 2015

ZMQ version would be the first thing I'd upgrade and verify.
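
A hedged sketch of how that upgrade and verification might look here, assuming the newer zeromq and python-zmq packages come from the repo.saltstack.com repository mentioned in this thread (exact package names and versions may differ on other setups):

yum upgrade -y zeromq python-zmq                                        # or 'yum install -y' if not already present
python -c "import zmq; print zmq.zmq_version(), zmq.pyzmq_version()"   # libzmq and pyzmq versions Python actually loads
salt-minion --versions-report | grep -i zmq                             # should now show ZMQ 4.x and PyZMQ 14.x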

@ssgward ssgward added this to the Approved milestone Dec 4, 2015
@jfindlay jfindlay added the Pending-Discussion The issue or pull request needs more discussion before it can be closed or merged label Dec 4, 2015

jalons commented Dec 4, 2015

That appears to have resolved the issue.

Given that repo.saltstack.com exists and provides zeromq and python-zmq, would it be sufficient to change

https://github.com/saltstack/salt/blob/develop/pkg/rpm/salt.spec#L58 and https://github.com/saltstack/salt/blob/develop/pkg/rpm/salt.spec#L93 to be >= 14.5.0?

That in turn brings in zeromq as a requirement, since the SaltStack-provided python-zmq appears to depend on that version. I certainly defer to those maintaining the build environments on how they want to handle it, rather than simply submitting the aforementioned pull request.
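
For illustration only, a sketch of the kind of spec change being discussed, assuming those two lines are plain (Build)Requires entries for python-zmq with no existing version constraint; verify the actual contents of pkg/rpm/salt.spec before applying anything like this:

# hypothetical one-liner against a checkout of the salt repo
sed -i 's/\(Requires:[[:space:]]*python-zmq\)[[:space:]]*$/\1 >= 14.5.0/' pkg/rpm/salt.spec
grep -n 'python-zmq' pkg/rpm/salt.spec   # confirm the updated version constraint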

As an aside, can I edit my original message to remove the incorrect association with the other task?


ssgward commented Dec 4, 2015

@dmurphy18 - See comments/questions above.


ssgward commented Dec 4, 2015

@jalons - I believe you can edit your original message. Use the pencil icon in the upper right corner of the comment box.


stale bot commented Feb 19, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

@stale stale bot added the stale label Feb 19, 2018
@stale stale bot closed this as completed Feb 26, 2018