Arista EOS Salt Minion - Debug Log Flooded with Repeated Message #50074

Closed
bigpick opened this issue Oct 16, 2018 · 5 comments

@bigpick

commented Oct 16, 2018

Arista EOS salt-minion debug mode flooded

When running the salt-minion on Arista EOS in debug mode, the console is flooded with the same repeated messages. Possibly a recurrence of #49082?

[DEBUG   ] LazyLoaded state.apply
[DEBUG   ] schedule: Job job1 was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job job1 was scheduled with a max number of 1
[DEBUG   ] schedule: Job __mine_interval was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job __mine_interval was scheduled with a max number of 2
...

Setup

Arista EOS minion directly installed on Arista switch: salt-minion 2018.3.0-2772-g34b3b90 (Oxygen)
Salt-master: salt-master 2018.3.2 (Oxygen)

Steps to Reproduce Issue

Install the salt-minion according to https://docs.saltstack.com/en/latest/topics/installation/eos.html
Configure the master and minion for multi-master with failover:
Master

master_sign_pubkey: True

Minion

master: 172.16.0.1
verify_master_pubkey_sign: True
master_type: failover
master_alive_interval: 90
######   NAPALM connection settings   ######
############################################
napalm:
  driver: eos
  optional_args:
    eos_transport: socket

Copy the /etc/salt/pki/master_sign.pub to the switch's /etc/salt/pki/

Run the salt-master, and then start the minion in debug mode:
salt-minion -l debug

[DEBUG   ] schedule: Job job1 was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job job1 was scheduled with a max number of 1
[DEBUG   ] schedule: Job __mine_interval was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job __mine_interval was scheduled with a max number of 2
[DEBUG   ] schedule: Job __master_alive_172.16.0.1 was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job __master_alive_172.16.0.1 was scheduled with a max number of 1
[DEBUG   ] schedule: Job job1 was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job job1 was scheduled with a max number of 1
[DEBUG   ] schedule: Job __mine_interval was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job __mine_interval was scheduled with a max number of 2
[DEBUG   ] schedule: Job __master_alive_172.16.0.1 was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job __master_alive_172.16.0.1 was scheduled with a max number of 1
...
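To quantify which messages dominate the flood, one can count repeated lines in a captured minion log. A minimal diagnostic sketch (the log path passed in is whatever file you redirected the debug output to; `top_messages` is a hypothetical helper, not part of Salt):

```python
from collections import Counter

def top_messages(path, n=3):
    """Return the n most frequent 'schedule:' debug lines in a captured log."""
    with open(path) as f:
        counts = Counter(line.strip() for line in f if "schedule:" in line)
    return counts.most_common(n)
```

Running this against a capture of the output above would show the `job1`, `__mine_interval`, and `__master_alive_<IP>` pairs each repeating on every scheduler pass.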

Versions Report

(Provided by running salt --versions-report. Please also mention any differences in master/minion versions.)
Switch

Salt Version:
           Salt: 2018.3.0-2772-g34b3b90
 
Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: 2.4.2
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.8
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: 0.21.1
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.8
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.5 (default, Jun 15 2018, 13:03:35)
   python-gnupg: Not Installed
         PyYAML: 3.12
          PyZMQ: 15.3.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.4.2
            ZMQ: 4.1.4

Master

Salt Version:
           Salt: 2018.3.2
 
Dependency Versions:
           cffi: 1.11.5
       cherrypy: unknown
       dateutil: 2.6.1
      docker-py: Not Installed
          gitdb: 2.0.3
      gitpython: 2.1.8
          ioflo: Not Installed
         Jinja2: 2.10
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: 1.0.7
   msgpack-pure: Not Installed
 msgpack-python: 0.5.6
   mysql-python: Not Installed
      pycparser: 2.19
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.15rc1 (default, Apr 15 2018, 21:51:34)
   python-gnupg: 0.4.1
         PyYAML: 3.13
          PyZMQ: 17.1.2
           RAET: Not Installed
          smmap: 2.0.3
        timelib: Not Installed
        Tornado: 4.5.3
            ZMQ: 4.2.5
@bigpick


commented Oct 16, 2018

If I add mine_enabled: False to the minion configuration like so:

master: 172.16.0.1
verify_master_pubkey_sign: True
master_type: failover
master_alive_interval: 90
mine_enabled: False
######   NAPALM connection settings   ######
############################################
napalm:
  driver: eos
  optional_args:
    eos_transport: socket

the flooding is reduced to only the __master_alive_<IP> messages:

[DEBUG   ] LazyLoaded status.master
[DEBUG   ] schedule: Job __master_alive_172.16.0.1 was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job __master_alive_172.16.0.1 was scheduled with a max number of 1
[DEBUG   ] schedule: Job __master_alive_172.16.0.1 was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job __master_alive_172.16.0.1 was scheduled with a max number of 1
[DEBUG   ] schedule: Job __master_alive_172.16.0.1 was scheduled with jid_include, adding to cache (jid_include defaults to True)
[DEBUG   ] schedule: Job __master_alive_172.16.0.1 was scheduled with a max number of 1
...
@bigpick


commented Oct 16, 2018

After removing all the multi-master configuration, the issue persists: the same behavior repeats, but with the __mine_interval job. Setting mine_enabled: False in the minion's configuration works around this, which is fine for me since I don't plan on using the mine.

However, this is still an issue.
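Until the fix lands, a generic client-side mitigation is to deduplicate repeated log records with a stdlib `logging.Filter`. This is only an illustration of the technique, not the fix that went into Salt, and attaching a custom filter to the minion's logging would require patching its log setup:

```python
import logging

class DeduplicateFilter(logging.Filter):
    """Drop a record whose message matches one of the last `window` messages."""

    def __init__(self, window=10):
        super().__init__()
        self.window = window
        self.recent = []

    def filter(self, record):
        msg = record.getMessage()
        if msg in self.recent:
            return False  # suppress the duplicate
        self.recent.append(msg)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        return True
```

Attached to a handler via `handler.addFilter(DeduplicateFilter())`, this would collapse the repeated `schedule:` lines down to one occurrence per window.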

@garethgreenaway


commented Oct 16, 2018

@bigpick Thanks for the report. This issue has been fixed in #49104 and will be available in the Fluorine release. @rallytime there were some other fixes in that PR that aren't applicable to 2018.3 but should we backport the logging fix?

@rallytime


commented Oct 16, 2018

@garethgreenaway Yeah, that logging fix should be made against 2018.3. The fix should just be made directly against that branch, though, rather than trying to backport anything from #49104.

@garethgreenaway


commented Jan 9, 2019

Closing this out. If the problem persists, please comment and we'll re-open this issue, or feel free to open a new one.
