Unexpected pillar behavior when setting up a master with salt-ssh #24727

Closed
beverlycodes opened this issue Jun 16, 2015 · 5 comments
@beverlycodes

I'm standing up a Salt master via salt-ssh. My process is as follows:

  • Use salt-ssh to apply the salt.master state, installing the salt-master service on the target (see the sketch below)
  • Check out my states and pillar data from git into /srv/salt and /srv/pillar on the target

The problem is that once I populate /srv/pillar from git, that data starts getting merged into the local pillar data I've supplied via salt-ssh. All further salt-ssh calls receive pillar data I never intended to be used during salt-ssh runs. Is this expected? Is there any way to tell salt-ssh to ignore the contents of /srv/pillar on the target?
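
Roughly, the workflow looks like this (the roster alias and repository URLs are placeholders, not my real setup):

    # Apply the salt.master state over SSH (roster alias "newmaster" is a placeholder)
    salt-ssh newmaster state.sls salt.master

    # Check out states and pillar data onto the target (repo URLs are placeholders)
    salt-ssh newmaster cmd.run 'git clone https://example.com/states.git /srv/salt'
    salt-ssh newmaster cmd.run 'git clone https://example.com/pillar.git /srv/pillar'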

As an example, I have an sshd_config pillar item for use with the openssh formula under /srv/pillar on my local machine (the machine I run salt-ssh from). In the git repo that gets checked out into /srv/pillar on the target, I have a different sshd_config that I intend to apply to all minions this master will manage. However, once that repo has been checked out, its sshd_config pillar item gets applied whenever I call salt-ssh '*' state.highstate, instead of the pillar item defined on my local machine.
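
To illustrate, both pillar trees carry a top.sls along these lines (a simplified sketch, not my real files):

    # /srv/pillar/top.sls on my local machine (meant for salt-ssh runs)
    base:
      '*':
        - sshd_config

    # /srv/pillar/top.sls checked out onto the target (meant for future minions)
    base:
      '*':
        - sshd_config

Both files match '*', so once the target's copy exists, its sshd_config item is the one applied during salt-ssh runs.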

@arroyoc arroyoc added the info-needed waiting for more info label Jun 18, 2015
@arroyoc arroyoc added this to the Blocked milestone Jun 18, 2015

arroyoc commented Jun 18, 2015

@RyanFields Thanks for your report. What is your salt --versions-report output?

@beverlycodes (Author)

@arroyoc Sorry. I keep forgetting to append that.

           Salt: 2015.5.0
         Python: 2.7.6 (default, Sep  9 2014, 15:04:36)
         Jinja2: 2.7.3
       M2Crypto: 0.22
 msgpack-python: 0.4.2
   msgpack-pure: Not Installed
       pycrypto: 2.6.1
        libnacl: Not Installed
         PyYAML: 3.11
          ioflo: Not Installed
          PyZMQ: 14.3.1
           RAET: Not Installed
            ZMQ: 4.0.5
           Mako: Not Installed


arroyoc commented Jun 18, 2015

@RyanFields No problem. Thanks!

@arroyoc arroyoc added Bug broken, incorrect, or confusing behavior severity-medium 3rd level, incorrect or bad functionality, confusing and lacks a work around Core relates to code central or existential to Salt Salt-SSH P2 Priority 2 and removed info-needed waiting for more info labels Jun 18, 2015
@arroyoc arroyoc modified the milestones: Approved, Blocked Jun 18, 2015
@beverlycodes (Author)

For anyone encountering this issue and needing a workaround, the simplest approach is to avoid '*' in your top files. Avoiding it in the pillar top file ensures that salt-ssh won't pick up pillar data meant for minions, and avoiding it in the state top file ensures that salt-ssh won't apply states meant for minions.

In general terms, make certain that your salt-ssh roster targets won't be matched by anything in the state and pillar trees you clone into /srv/salt and /srv/pillar on the targets. It can be a little tedious to keep track of, but it's far less tedious than repeatedly locking salt-ssh out of a server because it applied the wrong sshd_config.
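
For example, the pillar top files might be split along these lines (the match patterns are placeholders for whatever your minion IDs and roster IDs actually are):

    # /srv/pillar/top.sls cloned onto the target: match only managed minions
    base:
      'minion-*':
        - sshd_config

    # /srv/pillar/top.sls on the salt-ssh machine: match only roster targets
    base:
      'newmaster':
        - sshd_config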

Strike all that. Something must not have been applied on one of my highstate calls. A couple of calls later, I'm back to getting locked out of my master.


stale bot commented Nov 20, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

@stale stale bot added the stale label Nov 20, 2017
@stale stale bot closed this as completed Nov 27, 2017