beverlycodes opened this issue on Jun 16, 2015 · 5 comments
Labels: Bug (broken, incorrect, or confusing behavior) · Core (relates to code central or existential to Salt) · P2 (Priority 2) · Salt-SSH · severity-medium (3rd level, incorrect or bad functionality, confusing and lacks a workaround) · stale
I'm standing up a Salt master via salt-ssh. My process is as follows:
1. Use salt-ssh to apply salt.master and install the salt-master service on the target.
2. Check out my states and pillar data from git into /srv/salt and /srv/pillar on the target.
Problem is, once I populate /srv/pillar from git, that data starts getting merged into the local pillar data I've supplied from salt-ssh. All further calls from salt-ssh get pillar data I did not intend to be used during salt-ssh calls. Is this expected? Is there any way I can tell salt-ssh calls to ignore the contents of /srv/pillar on the target?
As an example, I have an sshd_config pillar item for use with the openssh formula under /srv/pillar on my local machine (the machine I run salt-ssh on). In my git repo that gets checked out into /srv/pillar on the target, I have a different sshd_config that I intend to be applied to all minions this master will manage. However, once that git repo has been checked out, its sshd_config pillar item starts getting applied whenever I call salt-ssh '*' state.highstate instead of the pillar item defined on my local machine.
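A minimal sketch of the collision, using a hypothetical layout (the actual file names and pillar contents in my setup differ, but the shape is the same): both the local pillar and the pillar cloned onto the target match every host with '*', so salt-ssh merges both.

```
# /srv/pillar/top.sls on my local machine (where salt-ssh runs):
base:
  '*':
    - openssh        # sshd_config intended for salt-ssh runs

# /srv/pillar/top.sls checked out from git onto the target master:
base:
  '*':
    - openssh        # a different sshd_config, intended for the
                     # minions this master will manage
```

Because both top files match '*', the target's copy is picked up on subsequent salt-ssh calls and its sshd_config wins out over the one I supplied locally.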
For anyone encountering this issue and needing a workaround, the simplest solution is to avoid '*' in your top files. Avoiding it in the pillar ensures that salt-ssh won't consider pillar data meant for minions, and avoiding it in salt ensures that salt-ssh won't apply states meant for minions.
In general terms, make certain that your salt-ssh roster targets won't get matched by anything in the salt and pillar files you clone into /srv/salt and /srv/pillar on the targets. It can be a little tedious to keep track of, but it's far less tedious than repeatedly locking salt-ssh out of a server because it applied the wrong sshd_config.
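As a sketch of that workaround (minion IDs here are hypothetical), the top file cloned onto the master can target its managed minions explicitly instead of matching everything:

```
# /srv/pillar/top.sls checked out onto the target master:
base:
  'minion-*':        # matches only the master's minions, not the
    - openssh        # roster IDs that salt-ssh connects as
```

With no '*' match, the cloned pillar is never merged into what salt-ssh sees for its roster targets.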
Strike all that. Something must not have been applied on one of my highstate calls. A couple calls later and I'm back to getting locked out of my master.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.