
Salt master behind ELB or proxy #54297

Closed
shortstack opened this issue Aug 23, 2019 · 13 comments
Comments

@shortstack

shortstack commented Aug 23, 2019

Description of Issue

Would like to have 1 or more Salt masters in private subnets, behind a proxy or load balancer, not necessarily for load balancing, but just as an added layer of security in front of the systems.

Have looked at a dozen other issues trying to do this, and they all went cold or are outdated. Hoping someone has made this possible by now. I'd rather not have these systems sitting in public subnets.

Thought about setting up a Syndic but that kind of leaves us in the same place--with a master open to the public.

Setup

Salt master(s) running in AWS on Ubuntu 16.04 in private subnet(s)
Set up either behind Nginx or an ELB with TCP listeners in public subnet(s)
Set transport: tcp in master and minion configs.
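
For reference, a minimal sketch of that proxy setup (addresses and hostnames below are placeholders, not taken from this issue) looks roughly like this: an nginx stream block on the public-subnet proxy forwarding both Salt ports to the master, and transport: tcp set on both sides.

# nginx.conf on the proxy (sketch) -- raw TCP pass-through for the publish (4505) and request (4506) ports
stream {
    server {
        listen 4505;
        proxy_pass 10.0.1.10:4505;   # private IP of the Salt master (placeholder)
        proxy_timeout 1h;            # Salt connections are long-lived; avoid short idle timeouts
    }
    server {
        listen 4506;
        proxy_pass 10.0.1.10:4506;
        proxy_timeout 1h;
    }
}

# /etc/salt/master and /etc/salt/minion (excerpt)
transport: tcp

# /etc/salt/minion only: point the minion at the proxy/ELB public endpoint (placeholder hostname)
master: salt-proxy.example.com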

Versions Report

Salt Version:
Salt: 2019.2.0

Dependency Versions:
cffi: Not Installed
cherrypy: 3.2.3
dateutil: 2.4.2
docker-py: Not Installed
gitdb: 0.6.4
gitpython: 1.0.1
ioflo: Not Installed
Jinja2: 2.8
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 1.0.3
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.12 (default, Nov 12 2018, 14:36:49)
python-gnupg: 0.3.8
PyYAML: 3.11
PyZMQ: 15.2.0
RAET: Not Installed
smmap: 0.9.0
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4

System Versions:
dist: Ubuntu 16.04 xenial
locale: UTF-8
machine: x86_64
release: 4.4.0-1087-aws
system: Linux
version: Ubuntu 16.04 xenial

@shortstack
Author

so far, just testing with 1 master behind either an ELB or an nginx TCP stream

keys get exchanged and accepted, salt-key shows minions

but beyond that, no commands can be run, no minions return

the debug logs just say that the connection is made and then immediately closed/dropped
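
For anyone reproducing this, the kind of debug output shown in the next comment can be captured by stopping the minion service and running the minion in the foreground with debug logging:

salt-minion -l debug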

@shortstack
Author

2019-08-23 20:11:51,306 [salt.crypt       :207 ][DEBUG   ][19307] salt.crypt.get_rsa_pub_key: Loading public key
2019-08-23 20:11:51,315 [salt.crypt       :868 ][DEBUG   ][19307] Decrypting the current master AES key
2019-08-23 20:11:51,315 [salt.crypt       :199 ][DEBUG   ][19307] salt.crypt.get_rsa_key: Loading private key
2019-08-23 20:11:51,316 [salt.crypt       :797 ][DEBUG   ][19307] Loaded minion key: /etc/salt/pki/minion/minion.pem
2019-08-23 20:11:51,319 [salt.crypt       :207 ][DEBUG   ][19307] salt.crypt.get_rsa_pub_key: Loading public key
2019-08-23 20:11:51,320 [salt.transport.tcp:308 ][DEBUG   ][19307] Closing AsyncTCPReqChannel instance
2019-08-23 20:11:51,321 [salt.transport.tcp:1058][DEBUG   ][19307] tcp stream to x.x.x.x:xxxx closed, unable to recv
2019-08-23 20:12:41,103 [salt.minion      :981 ][ERROR   ][19307] Minion unable to successfully connect to a Salt Master.

@garethgreenaway
Contributor

@shortstack Thanks for the report. I've had good luck in the past with the Salt master sitting behind an HAProxy load balancer using the standard zeromq setup. Make sure you're setting up both Salt master ports, 4505 and 4506, on the load balancer.
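
Not a config from this thread, but a minimal HAProxy TCP pass-through along those lines (the master address is a placeholder) would look roughly like:

# haproxy.cfg (sketch) -- forward both Salt ports straight through in TCP mode
defaults
    mode tcp
    timeout connect 5s
    timeout client  1h      # Salt connections stay open; keep idle timeouts generous
    timeout server  1h

frontend salt_publish
    bind *:4505
    default_backend bk_salt_publish

frontend salt_request
    bind *:4506
    default_backend bk_salt_request

backend bk_salt_publish
    server master1 10.0.1.10:4505 check

backend bk_salt_request
    server master1 10.0.1.10:4506 check

With the default zeromq transport, the minions' master: setting simply points at the load balancer's address; no transport: tcp is needed.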

@garethgreenaway garethgreenaway added this to the Approved milestone Aug 26, 2019
@garethgreenaway garethgreenaway added the Question label Aug 26, 2019
@shortstack
Author

shortstack commented Aug 26, 2019

thank you, @garethgreenaway!! looking through docs right now. if you have any sample configs that have worked for you, do you mind sharing?

struggle-bus-ing to find anything zeromq and haproxy related. and using just the tcp listener for haproxy results in a timeout for the minions.

@shortstack
Author

DISREGARD! it's alive! thank you!

@garethgreenaway
Contributor

@shortstack Awesome! Glad it's working. If everything is good please feel free to close this issue out. Thanks!

@shortstack
Author

@garethgreenaway did you ever have issues with runners / presence detection in this scenario? transport is the default / zeromq setup. i enabled presence detection on the master. everything else works fine, but manage.alived/present/joined/allowed all return zero minions, despite them all being up (with manage.up) and available.
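
For context, presence detection is a master-side option that is queried through the manage runner; a minimal sketch of the setup being described here (assuming the standard option name):

# /etc/salt/master (excerpt)
presence_events: True      # have the master emit salt/presence/* events on its event bus

# then, on the master:
salt-run manage.alived     # should list minions seen via presence
salt-run manage.up         # contacts minions directly; works in this setup per the comment above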

@shortstack shortstack reopened this Sep 8, 2019
@garethgreenaway
Contributor

@shortstack not that I recall. The one caveat I do remember from this approach is that the minions won't be connected to both masters at the same time, so things like this likely won't work on multiple masters, just the one that the minion is connected to.

@shortstack
Author

current setup is only 1 master behind haproxy, so they are all going to the same one. what are the nuts and bolts behind presence detection? does that work via zeromq as well? could it maybe be getting blocked somewhere that i'm not aware of?

minions are distributed across multiple cloud environments.

i have a similar setup in another env, master and minions are the same versions as the one above. only 2 differences are 1) no haproxy in the middle, going straight to the master, and 2) all of the systems (master and minions) are on the same network.

@garethgreenaway
Contributor

If it's one master and presence isn't working, that is weird. Have you tried looking at the event runner to see if the presence events are ending up in the queue?
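
One quick way to do that check (a sketch, run directly on the master):

# stream the master event bus and look for salt/presence/present and salt/presence/change tags
salt-run state.event pretty=True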

@shortstack
Author

yep =/

salt/run/20190909150519387060/new       {
  "_stamp": "2019-09-09T15:05:19.790032",
  "fun": "runner.manage.alived",
  "fun_args": [],
  "jid": "20190909150519387060",
  "user": "sudo_ubuntu"
}
salt/run/20190909150519387060/ret       {
  "_stamp": "2019-09-09T15:05:19.812438",
  "fun": "runner.manage.alived",
  "fun_args": [],
  "jid": "20190909150519387060",
  "return": [],
  "success": true,
  "user": "sudo_ubuntu"
}

@stale

stale bot commented Jan 7, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

@stale stale bot added the stale label Jan 7, 2020
@stale stale bot closed this as completed Jan 14, 2020