server ip is marked as down when using different internal networks #1132
Comments
Hi Spider, same issue here. I've noticed that when you start up a container with an internal network only, the ports won't get displayed in a simple `docker ps`. Is it possible that the proxy can't fetch the port information as well?
When I run a `docker inspect` I can see the ExposedPorts, even on an internal network, but the "Ports" section is empty.
I'd appreciate it if anyone could verify that.
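One way to check this with the standard Docker CLI (the container name `myapp` is hypothetical; the empty second result reflects the behaviour reported above):

```console
$ docker inspect --format '{{json .Config.ExposedPorts}}' myapp
{"80/tcp":{}}
$ docker inspect --format '{{json .NetworkSettings.Ports}}' myapp
{}
```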
I'm digging... I just need to know how the values get filled into the template.
I think the problem is an empty .Address variable (https://github.com/jwilder/docker-gen/blob/a4f4f0167148a5ae52ce063474d850ef8b78f93c/generator.go). But when I inspect the containers, I get an empty array for the container on the internal network:

external: …

internal: …
I can verify your posts. So there is actually no workaround, right?
If you don't need a separate subnet but you have a separate docker-compose file, a simple workaround is setting the default network to join the nginx network. Add this at the bottom of your app's docker-compose file:

```yaml
networks:
  default:
    external:
      name: webproxy
```
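For context, a minimal sketch of a complete app compose file using this workaround (the service name and image are hypothetical; the `webproxy` network must already exist and be shared with nginx-proxy):

```yaml
version: "2"

services:
  app:
    image: nginx:alpine              # hypothetical app image
    environment:
      - VIRTUAL_HOST=app.example.com # tells nginx-proxy which vhost to generate

networks:
  default:
    external:
      name: webproxy                 # join the pre-existing proxy network
```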
Thanks for your response, but unfortunately I need a separate internal subnet for security purposes...
Ran into the same issue. Any plans on getting this fixed? Or is it working as intended / wontfix?
At the moment there's a workaround: just add a second (non-internal) network. But as gtaspider said, for security reasons generator.go needs a fix. I'm not sure whether that's possible, or why the .Address parameter was chosen instead of one that shows the exposed ports. P.S. I think it would fix #1066 as well.
Thanks @meron1122 for the pointer. I can confirm that removing all internal networks from my app's docker-compose file works. After removing all internal networks and declaring the default, external network, everything started working as expected. Mixing in the internal networks did not.

**For future Googlers.** Put the snippet from @meron1122's comment at the bottom of your docker-compose file and remove all other `networks` entries from each container's configuration. Please understand that there might be security implications, as pointed out by @gtaspider.

**Important notice.** Don't forget to create the external network:

```console
$ docker network create webproxy
```

**This might help.** You might want to stop all your services (…). I am only using the …
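A hedged note on the recreation step (the proxy container name `nginx-proxy` is an assumption): if your proxy container was started before the network existed, you can attach it without restarting, then recreate the app services so they join the new default network:

```console
$ docker network connect webproxy nginx-proxy  # attach the running proxy to the shared network
$ docker-compose up -d                         # recreate app services on that network
```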
I'm in the same boat :(
I'm experiencing the same issue for any container that exposes two ports (e.g. gitea: one for SSH and one for HTTP), even though it is in the same network as the nginx-proxy. I'm not sure if it's really related, but since a few people here have already debugged the surrounding code, I'm wondering if there's any advice for this scenario?
@muesli That can be solved by using PR #1157. I don't know if it also solves the problem mentioned by @gtaspider.
We should give it a shot. It seems to solve the issue (I just reviewed the changes), but I haven't tested it yet.
I can use multiple internal networks with nginx-gen and an older template. Here is the diff between the two: …
The fix by @webdevotion works and my containers are now proxied properly. It is very tempting to just …
@Redsandro …
@willtho89 thanks for clarifying. So to put this in perspective: all ports are open, and an attacker could find a vector in ports that are not supposed to be public. So while in theory B_critical_db should not be vulnerable over any port, in reality it just became a lot easier to probe for vulnerabilities. I hope we can find a solution that is not too complex to implement.
Does anyone know why https://github.com/hwellmann/nginx-proxy/blob/b61c84192960d937a5e5b517264efe9653f8f899/nginx.tmpl#L17 exists? I'm no nginx expert, but it feels somewhat pointless to hard-code a server as down; it appears the config builder isn't even trying to determine whether the service is up or not. I'm also unclear on what causes `Addresses` to be empty, since that is what causes this code path to be hit rather than the code path that does the actual port assignment. I would love to get this figured out so I can put nginx on an external network and an internal network, then put all of my services on an internal network only.
@MicahZoltu When I reviewed this code, I assumed it was there to block connections to the container when a certain unsafe/bridged/shared network type is connected, e.g. when a private isolated network for the container fails, to prevent this from happening. But in all honesty, I don't know.
@MicahZoltu Nginx acts as a proxy. This means that virtual hosts should be passed on to another resource. These other resources are the upstream sources. The upstream directive is a list of resources to forward the connection to. A host is linked to the upstream on line 125; then comes line 17. Because there is no IP address for the specific container (see the elseif on line 14), the resource is added to the upstream list as `down`.
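For illustration, a generated upstream block in this situation looks roughly like this (the hostname and the first address are hypothetical; the `down` placeholder is the one discussed in this thread):

```nginx
upstream app.example.com {
    # container reachable through a network shared with the proxy: real address
    server 172.18.0.5:80;
    # container visible only on an unreachable internal network: no address,
    # so the template emits a placeholder marked down
    server 127.0.0.1 down;
}
```

nginx never forwards traffic to a server marked `down`, which is why the vhost appears dead even when the container is running.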
@frederikbosch In this particular code path, the upstream has an IP address and nginx-proxy knows about it. When you look at the generated config, you can see …
Hmm, maybe I see what you are saying. Is …
I assumed in the right direction. 😎 |
This works for me, using separate networks for front-end and back-end containers:

nginx-proxy: …

wolf-static: …

networks: …

There are a whole bunch of other containers connecting to both networks, some just to the proxy-net and some just to the db-net. I don't know if the way you define the network (the 'internal / external' part) has an impact; I didn't use it. Here's how it shows up in `cat /etc/nginx/conf.d/default.conf` on nginx-proxy: …
@areaeuro If …
@Redsandro Please read the whole comment. Not all containers in the compose file were pasted.
@areaeuro I'm afraid I failed to communicate what I mean. What you seem to have is this: …

Separation of networks would be more like this: …
Hey @Redsandro |
@areaeuro Unfortunately there is no better solution. What you do is what we all do: sharing a single proxy network that connects everything to the proxy. I thought that you thought you had figured out a better solution, so I pointed out that it is the same solution, and that we need to keep looking for one with strict separation of networks. To my understanding, as of now it's still not possible. Which is slightly confusing to me, because @willtho89 made it sound like it's a (simple?) regression (#1132 (comment)). But none of the relevant PRs mentioned in this issue are merged, so I guess it's not that simple.
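To make the distinction concrete, a sketch of the shared-proxy-network pattern we all end up with (all service, image and network names are hypothetical):

```yaml
services:
  frontend_a:
    image: myfrontend:latest       # hypothetical
    environment:
      - VIRTUAL_HOST=a.example.com
    networks: [proxy, a_net]       # must join the shared proxy network to be reachable
  critical_db_a:
    image: postgres:11
    networks: [a_net]              # kept off the proxy network entirely

networks:
  proxy:
    external: true                 # the single network shared with nginx-proxy
  a_net:
    internal: true                 # strict isolation for the back end
```

Strict separation would mean nginx-proxy reaching `frontend_a` over `a_net` alone, without the shared `proxy` network — which is exactly what this issue shows doesn't work.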
Still relevant.
I've also just been hit by this, using docker-compose and connecting a container to multiple networks. I added the port I wanted exposed to my docker-compose file, and this seemed to make it work (far from an ideal solution). Checking with the command that @phhutter provided, I can now see the port is mapped: …

And in the nginx default.conf file: …
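A hedged sketch of what that change looks like (the service, image and network names are hypothetical, and it isn't clear from the comment whether `ports:` or `expose:` was used):

```yaml
services:
  app:
    image: myapp:latest    # hypothetical
    ports:
      - "8080:8080"        # publishing the port makes it show up in `docker ps`
    networks:
      - proxy-net
      - internal-net
```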
Networks marked as internal can't publish ports, and this is by design. More details here.
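(For reference, such a network is the result of the standard `--internal` flag, e.g. `docker network create --internal backend-net`; containers attached only to it have no route to the outside world.)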
According to the nginx upstream docs:

> down
> marks the server as permanently unavailable.

I suppose this may be useful for users who edit their nginx configuration file manually: they can mark a server down without losing the configuration. Is there any other purpose? For users who generate the configuration: just put the working servers in the configuration and skip the others.

The most disturbing part for me is:

```nginx
server 127.0.0.1 down;
```

What is 127.0.0.1? It's the nginx-proxy container itself. Is it supposed to be an upstream? Not in any case that I can think of. What does having such a line cause? Well, I've had:

```
[emerg] 28#28: invalid number of arguments in "upstream" directive
```

But also:

```
Generated '/etc/nginx/conf.d/default.conf' from 12 containers
Running 'nginx -s reload'
Error running notify command: nginx -s reload, exit status 1
Received event start for container 5c6cb0bf8e05
```

Yep, the whole LB is down, and no helpful error message about what is going on. I believe this fixes what most users have been facing in nginx-proxy#1132, as well as the regression introduced in nginx-proxy#1106.

> Deleted code is debugged code. — Jeff Sickel
Please try with #1302 and report your results ;)
This patch fixes a critical condition in which the whole LB is down:

```
[emerg] 28#28: invalid number of arguments in "upstream" directive
```

And:

```
Generated '/etc/nginx/conf.d/default.conf' from 12 containers
Running 'nginx -s reload'
Error running notify command: nginx -s reload, exit status 1
Received event start for container 5c6cb0bf8e05
```

What is 127.0.0.1? Isn't it the nginx-proxy container? How can it be an upstream at all?

Related: nginx-proxy#1132, nginx-proxy#1106. Comments (not OP) of: nginx-proxy#375. Maybe: nginx-proxy#1144
Hey guys, a common mistake is to forget to … You can double-check this with: …

BAD result: …

GOOD result: …

My fix was …
@gfiasco Yes, that is exactly the cause of the problem. … And make sure you …
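For anyone who wants to double-check which containers actually sit on the proxy network, a standard way to list them (the network name `webproxy` is an assumption, and the output line is purely illustrative):

```console
$ docker network inspect webproxy --format '{{range .Containers}}{{.Name}} {{end}}'
nginx-proxy app_1
```

If your app container is missing from that list, nginx-proxy has no route to it and will emit the `down` placeholder.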
I've also had this issue, and weirdly enough, starting the application container via … Manual … At this point I'm just glad I found a way that works, and I wanted to potentially help others who stumble onto this issue with my message.

Edit: I assume this means that the first docker-compose file worked because docker-compose put it on the same network as the nginx-proxy container when started from within the same directory (just a guess though).
Still relevant.
As I understand it, the nginx.tmpl template relies on exposed ports only and doesn't honor VIRTUAL_PORT if the container in question exposes only one port: …

Instead I think that: …

Something like this: …

Patch proposal: …
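For readers trying to follow the idea without the original snippets: a minimal sketch of what "honor VIRTUAL_PORT first" could look like in the template, assuming docker-gen's `coalesce`, `where`, and `first` helpers. This is an illustration, not the actual patch that became #1609:

```
{{/* Pick the upstream address: prefer VIRTUAL_PORT even when only one port is exposed */}}
{{ $port := coalesce $container.Env.VIRTUAL_PORT "80" }}
{{ $address := where $container.Addresses "Port" $port | first }}
{{ if $address }}
    server {{ $address.IP }}:{{ $address.Port }};
{{ else }}
    server 127.0.0.1 down;
{{ end }}
```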
@pini-gh thank you. |
@pini-gh a PR would be more than welcome 👍
Closed by #1609
Got a docker-compose file which has multiple internal Docker networks. The proxy seems to find the correct IP and set it in the correct upstream, but the server is marked as down, even though the comment says nginx was able to connect to the right network.

default.conf: …

When I remove the `down`, save the file and then reload nginx with `docker exec -it test_proxy_1 nginx -s reload`, it works like a charm. But if a container starts or stops, the file gets rewritten (this is why I want to use this tool...). I created a simple docker-compose file to show the problem: …
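A minimal sketch of the kind of setup that triggers the problem (not Spider's original file; the app image and hostname are hypothetical, and the socket mount is the one documented for jwilder/nginx-proxy):

```yaml
version: "2"

services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - frontend

  app:
    image: nginx:alpine        # hypothetical app
    environment:
      - VIRTUAL_HOST=app.local
    networks:
      - frontend               # shared with the proxy
      - backend                # internal-only network triggers the bug

networks:
  frontend:
  backend:
    internal: true
```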
This seems to be a bug, so I posted it here.
Thanks in advance,
Spider