VIRTUAL_PORT is not filled in default.conf when container uses host networking #1270
Comments
I tested PR #1219, and it solves this issue like a charm. I'd be very glad to see this PR merged.
Option 1:
Option 2:
Thank you very much for this. Exactly what I needed. Have you thought about adding the image to Docker Hub? EDIT: I've forked your repo so that I could link it to Docker Hub and set up automatic image building :)
I think the reason PR #1219 wasn't accepted is that it doesn't address the root problem: docker-gen needs to be able to discern the correct internal Docker IP of the container using host networking, even when multiple containers on the Docker network are using host networking. Using `127.0.0.1` won't work in all cases, and if you are manually setting a Docker IP, you've just obviated the reason to use docker-gen.

As an example: `vpncontainer` and `clientcontainer` live on Docker network `vpnnetwork`, isolated from the bridge. nginx-proxy is connected to both `vpnnetwork` and the bridge, and is serving `clientcontainer`'s GUI. This can be set up manually by finding `vpncontainer`'s internal Docker IP and adding it to the `default.conf` inside the nginx-proxy container. However, `127.0.0.1` is going to fail, and you can't be 100% certain that `vpncontainer` will come up with the same internal Docker IP every time. (And frankly, if you are going to structure your Docker load that carefully, why use docker-gen/nginx-proxy at all?)

The "right" solution is to find the Docker network, find the IP address of the first 'host' container that was assigned an IP address, and use that IP when building the `default.conf`. That may just be easier said than done, I don't know.
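The topology described in this comment might be sketched as a Compose file roughly like the following. All service and image names are illustrative, and it assumes `clientcontainer` shares `vpncontainer`'s network namespace (a common VPN pattern), which is why `vpncontainer`'s IP is the address that matters:

```yaml
# Illustrative sketch only -- names and images are hypothetical, not from
# the original report.
services:
  vpncontainer:
    image: example/vpn
    networks:
      - vpnnetwork
  clientcontainer:
    image: example/client
    network_mode: "service:vpncontainer"  # client traffic goes via the VPN container
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    networks:
      - vpnnetwork  # to reach the client's GUI
      - default     # stands in for the bridge network mentioned above

networks:
  vpnnetwork:
    internal: true  # isolated from the bridge
```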
@ndrwstn I don't understand why you think […]

I know `127.0.0.1` won't work in containers that do not use host networking. In this case, just leave the […]. If a container uses host networking, I just need to set […]
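As a sketch of the setup this commenter has in mind, a host-networked service might set the `VIRTUAL_IP` variable proposed in PR #1219 like this. The image, hostname, and port below are placeholders, not values from the PR:

```yaml
# Hypothetical example of a host-networked service using the VIRTUAL_IP
# variable proposed in PR #1219. Image, hostname, and port are placeholders.
services:
  myservice:
    image: example/service
    network_mode: host           # no container IP on the Docker bridge
    environment:
      VIRTUAL_HOST: abc.example.com
      VIRTUAL_PORT: "8086"       # port the service listens on
      VIRTUAL_IP: "127.0.0.1"    # where nginx-proxy should find the upstream;
                                 # only meaningful if nginx-proxy can reach it
```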
That's the problem. Your PR offers a solution for a subset of the possible instances, when a solution to the superset of instances (grab the right internal IP for each instance) would solve both the host-networking problem and the others. That's why I don't think the PR is a great solution. That said, it's not that important to me anymore, as I've found another solution (Traefik) which seems to deal with these problems more effectively.
Description

I have a container with this config:

It uses host networking, so I provide `VIRTUAL_IP` to specify the exact upstream address. However, this service container cannot be reached via https://abc.example.com; it returns 502 Bad Gateway.

The log of `nginx-proxy` shows:

I think maybe there is something wrong with the generated `default.conf`. So I got inside the `nginx-proxy` container, and the relevant section in `/etc/nginx/conf.d/default.conf` is:

The upstream server does not fill in the value of `VIRTUAL_PORT` as I provided.

I changed it to `server 127.0.0.1:8086 down;` and performed `nginx -s reload`, but I still could not reach the service container via https://abc.example.com. Then I changed it to `server 127.0.0.1:8086;` and performed `nginx -s reload`, and the issue was solved: my service container is now publicly reachable via https://abc.example.com. So this is an issue with upstream auto-generation.

The expected behavior is to produce a config similar to […] when the user's container uses host networking.
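For illustration, going by the manual fix described above, the expected generated upstream block would presumably look something like this. The hostname and port are taken from this report; this is a sketch, not actual docker-gen output:

```nginx
# Sketch of the expected output: VIRTUAL_PORT (8086) appended to the
# VIRTUAL_IP address, instead of the port-less "server 127.0.0.1 down;"
# that was actually generated.
upstream abc.example.com {
    server 127.0.0.1:8086;
}
```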
Background Info

Configuration of the `nginx-proxy` service:

Content of `custom.conf`: