
VIRTUAL_PORT is not filled in default.conf when container uses host networking #1270

Closed
YanzheL opened this issue May 1, 2019 · 8 comments · Fixed by #2222

YanzheL commented May 1, 2019

Description

I have a container with this config

version: '3.6'
services:
  jellyfin:
    image: jellyfin
    network_mode: 'host'
    volumes:
      - ./config:/config
      - ./cache:/cache
      - ./netdisk:/media
    restart: always
    environment:
      - VIRTUAL_HOST=abc.example.com
      - VIRTUAL_IP=127.0.0.1
      - VIRTUAL_PORT=8096
      - SSL_POLICY=Mozilla-Modern
      - HTTPS_METHOD=noredirect
    privileged: true

It uses host networking, so I provide VIRTUAL_IP to specify the exact upstream address.

However, this service container cannot be reached via https://abc.example.com. It returns 502 Bad Gateway.

The log of nginx-proxy shows

proxy_1  | nginx.1    | abc.example.com 10.246.8.238 - - [01/May/2019:03:07:25 +0000] "GET /favicon.ico HTTP/2.0" 502 575 "https://abc.example.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36"
proxy_1  | nginx.1    | 2019/05/01 03:07:25 [error] 2627#2627: *15370 no live upstreams while connecting to upstream, client: 10.246.8.238, server: abc.example.com, request: "GET / HTTP/2.0", upstream: "http://abc.example.com/", host: "abc.example.com"

I suspected something was wrong with the generated default.conf, so I went inside the nginx-proxy container and found the relevant section of /etc/nginx/conf.d/default.conf:

upstream abc.example.com {
    ## Can be connected with "host" network
    # jellyfindoc_jellyfin_1
    server 127.0.0.1 down;
}

The upstream server entry does not include the VIRTUAL_PORT value I provided.

I changed it to server 127.0.0.1:8096 down; and ran nginx -s reload, but I still could not reach the service container via https://abc.example.com.

Then I changed it to server 127.0.0.1:8096; and ran nginx -s reload again, and the issue was solved. My service container is now publicly reachable via https://abc.example.com.
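If you want to apply the same manual workaround, here is a minimal sketch from the host, assuming the proxy container is named proxy_1 (adjust the name to whatever docker ps shows) and that the service listens on port 8096:

# rewrite the generated upstream entry inside the running nginx-proxy container
docker exec proxy_1 sed -i 's|server 127.0.0.1 down;|server 127.0.0.1:8096;|' /etc/nginx/conf.d/default.conf
# reload nginx so the edit takes effect
docker exec proxy_1 nginx -s reload

Keep in mind that docker-gen regenerates default.conf on the next container event, so this edit is only temporary.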

So this is an issue with upstream auto-generation.

The expected behavior is to produce a config similar to

upstream VIRTUAL_HOST {
    ## Can be connected with "host" network
    # jellyfindoc_jellyfin_1
    server VIRTUAL_IP:VIRTUAL_PORT;
}

when the user's container uses host networking.

Background Info

Configuration of nginx-proxy service

version: '3.6'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    network_mode: host
    volumes:
      - ./custom.conf:/etc/nginx/conf.d/custom.conf
      - ./ssl/certs:/etc/nginx/certs/:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro

content of custom.conf

client_max_body_size 0;
proxy_request_buffering off;
proxy_read_timeout  36000s;
proxy_send_timeout  36000s;

YanzheL commented May 1, 2019

Some relevant issues:
#1219
#832

YanzheL commented May 1, 2019

I tested PR #1219, and it solves this issue like a charm. I would be very glad to see this PR merged.

hislopzach commented May 8, 2019

@YanzheL I would like to use the change implemented by PR #1219. How do I go about that?

YanzheL commented May 9, 2019

@YanzheL I would like to use the change implemented by PR #1219. How do I go about that?

Option 1:

  1. Fork this repository

  2. Create a "reverse" pull request, i.e. from the source branch of support VIRTUAL_IP env variable #1219 into your forked repository.

  3. Accept and merge the PR you created in your repository.

  4. Build the image yourself.

    For example:

    cd {path_to_your_forked_repo}
    docker build -t my-nginx-proxy .
    # or use this if you need the alpine version
    docker build -f Dockerfile.alpine -t my-nginx-proxy .

    Then use the image my-nginx-proxy in your project.

Option 2:

  1. Clone my forked repository; I've already done steps 1, 2, and 3.

  2. Follow step 4 in option 1; a rough command sketch follows below.
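    Roughly, the whole Option 2 flow looks like this (the fork URL is a placeholder; substitute the actual repository address):

    # clone the fork that already contains the merged change
    git clone <url-of-forked-repo> nginx-proxy
    cd nginx-proxy
    docker build -t my-nginx-proxy .
    # or build the alpine variant
    docker build -f Dockerfile.alpine -t my-nginx-proxy .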

Freekers commented Jun 11, 2019

Thank you very much for this. Exactly what I needed. Have you thought about adding the image to Docker Hub?

EDIT: I've forked your repo so that I could link it to Docker Hub and set up automatic image building :)
https://hub.docker.com/r/freekers/nginx-proxy
Thanks!

ndrwstn commented Jul 1, 2019

I think the reason PR #1219 wasn't accepted is that it doesn't address the root problem. docker-gen needs to be able to discern the correct internal docker IP of the container using host networking, even if multiple containers on the docker network are using host networking. Using '127.0.0.1' won't work in all cases, and if you are manually setting a docker IP, you've just obviated the reason to use docker-gen.

As an example, 'vpncontainer' and 'clientcontainer' live on the docker network 'vpnnetwork', isolated from bridge. nginx-proxy is connected to vpnnetwork and bridge and is serving clientcontainer's GUI. This can be set up manually by finding vpncontainer's internal docker IP and adding it by hand to the default.conf inside the nginx-proxy container. However, 127.0.0.1 is going to fail, and you can't be 100% certain that vpncontainer will come up with the same internal docker IP every time. (And frankly, if you are going to structure your docker load that carefully, why even use docker-gen/nginx-proxy anyway?)

The "right" solution is to find the docker network, find the IP address of the first 'host' container (which was assigned an IP address), and use that IP when building the default.conf. That may just be easier said than done, I don't know.

YanzheL commented Sep 15, 2019

@ndrwstn
All containers that use host networking (network_mode: host) can be reached via 127.0.0.1. It's unnecessary to know their internal IP.

I don't understand why you think

Using '127.0.0.1' won't work in all cases

I know '127.0.0.1' won't work for containers that do not use host networking. In that case, just leave VIRTUAL_IP unset so that nginx-proxy can detect the correct internal IP.

If a container uses host networking, I just need to set VIRTUAL_IP=127.0.0.1 to bypass nginx-proxy's auto internal IP detection.
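A quick way to sanity-check that claim with the Jellyfin example above (assuming it listens on its default port 8096): since the proxy and the service both sit in the host's network namespace, this should succeed from the host or from inside the proxy container:

# the host-networked service should answer on localhost
curl -I http://127.0.0.1:8096/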

YanzheL closed this as completed Sep 15, 2019
YanzheL reopened this Sep 15, 2019

ndrwstn commented Sep 15, 2019

That's the problem. Your PR offers a solution for a subset of the possible cases, whereas a solution for the superset (grab the right internal IP in every case) would solve both the host-networking problem and the others. That's why I don't think the PR is a great solution.

That said, it's not that important to me anymore as I've found another solution (Traefik) which seems to deal with these problems more effectively.
