Error: no servers are inside upstream in #438

Open
mrvini opened this Issue May 2, 2016 · 86 comments

mrvini commented May 2, 2016

I updated my proxy image today and tried to restart all my other containers behind the proxy; however, all of them failed. Am I doing something wrong? (I did follow the explanation in issue #64, but that didn't help.)

proxy

docker run -d --name nginx-proxy \
    -p 80:80 -p 443:443 \
    --restart=always \
    -v /opt/my-certs:/etc/nginx/certs \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

My dev container (Node.js) is built locally and exposes port 8181:

docker run -d --name www.dev1 \
    --restart=always \
    --link db --link redis \
    -e VIRTUAL_PORT=8181 \
    -e VIRTUAL_PROTO=https \
    -e VIRTUAL_HOST=dev1.mysite.com \
    -v /opt/my-volume/web/dev1/:/opt/my-volume/web/ \
    -v /opt/my-certs:/opt/my-certs:ro \
    -w /opt/my-volume/web/ localhost:5000/www \
    bash -c 'npm start server.js'

Right before I run the dev container, nginx -t reports a valid configuration:

root@fba41f832f35:/app# nginx -t  
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

After I start the dev container, I see the following:

root@fba41f832f35:/app# nginx -t        
2016/05/02 07:15:49 [emerg] 69#69: no servers are inside upstream in /etc/nginx/conf.d/default.conf:34
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/default.conf:34
nginx: configuration file /etc/nginx/nginx.conf test failed

When I check /etc/nginx/conf.d/default.conf, I see an empty upstream block:

upstream dev1.mysite.com {
}

Is there anything I am doing wrong? I've been using the same startup script for a good six months, and it worked right up until I pulled the new image. Did anything change? Please help.
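For comparison, when docker-gen can resolve the backend container's address, the generated block contains at least one server line. A minimal sketch of what a healthy block looks like (the IP below is purely illustrative, not taken from this setup):

```nginx
upstream dev1.mysite.com {
    # docker-gen normally emits one server line per reachable container;
    # the address below is a hypothetical example
    server 172.17.0.3:8181;
}
```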

dehy commented May 2, 2016

Same problem here.
docker 1.9.1cs2 on docker cloud

dehy commented May 2, 2016

I had to revert back to a72c7e6

wader commented May 2, 2016

@mrvini can you paste the output of docker network inspect $(docker network ls -q)?
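docker-gen builds each upstream from containers it can actually reach, so it helps to check which containers share a network with the proxy. A minimal sketch of reading docker network inspect output (the sample JSON below is abridged and illustrative, not taken from any one setup in this thread):

```python
import json

# Abridged, illustrative `docker network inspect` output; a real dump
# contains many more fields (Id, Scope, IPAM, Options, ...).
sample = json.loads("""
[
  {
    "Name": "bridge",
    "Containers": {
      "2802bd66": {"Name": "machine1", "IPv4Address": "172.17.0.6/16"},
      "bcb350d3": {"Name": "nginx", "IPv4Address": "172.17.0.2/16"}
    }
  }
]
""")

# Map each network to its attached containers; if the proxy and a backend
# are not listed under the same network, docker-gen sees no server for it
# and emits an empty upstream.
for net in sample:
    members = {c["Name"]: c["IPv4Address"]
               for c in net.get("Containers", {}).values()}
    print(net["Name"], "->", members)
```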

klaszlo commented May 2, 2016

Same error here.
I was using the 0.4.2 nginx-docker container version, and now I've updated to 0.7.0.
The docker logs show:

dockergen.1 | 2016/05/02 15:40:51 Generated '/etc/nginx/conf.d/default.conf' from 4 containers
dockergen.1 | 2016/05/02 15:40:51 Running 'nginx -s reload'
dockergen.1 | 2016/05/02 15:40:51 Error running notify command: nginx -s reload, exit status 1
dockergen.1 | 2016/05/02 15:40:51 Watching docker events
dockergen.1 | 2016/05/02 15:40:52 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'

Sorry, the 'docker network' command is not available here (Ubuntu 15.10).

klaszlo commented May 2, 2016

I have two docker inspect outputs; diffing them, the only suspicious differences are:

Working version (0.4.2, 3 months ago):

"Config": { ...
...
  "Volumes": {
        "/etc/nginx/certs": {},
        "/var/cache/nginx": {}
    },

Non-working version (current, 0.7.0):

"Config": { ...
...
  "Volumes": {
        "/etc/nginx/certs": {}
    },

Working version:

"Volumes": {
    "/etc/nginx/certs": "/var/lib/docker/vfs/dir/d5235bb01d9facc2c58441bed36f9736da1a4bf5e78f3d2d2ff71bef017c6e82",
    "/tmp/docker.sock": "/run/docker.sock",
    "/var/cache/nginx": "/var/lib/docker/vfs/dir/1789a01ddd62eed650a95c874d1d8e504f1455df08e267ebacf5eb36bb293d7b"
},
"VolumesRW": {
    "/etc/nginx/certs": true,
    "/tmp/docker.sock": true,
    "/var/cache/nginx": true
}

Non-working version (current):

"Volumes": {
    "/etc/nginx/certs": "/var/lib/docker/vfs/dir/07c10059eb20dc6249075c976571d075bc7ac123dd9dec07a8f8651e8c884b39",
    "/tmp/docker.sock": "/run/docker.sock"
},
"VolumesRW": {
    "/etc/nginx/certs": true,
    "/tmp/docker.sock": true
}
klaszlo commented May 2, 2016

I exported both images (docker save -o working.tar).
If needed, I can put them somewhere for further inspection.
The 0.4.2 version is 185 MB.
The 0.7.0 version is 252 MB.

Update: version 0.4.2 works like a charm. (I scp'd it to the new server from the old server.)

sudo docker run --restart=always \
-d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy:0.4.2

(The only other difference is the docker.sock:ro mount between the new README and the old README.)

wader commented May 2, 2016

@klaszlo hmm, not sure I follow. 0.4.2 and 0.7.0 are versions of what? The latest tag for nginx-proxy seems to be 0.3.0.

mrvini commented May 2, 2016

@wader, thanks for your comment; that made me look a little deeper, and it's probably the first thing anyone should check.

My Docker version was 1.7.1, which is old; after I upgraded to 1.11.1, everything works as it should.

As always, thanks for a good product and support. If @klaszlo needs no further help, please close this.

klaszlo commented May 2, 2016

@wader Sorry for the noise, I misread the version string when inspecting the image file
(sudo docker inspect IMAGEID).

I have no idea what the exact version of my older nginx-proxy image is; I only know that I first launched it locally (and therefore pulled it from Docker Hub) on "2016-01-26T22:39:17.462882618Z".

I'm on ubuntu 15.10, which ships with docker 1.6.2:
http://packages.ubuntu.com/wily/docker.io

Are you suggesting that only Ubuntu 16.04+ is supported?

wader commented May 2, 2016

@klaszlo sorry, I don't know, but reading the comments on #337 and what @mrvini says, it seems Docker 1.10+ might be needed.
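If it really is a daemon-version issue, a quick sanity check is comparing the reported Docker version against 1.10. A rough sketch (only major.minor are compared; the version strings are the ones mentioned in this thread):

```python
# Rough sketch: compare a Docker version string against a required minimum.
# Only the major.minor components are compared.
def parse_version(s):
    return tuple(int(p) for p in s.split(".")[:2])

def new_enough(version, minimum="1.10"):
    return parse_version(version) >= parse_version(minimum)

print(new_enough("1.11.1"))  # True  (mrvini's working version)
print(new_enough("1.7.1"))   # False (mrvini's failing version)
```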

sherter commented May 2, 2016

The image sha256:c378d9d861c5fa2addf293a20e47318fbea8a7d621afadaa0328c434202a7b3e is broken for me, too (Error running notify command: nginx -s reload, exit status 1). The one before that (sha256:d72335ddd6913d5914bebed12b5cf807194416be293f1b732d6ad668691e93b8) works fine. You can run images by digest like this:
jwilder/nginx-proxy@sha256:d72335ddd6913d5914bebed12b5cf807194416be293f1b732d6ad668691e93b8

$ docker --version
Docker version 1.10.3, build 8acee1b
benzht commented May 3, 2016

I am using the two-container solution with docker-gen and have the same problem on all my machines.

wader commented May 3, 2016

@benzht hi, what versions of docker?

benzht commented May 3, 2016

Sorry, forgot the details:

  • docker versions 1.10.3 and 1.11.0
  • nginx:latest, docker-gen:latest, 'nginx.tmpl':latest
  • no user-defined networks used (because they did not work so far)
  • containers started with: ...
    -e VIRTUAL_HOST=aaa.bbb.cc,xxx.bbb.ccc
    -e VIRTUAL_PORT=8080
    -e VIRTUAL_PROTO=http
    ...
wader commented May 3, 2016

Thanks, anything interesting in docker network inspect $(docker network ls -q)?

benzht commented May 3, 2016

machine1 and machine2 have the reverse-proxy environment variables set:

[
    {
        "Name": "bridge",
        "Id": "12562cb7079b3b4061e12545ac7f795a2f8954f7f40a16c1525d77be890de2cf",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "2802bd663583505b77370c9088403f2bfee45991a62d1258dfd835659ac5b857": {
                "Name": "machine1",
                "EndpointID": "1f726b852ff0d3e077877697f9162ec48557de77373804f052f80171dde12562",
                "MacAddress": "02:42:ac:11:00:06",
                "IPv4Address": "172.17.0.6/16",
                "IPv6Address": ""
            },
            "633c10d347a82f5d1f0f8af0ab15fa48913735b6f05f307c37c0a7a473214e1a": {
                "Name": "machine2",
                "EndpointID": "c8ae9dbc77862d79f4217755892f96dc294896f4663b0f00fa82a8367c7f9263",
                "MacAddress": "02:42:ac:11:00:05",
                "IPv4Address": "172.17.0.5/16",
                "IPv6Address": ""
            },
            "ab875c71a13a435c9e152c5464dcb567475057405dc9ab6e5c9941d57d854b56": {
                "Name": "pg",
                "EndpointID": "1303f351679a69ab05c9bc9947f28f165d01a1841545b1416a914b4f2e4266a8",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "bcb350d31152ed4cae3ae50226c38650f2b47d91f709664d0e05e36d7e8abe6c": {
                "Name": "nginx",
                "EndpointID": "f0ac2c422780598843209dffdb0b89d7c7ae7baa821d0547bba7b4acc5605773",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    },
    {
        "Name": "host",
        "Id": "2da3ced6d4504489e820f0fb5353cd01adfd7000804a687dba6c1424bd5c17c4",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    },
    {
        "Name": "none",
        "Id": "069bd7fedaf4ce732eb5e9ac645d995befd24aa4aa91828116c49555ad3ea9a5",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
wader commented May 3, 2016

Weird, I run a setup with nginx-proxy and some containers on the same bridge network, and it works. The only difference I can see is that you have "Internal": false.
I'm using:
Docker version 1.10.3, build 20f81dd
Latest nginx-proxy

No port expose changes? Try the exact same docker-gen version (0.7.0, I think). Does the single-container version work?

benzht commented May 3, 2016

I'm not running nginx-proxy itself (and right now cannot test whether it would work), but I am using a vanilla nginx with docker-gen and the template from nginx-proxy, as described in the documentation. Later this afternoon I will be able to test a vanilla nginx-proxy.

Bre77 commented May 3, 2016

I had this problem with nginx-proxy, so I tried setting up vanilla nginx and docker-gen and had the same problem again. After reverting to an older template (commit 97c6340), it started working again.

kalbasit commented May 3, 2016

It's also not working for me. I've uploaded all of my .service files and all the info you need in this gist.

rparree commented May 4, 2016

Just to confirm: 0.3.0 does not work for me either, while reverting to 0.2.0 works. Same problems (upstream servers not registered, error on reload).

  • Docker version 1.9.1, build ee06d03/1.9.1
  • Linux hprp 4.4.8-300.fc23.x86_64
  • Fedora 23 (Workstation Edition)

Docker Networking:

[
    {
        "Name": "host",
        "Id": "bb960310aaa58c288cfe385a11588507f595da69064d05392c5a572d6eac085b",
        "Scope": "local",
        "Driver": "host",
        "IPAM": {
            "Driver": "default",
            "Config": []
        },
        "Containers": {},
        "Options": {}
    },
    {
        "Name": "bridge",
        "Id": "de075602f22cfd005a9c336b12eb5bd1425d2cc47c20cf51d2e0e5242e6925ce",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    },
    {
        "Name": "none",
        "Id": "f93eea65f260c3877e4fc3f926bc9eaecab6862c71a37ca9060f495ada7ee29a",
        "Scope": "local",
        "Driver": "null",
        "IPAM": {
            "Driver": "default",
            "Config": []
        },
        "Containers": {},
        "Options": {}
    }
]

malfario commented May 4, 2016

Same issue here on CoreOS stable (899.17.0). I had to revert from 0.3 to 0.2 because of empty upstream entries:

upstream www.xxxx.net {
}
server {
    server_name www.xxxx.net;
    listen 80 ;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://www.xxxx.net;
    }
}

Docker version info:

Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   9894698
 Built:
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   9894698
 Built:
 OS/Arch:      linux/amd64

Docker networking info:

[
    {
        "Name": "bridge",
        "Id": "7d2d9fe9c3a113f5460e1a4f3cf55e228c18420eae3b06913d8698db3bbee30a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {
            "0608f72051f2625fff2051458aede50c2bb363004e36d504f40cea0282714176": {
                "EndpointID": "a1906d65db9d1028914188a48be0d4e32371cea7da814a2c1f363eab3d6b1a00",
                "MacAddress": "02:42:ac:11:00:0b",
                "IPv4Address": "172.17.0.11/16",
                "IPv6Address": ""
            },
            "13f0d19ef88321cdb447ae9efecc23232c28558dc2bfcceae05813fb7262d3e8": {
                "EndpointID": "b44764903a59eed598a59aebe9f13e73aa5e78569c7e563bdcfb22956a3fe934",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "16ee990232a08a359b004f96d92dd837bfd9eedc2408613a839b204015e6091b": {
                "EndpointID": "0d58e894448b9a27884d2d5b15012233aa8fc1956cf47e9c81cc6b37bcbd11b0",
                "MacAddress": "02:42:ac:11:00:0c",
                "IPv4Address": "172.17.0.12/16",
                "IPv6Address": ""
            },
            "32c16a8b0336d26a6cb14a3fb65dd545e08a9fc28cfd1fa61afe1a716a640b11": {
                "EndpointID": "818afd4a0c8d09bc7cc632173481d981d06c718b80c0753ccafb2361048c3791",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "494826356d693a2fa2b6fdaed6a6507b2271dfb721356c341910a8e63d676380": {
                "EndpointID": "24cb57366e2a86b79e00f59ba7ddcdcafa4fc70b21a057c5f999c9798d3a5227",
                "MacAddress": "02:42:ac:11:00:08",
                "IPv4Address": "172.17.0.8/16",
                "IPv6Address": ""
            },
            "6390314b4c9841bc25e0c3937b0d5795b59aa37652fc415813f17f512653c2d0": {
                "EndpointID": "6711a1c4755d91706aac172d3e73fb799bf05afa007ccb7e81c9228f986e36bb",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "6fbb225dc86637599946f41cbdad6130e0019e3dffe2a711586012b496098552": {
                "EndpointID": "b91532b65a946e5548ba9d4c94d32fc4539376e997cebd7b10826b97d210a50f",
                "MacAddress": "02:42:ac:11:00:07",
                "IPv4Address": "172.17.0.7/16",
                "IPv6Address": ""
            },
            "77e1ca97d4fb3b0e1c88a5adbc21e652848bf94e540a0f23157c382114a4246a": {
                "EndpointID": "178d5d9da05a37fa33371d8476981674b112abf939d172b357f6b045b1d1c4ca",
                "MacAddress": "02:42:ac:11:00:06",
                "IPv4Address": "172.17.0.6/16",
                "IPv6Address": ""
            },
            "7c6610b7057f6e0740e33409a3f0927b954b8cec29ac045565ffabf2d126a862": {
                "EndpointID": "e89ff27a9c17656f742a24fdeafc13ad4211cb3d999ba3c9d36c28264e855b0d",
                "MacAddress": "02:42:ac:11:00:09",
                "IPv4Address": "172.17.0.9/16",
                "IPv6Address": ""
            },
            "8a3f0be58a33613d5cd451830cb1d67909cea9d507a7026bbe8412558c78e10a": {
                "EndpointID": "22d51e699719c200ab5b6f218aa0244f6530ff8023c11c7ab2a4d621d42e8e71",
                "MacAddress": "02:42:ac:11:00:0a",
                "IPv4Address": "172.17.0.10/16",
                "IPv6Address": ""
            },
            "9704851a228de372e2b214c53160cd4d7d0227ec39fd8e24c0282388073dd769": {
                "EndpointID": "52f5039611d43a6a81a4498ae1bf989fe6fe56b9dcd7e1d4a96980818fa5850e",
                "MacAddress": "02:42:ac:11:00:0d",
                "IPv4Address": "172.17.0.13/16",
                "IPv6Address": ""
            },
            "b012433aa5872343955aba58fe7a401cbec9de5db39cb63c77a3fa80d03d786b": {
                "EndpointID": "a8bee2ee0f03ad11f1d3ceab066b2c8954cd7ef8f479d04a4f8886d5d1c00d9b",
                "MacAddress": "02:42:ac:11:00:0e",
                "IPv4Address": "172.17.0.14/16",
                "IPv6Address": ""
            },
            "c34a8c138f62ea0c673ef6978aa81b208002ca2ff435923673b097932a46375f": {
                "EndpointID": "072d897d91604623cce3ab445378b48001517fd52d4bc8ea94d1ac015b95fb1b",
                "MacAddress": "02:42:ac:11:00:05",
                "IPv4Address": "172.17.0.5/16",
                "IPv6Address": ""
            },
            "ed532e0529419539a441f7532c816a50a22c4b5498225e235492120a11898414": {
                "EndpointID": "fe35242022cdf93489804080984c9bba3bf46a488e5e0c213be8c6de5485ee13",
                "MacAddress": "02:42:ac:11:00:0f",
                "IPv4Address": "172.17.0.15/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    },
    {
        "Name": "none",
        "Id": "50b976a565ecd34e7df5ee4654a78df43624911f287fa76d2fc89d6b2963daa4",
        "Scope": "local",
        "Driver": "null",
        "IPAM": {
            "Driver": "default",
            "Config": []
        },
        "Containers": {},
        "Options": {}
    },
    {
        "Name": "host",
        "Id": "0436162c23c5f2ddc73fdac5d453982a7ece8bd0161cf97d3c2b40b8eaf53717",
        "Scope": "local",
        "Driver": "host",
        "IPAM": {
            "Driver": "default",
            "Config": []
        },
        "Containers": {},
        "Options": {}
    }
]

ginkel commented May 4, 2016

Just an (untested) hypothesis: Could it be that only containers attached to the default bridge are affected?

wader commented May 4, 2016

Tried to reproduce, but no luck. I did end up with a command to dump the template context that might be useful. Remember to clear out secrets if you're going to post the output!

docker exec <nginx-proxy-container-id> bash -c "docker-gen <(echo '{{json (dict \".\" $ \"Env\" .Env \"Docker\" .Docker)}}')"

Pipe through jq . etc. for nicer output.
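If jq is not available inside the container, the same pretty-printing can be done with Python's bundled json.tool module (a stdlib-only sketch, assuming python3 is on the PATH; the sample JSON below is made up, not real docker-gen output):

```shell
# Pretty-print a docker-gen context dump without jq, using Python's stdlib.
# The input here is a fabricated one-line sample standing in for real output.
printf '%s' '{"Docker":{"CurrentContainerID":"0cda9a6198b8"}}' | python3 -m json.tool
```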

ginkel commented May 4, 2016

I did some debugging. $CurrentContainer is undefined in nginx.tmpl.

wader commented May 4, 2016

@ginkel can you dump the context and also see how /proc/self/cgroup looks? That's what docker-gen uses for CurrentContainerID.
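As a rough illustration of that lookup (a sketch only; docker-gen's actual parsing lives in its Go source and handles more cgroup layouts), the container ID can be pulled out of a /proc/self/cgroup line of the /docker/<id> form like this:

```shell
# Extract the 64-hex-digit container ID from a cgroup line of the
# /docker/<id> form seen in this thread. The sample line is copied from
# ginkel's dump; this approximates, not reproduces, docker-gen's logic.
line='9:perf_event:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37'
id=$(printf '%s\n' "$line" | sed -n 's#.*/docker/\([0-9a-f]\{64\}\)$#\1#p')
printf '%s\n' "$id"
```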

wader commented May 4, 2016

I wonder if jwilder/docker-gen#186 could be the cause of some of these problems

ginkel commented May 4, 2016

$ cat /proc/self/cgroup                                                       
9:perf_event:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
8:blkio:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
7:net_cls,net_prio:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
6:freezer:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
5:devices:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
4:memory:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
3:cpu,cpuacct:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
2:cpuset:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
1:name=systemd:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37

I can dump the context, but it contains loads of secrets. What do you need to know?

wader commented May 4, 2016

@ginkel Oh, ah, yeah. I guess the useful bits would be .Docker.CurrentContainerID and the container IDs. Does one of them match up? If not, what does the container that should match up look like?

ginkel commented May 4, 2016

Docker context plus one networked container:

{
  "Env": {
    "_": "/usr/local/bin/docker-gen",
    "DOCKER_HOST": "unix:///tmp/docker.sock",
    "DOWNLOAD_URL": "https://github.com/jwilder/docker-gen/releases/download/0.7.0/docker-gen-linux-amd64-0.7.0.tar.gz",
    "HOME": "/root",
    "HOSTNAME": "0cda9a6198b8",
    "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "PWD": "/",
    "SHLVL": "1",
    "VERSION": "0.7.0"
  },
  "Docker": {
    "CurrentContainerID": "0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37",
    "Name": "pegasus",
    "NumContainers": 30,
    "NumImages": 42,
    "Version": "1.11.1",
    "ApiVersion": "1.23",
    "GoVersion": "go1.5.4",
    "OperatingSystem": "linux",
    "Architecture": "amd64"
  },
  ".": [
    {
      "State": {
        "Running": true
      },
      "Mounts": [
        {
          "RW": true,
          "Mode": "rw",
          "Driver": "local",
          "Destination": "/data",
          "Source": "/var/lib/docker/volumes/yapdnsui/_data",
          "Name": "yapdnsui"
        }
      ],
      "IP6Global": "",
      "IP6LinkLocal": "",
      "IP": "172.17.0.26",
      "Labels": {},
      "Node": {
        "Address": {
          "HostIP": "",
          "Proto": "",
          "HostPort": "",
          "Port": "",
          "IP6Global": "",
          "IP6LinkLocal": "",
          "IP": ""
        },
        "Name": "",
        "ID": ""
      },
      "Volumes": {},
      "ID": "7e8f0b7b9f1cf55d8a1cdc420e4cc06b25300e858730d2149986f28b51e1f234",
      "Addresses": [
        {
          "HostIP": "",
          "Proto": "tcp",
          "HostPort": "",
          "Port": "8080",
          "IP6Global": "",
          "IP6LinkLocal": "",
          "IP": "172.17.0.26"
        }
      ],
      "Networks": [
        {
          "IPPrefixLen": 16,
          "IP": "172.17.0.26",
          "Name": "bridge",
          "Gateway": "172.17.0.1",
          "EndpointID": "3006b999463e4cc185b71b67d3f28ad5d81de0c32f28e1165f279a9610db2fb2",
          "IPv6Gateway": "",
          "GlobalIPv6Address": "",
          "MacAddress": "02:42:ac:11:00:1a",
          "GlobalIPv6PrefixLen": 0
        }
      ],
      "Gateway": "172.17.0.1",
      "Name": "yapdnsui",
      "Hostname": "7e8f0b7b9f1c",
      "Image": {
        "Tag": "",
        "Repository": "yapdnsui",
        "Registry": "tgbyte"
      },
      "Env": {
        "VIRTUAL_PROTO": "http",
        "VIRTUAL_PORT": "8080",
        "VIRTUAL_HOST": "XXX",
        "CERT_NAME": "wildcard",
        "DEBIAN_FRONTEND": "noninteractive",
        "DEBUG": "yapdnsui",
        "DUMBINIT_VERSION": "1.0.1",
        "GOSU_VERSION": "1.7",
        "NODE_VERSION": "0.12.13",
        "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "PORT": "8080"
      }
    },

wader commented May 4, 2016

OK, so not the cgroup format issue in your case. I guess this gets a bit confusing because with docker run the CurrentContainerID is that of the one-off container. Maybe better to use docker exec:

docker exec <nginx-proxy-container-id> bash -c "docker-gen <(echo '{{json (dict \".\" $ \"Env\" .Env \"Docker\" .Docker)}}')"

Do you see the nginx-proxy container anywhere and does the ID look weird etc?

ginkel commented May 4, 2016

I used docker exec in the docker-gen container (I use separate docker-gen and nginx containers). The corresponding JSON fragment looks like this:

    {
      "State": {
        "Running": true
      },
      "Mounts": [
        {
          "RW": false,
          "Mode": "",
          "Driver": "",
          "Destination": "/etc/nginx/htpasswd",
          "Source": "/etc/nginx/htpasswd",
          "Name": ""
        },
        {
          "RW": true,
          "Mode": "",
          "Driver": "",
          "Destination": "/etc/nginx/vhost.d",
          "Source": "/etc/reverse-proxy/vhost.d",
          "Name": ""
        },
        {
          "RW": false,
          "Mode": "ro",
          "Driver": "",
          "Destination": "/tmp/docker.sock",
          "Source": "/var/run/docker.sock",
          "Name": ""
        },
        {
          "RW": true,
          "Mode": "",
          "Driver": "local",
          "Destination": "/usr/share/nginx/html",
          "Source": "/var/lib/docker/volumes/9f001507b781bb4120f9b830e3a2e34f47c242d7969f1457cc3a61873492ec48/_data",
          "Name": "9f001507b781bb4120f9b830e3a2e34f47c242d7969f1457cc3a61873492ec48"
        },
        {
          "RW": true,
          "Mode": "rw",
          "Driver": "",
          "Destination": "/etc/docker-gen/templates",
          "Source": "/etc/reverse-proxy/templates",
          "Name": ""
        },
        {
          "RW": false,
          "Mode": "",
          "Driver": "",
          "Destination": "/etc/nginx/certs",
          "Source": "/etc/nginx/certs",
          "Name": ""
        },
        {
          "RW": true,
          "Mode": "",
          "Driver": "",
          "Destination": "/etc/nginx/conf.d",
          "Source": "/etc/reverse-proxy/conf.d",
          "Name": ""
        }
      ],
      "IP6Global": "",
      "IP6LinkLocal": "",
      "IP": "172.17.0.18",
      "Labels": {},
      "Node": {
        "Address": {
          "HostIP": "",
          "Proto": "",
          "HostPort": "",
          "Port": "",
          "IP6Global": "",
          "IP6LinkLocal": "",
          "IP": ""
        },
        "Name": "",
        "ID": ""
      },
      "Volumes": {},
      "ID": "0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37",
      "Addresses": [],
      "Networks": [
        {
          "IPPrefixLen": 16,
          "IP": "172.17.0.18",
          "Name": "bridge",
          "Gateway": "172.17.0.1",
          "EndpointID": "0f61364cc420c16f541740aa12facfd840ead404949ee950409919a30431b4f1",
          "IPv6Gateway": "",
          "GlobalIPv6Address": "",
          "MacAddress": "02:42:ac:11:00:12",
          "GlobalIPv6PrefixLen": 0
        }
      ],
      "Gateway": "172.17.0.1",
      "Name": "reverse-proxy-docker-gen",
      "Hostname": "0cda9a6198b8",
      "Image": {
        "Tag": "",
        "Repository": "docker-gen",
        "Registry": "jwilder"
      },
      "Env": {
        "VERSION": "0.7.0",
        "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "DOWNLOAD_URL": "https://github.com/jwilder/docker-gen/releases/download/0.7.0/docker-gen-linux-amd64-0.7.0.tar.gz",
        "DOCKER_HOST": "unix:///tmp/docker.sock"
      }
    },

The ContainerID (0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37) matches up with the other sources.

wader commented May 4, 2016

Strange... does docker exec ... bash -c "docker-gen <(echo '{{json (where $ \"ID\" .Docker.CurrentContainerID | first)}}')" return anything?

ginkel commented May 4, 2016

Yep:

root@0cda9a6198b8:/# docker-gen <(echo '{{json (where $ "ID" .Docker.CurrentContainerID | first)}}')       
{"ID":"0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37","Addresses":[],"Networks":[{"IP":"172.17.0.18","Name":"bridge","Gateway":"172.17.0.1","EndpointID":"0f61364cc420c16f541740aa12facfd840ead404949ee950409919a30431b4f1","IPv6Gateway":"","GlobalIPv6Address":"","MacAddress":"02:42:ac:11:00:12","GlobalIPv6PrefixLen":0,"IPPrefixLen":16}],"Gateway":"172.17.0.1","Name":"reverse-proxy-docker-gen","Hostname":"0cda9a6198b8","Image":{"Registry":"jwilder","Repository":"docker-gen","Tag":""},"Env":{"DOCKER_HOST":"unix:///tmp/docker.sock","DOWNLOAD_URL":"https://github.com/jwilder/docker-gen/releases/download/0.7.0/docker-gen-linux-amd64-0.7.0.tar.gz","PATH":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","VERSION":"0.7.0"},"Volumes":{},"Node":{"ID":"","Name":"","Address":{"IP":"","IP6LinkLocal":"","IP6Global":"","Port":"","HostPort":"","Proto":"","HostIP":""}},"Labels":{},"IP":"172.17.0.18","IP6LinkLocal":"","IP6Global":"","Mounts":[{"Name":"","Source":"/etc/reverse-proxy/templates","Destination":"/etc/docker-gen/templates","Driver":"","Mode":"rw","RW":true},{"Name":"","Source":"/etc/nginx/certs","Destination":"/etc/nginx/certs","Driver":"","Mode":"","RW":false},{"Name":"","Source":"/etc/reverse-proxy/conf.d","Destination":"/etc/nginx/conf.d","Driver":"","Mode":"","RW":true},{"Name":"","Source":"/etc/nginx/htpasswd","Destination":"/etc/nginx/htpasswd","Driver":"","Mode":"","RW":false},{"Name":"","Source":"/etc/reverse-proxy/vhost.d","Destination":"/etc/nginx/vhost.d","Driver":"","Mode":"","RW":true},{"Name":"","Source":"/var/run/docker.sock","Destination":"/tmp/docker.sock","Driver":"","Mode":"ro","RW":false},{"Name":"9f001507b781bb4120f9b830e3a2e34f47c242d7969f1457cc3a61873492ec48","Source":"/var/lib/docker/volumes/9f001507b781bb4120f9b830e3a2e34f47c242d7969f1457cc3a61873492ec48/_data","Destination":"/usr/share/nginx/html","Driver":"local","Mode":"","RW":true}],"State":{"Running":true}}
ginkel commented May 4, 2016

For the sake of completeness: I patched nginx.tmpl by changing

{{ range $knownNetwork := $CurrentContainer.Networks }}

into

# XXX {{ $CurrentContainer }} XXX
{{ range $knownNetwork := $CurrentContainer.Networks }}

which yields the following output:

# XXX <no value> XXX

I hope that debugging approach makes any sense...
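Given that <no value>, one defensive workaround (an untested sketch against the nginx.tmpl of that era, not the upstream fix) would be to guard the dereference so an unresolved current container degrades gracefully instead of producing an empty upstream. The lookup expression itself is the one shown to work earlier in this thread:

```
{{ $CurrentContainer := where $ "ID" .Docker.CurrentContainerID | first }}
{{ if $CurrentContainer }}
{{ range $knownNetwork := $CurrentContainer.Networks }}
...
{{ end }}
{{ end }}
```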

wader commented May 4, 2016

Can you try these:

{{.Docker.CurrentContainerID}}
{{where $ "ID" "0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37" | first}}
ginkel commented May 4, 2016

2016/05/04 15:39:39 template error: template: nginx.tmpl:87:17: executing "nginx.tmpl" at <.Docker.CurrentConta...>: Docker is not a field of struct type interface {}

wader commented May 4, 2016

Hmm maybe easier to exec into the container and do something like:

wader@wader:~$ docker exec -ti 55d2769d22db bash
root@55d2769d22db:/app# docker-gen <(echo '{{.Docker}}')
{wader 11 244 1.10.3 1.22 go1.5.3 linux amd64 55d2769d22dbc164c98b7460fcd0d369074fe07f8223cc1945216c34eab5ac4d}
root@55d2769d22db:/app# docker-gen <(echo '{{.Docker.CurrentContainerID}}')
55d2769d22dbc164c98b7460fcd0d369074fe07f8223cc1945216c34eab5ac4d
root@55d2769d22db:/app# docker-gen <(echo '{{where $ "ID" "55d2769d22dbc164c98b7460fcd0d369074fe07f8223cc1945216c34eab5ac4d" | first}}')
{55d2769d22dbc164c98b7460fcd0d369074fe07f8223cc1945216c34eab5ac4d ... <cut> ...}
root@55d2769d22db:/app#
@ginkel commented May 4, 2016

root@0cda9a6198b8:/# docker-gen <(echo '{{.Docker}}')
{pegasus 30 42 1.11.1 1.23 go1.5.4 linux amd64 0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37}
root@0cda9a6198b8:/# docker-gen <(echo '{{.Docker.CurrentContainerID}}')
0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
root@0cda9a6198b8:/# docker-gen <(echo '{{where $ "ID" "0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37" | first}}')
{0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37 [] [{172.17.0.18 bridge 172.17.0.1 b1ca6148b85309715d1f885e8b4071f9e60414c8c9589af2f9780b0b1d0d326a   02:42:ac:11:00:12 0 16}] 172.17.0.1 reverse-proxy-docker-gen 0cda9a6198b8 {jwilder docker-gen } map[VERSION:0.7.0 DOWNLOAD_URL:https://github.com/jwilder/docker-gen/releases/download/0.7.0/docker-gen-linux-amd64-0.7.0.tar.gz DOCKER_HOST:unix:///tmp/docker.sock PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] map[] {  {      }} map[] 172.17.0.18   [{ /etc/nginx/htpasswd /etc/nginx/htpasswd   false} { /etc/reverse-proxy/vhost.d /etc/nginx/vhost.d   true} { /var/run/docker.sock /tmp/docker.sock  ro false} {9f001507b781bb4120f9b830e3a2e34f47c242d7969f1457cc3a61873492ec48 /var/lib/docker/volumes/9f001507b781bb4120f9b830e3a2e34f47c242d7969f1457cc3a61873492ec48/_data /usr/share/nginx/html local  true} { /etc/reverse-proxy/templates /etc/docker-gen/templates  rw true} { /etc/nginx/certs /etc/nginx/certs   false} { /etc/reverse-proxy/conf.d /etc/nginx/conf.d   true}] {true}}

Looks good, doesn't it?

Edit: Interesting: In the .tmpl file

{{ where $ "ID" .Docker.CurrentContainerID }}

evaluates to an empty array whereas

docker-gen <(echo '{{where $ "ID" .Docker.CurrentContainerID | first}}')

comes up with a sensible result.

Edit2: If I include

{{ json $ }}

in nginx.tmpl it comes up with a list of containers that does not include the current docker-gen container, most likely because docker-gen has been started with -only-exposed and the docker-gen container does not expose any ports.

@wader commented May 4, 2016

@AHelper does it work if you connect the docker-gen container to all networks where you have web containers? See "Multiple Networks" in the README.

@AHelper commented May 4, 2016

That wouldn't achieve separation from having different networks. If I do a docker network connect nginx-gen othernetwork, it will populate upstream, since it assumes nginx is on the same networks as docker-gen and therefore thinks nginx has access to the other container. The intended result is that containers in a different network are skipped altogether if docker-gen can't access them.

The outputs for the upstream {{ $host }} { ... } and server {...} blocks should be inside the check that verifies docker-gen can connect to the host (that check seems to be fine). That, I think, should fix this.

What should be done about -only-exposed, however? Add an exception to exclude the current container from that logic, allowing $CurrentContainer to be populated?

@wader commented May 5, 2016

@AHelper Ok, I haven't used networks that much yet. But if you have two containers A and B on two different networks and there is another container that is a member of both networks, then A and B can't connect directly to each other, right?

Would it work better to have a "nginx-proxy-network" that nginx and all other containers that expose public web ports connect to? (And, if using separate containers, docker-gen also joins the same networks as nginx to make the template context happy.)

By connect check, do you mean the loops in nginx.tmpl that look for containers that share any network with the "current" container?

@AHelper commented May 8, 2016

@wader Correct, A and B can't see or access each other, but an nginx-proxy C running in both networks that A and B are in can see both.

Putting nginx-proxy in a specific network setup should be a user decision, not a requirement of nginx-proxy IMO, unless I misinterpreted what you said.

Yes, the logic for checking for matching networks.

I am testing out a fix for nginx-proxy. I replaced the loop with an array of matching networks, see https://github.com/AHelper/nginx-proxy/blob/34ee8d77c6f3f20e004647b3421d571eddbd5f2e/nginx.tmpl#L81. I can use this to verify that the container is reachable from nginx-proxy before generating upstream {...} and server {...}. I also keep upstream populated with all possible ways to reach the container, although I don't think this is really needed, since the config should be regenerated if the network config changes.

I'll keep tossing around ideas on that. Also, is there a minimum supported Docker version for this? I am wondering what happens when this runs on a pre-network Docker version.

@wader commented May 8, 2016

@AHelper doh, now I see what you mean, thanks for explaining.

@AHelper commented May 8, 2016

@wader, no problem.

I did a bit more experimenting and that seems to work: http://jenkins.ahelper.me/view/nginx-proxy/job/nginx-proxy-test/32/

I'll grab an older Docker version and look at getting it to work there.

@wader commented May 8, 2016

@AHelper according to the changelog, docker network went from experimental to stable in the 1.9 release... looking at the PR moby/moby#16645 it seems it was introduced in 1.7... but maybe things worked differently early on.

@AHelper commented May 9, 2016

Thanks, I remembered the big thing for 1.9 was "stable" networking (except events 😕 ); I didn't use the experimental networking before. Anyway, getting closer to a patch for this. I updated the network checking with backwards compatibility (pre-1.9 Docker). I think it will still have issues with -only-exposed, but I'm testing on 1.9.1 and 1.7.1 and they seem ok now.

Problem is, if you run -only-exposed, there can't be any checks for reachability without either adding a way to directly get the current container (not just its ID) or adding some method to ping (or equivalent) to verify a route exists to the container. With that, I can add all networks that the container is in. If it isn't reachable, nginx should 502 (or similar), but it won't generate a bad config.

Next issue: if nginx-proxy runs with --net=host, things break down again. Checks must also be done to cover accessing by IP:port. The upstream template seems to check this, but determining reachability doesn't take it into account.

Giymo11 added a commit to Giymo11/nginx-proxy that referenced this issue May 16, 2016

Updates Readme for using Separate Containers
- Makes the first command more readable
- Gives the docker-gen container a name
- Adds the /etc/nginx/certs volume needed by the latest nginx.tmpl
- Removes the -only-exposed flag as discussed in issue #438

Giymo11 added a commit to Giymo11/nginx-proxy that referenced this issue May 16, 2016

Updates Readme on using Separate Containers
- makes first command more readable
- adds name to docker-gen container
- adds volume for /etc/nginx/certs, which is needed by the latest .tmpl
- removes -only-exposed flag, it causes problems as discussed in issue #438
@AHelper commented Jun 21, 2016

Bumping this. I'm working on quite a few changes (1, 2, 3) in docker-gen that should clear up a few issues here. There are breaking changes in it in order to support Docker APIs before and after the networking changes (removing deprecated NetworkSettings config items, replacing them with RuntimeContainer.Networks, and emulating the new behavior on pre-v1.21 APIs to keep configs consistent across Docker versions). Also addressing CurrentContainerID pointing to docker-gen in a separate-container setup, meaning that relying on -only-exposed and being able to get the correct RuntimeContainer struct for, say, nginx will be possible.

@schmunk42 (Contributor) commented Sep 11, 2016

I added some info on why empty upstreams can occur: #565 (comment)

schmunk42 added a commit to schmunk42/nginx-proxy that referenced this issue Sep 27, 2016

schmunk42 added a commit to schmunk42/nginx-proxy that referenced this issue Feb 19, 2017

colinodell added a commit to unleashedtech/nginx-proxy that referenced this issue Mar 14, 2017

@vladkras commented May 16, 2017

TL;DR
I had the same problem, no servers are inside upstream in /etc/nginx/conf.d/default.conf.
Fixed by restarting the nginx and php containers and then running nginx -s reload inside the container (not sure it didn't reload itself), so now it's not empty again:

upstream example.com {
                                ## Can be connect with "your_network" network
                        # your_nginx_1
                        server 172.20.0.2:80;
}
@Kugelschieber commented Jun 4, 2017

I can confirm that upstream is empty. As a workaround, I mounted conf.d to a volume and edited default.conf manually.

@kalbasit commented Jun 4, 2017

@DeKugelschieber use https://raw.githubusercontent.com/jwilder/nginx-proxy/a72c7e6e20df3738ca365bf6c14598f6a8017500/nginx.tmpl instead of the one on master and you'll be fine. Below is my nginx-gen.service if it helps

[Unit]
Description=Automatically generate nginx configuration for serving docker containers
Requires=docker.service nginx.service
After=docker.service nginx.service

[Service]
ExecStartPre=/bin/sh -c "rm -f /tmp/nginx.tmpl && curl -Lo /tmp/nginx.tmpl https://raw.githubusercontent.com/jwilder/nginx-proxy/a72c7e6e20df3738ca365bf6c14598f6a8017500/nginx.tmpl"
ExecStartPre=/bin/sh -c "docker inspect nginx-gen >/dev/null 2>&1 && docker rm -f nginx-gen || true"
ExecStartPre=/usr/bin/docker create --name nginx-gen --volumes-from nginx -v /tmp/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/docker-gen -notify-sighup nginx -watch -only-exposed -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
ExecStart=/usr/bin/docker start -a nginx-gen
ExecStop=-/usr/bin/docker stop nginx-gen
ExecStopPost=/usr/bin/docker rm -f nginx-gen
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
@Johannestegner commented Jun 30, 2017

I had this issue and tried all the tips and tricks everywhere. What fixed it for me was adding listen 443; in my nginx server block; without it, my config file didn't get any IP registered for the container. Not sure if this helps, but I thought I'd post a comment about it.
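For reference, the kind of server block that change produces in the backend's own nginx config might look like the following sketch (the host name is a placeholder, not from this thread):

```nginx
server {
    listen 80;
    listen 443;               # without this, the container's IP was not registered
    server_name dev1.example.com;
    # ... rest of the backend's own configuration ...
}
```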

@Pimmetje commented Jul 22, 2017

Just to put my 2 cents in the bucket. I modified the template like this:

server {{ $container.IP }}:{{ $container.Env.VIRTUAL_PORT }};

It currently requires me to also set VIRTUAL_PORT, but it does work. The full block where I added the line is shown below. I did not try to make it any nicer, but if someone knows a permanent fix I am all ears.

For now this works as a workaround.

upstream {{ $upstream_name }} {

{{ range $container := $containers }}
        {{ $addrLen := len $container.Addresses }}

        server {{ $container.IP }}:{{ $container.Env.VIRTUAL_PORT }};

        {{ range $knownNetwork := $CurrentContainer.Networks }}
                {{ range $containerNetwork := $container.Networks }}
                        {{ if or (eq $knownNetwork.Name $containerNetwork.Name) (eq $knownNetwork.Name "host") }}
                                ## Can be connect with "{{ $containerNetwork.Name }}" network

                                {{/* If only 1 port exposed, use that */}}
                                {{ if eq $addrLen 1 }}
                                        {{ $address := index $container.Addresses 0 }}
                                        {{ template "upstream" (dict "Container" $container "Address" $address "Network" $containerNetwork) }}
                                {{/* If more than one port exposed, use the one matching VIRTUAL_PORT env var, falling back to standard web port 80 */}}
                                {{ else }}
                                        {{ $port := coalesce $container.Env.VIRTUAL_PORT "80" }}
                                        {{ $address := where $container.Addresses "Port" $port | first }}
                                        {{ template "upstream" (dict "Container" $container "Address" $address "Network" $containerNetwork) }}
                                {{ end }}
                        {{ end }}
                {{ end }}
        {{ end }}
{{ end }}
}
@tht commented Aug 9, 2017

I've just hit the same issue when trying to add a backend which only publishes port 443. It ends up with an empty upstream block.

I've used the following workaround:

  • Create a new network (type: bridge)
  • Add the reverse proxy and the backend to this new network
  • Restart the backend (this generates a new configuration file for nginx)

As long as the reverse proxy and the backends share the same network, it seems to work perfectly fine.
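The three steps above can be sketched as commands; names like proxy-net and my-backend are placeholders for your own network and container names:

```shell
# 1. Create a new bridge network.
docker network create --driver bridge proxy-net

# 2. Connect both the reverse proxy and the backend to it.
docker network connect proxy-net nginx-proxy
docker network connect proxy-net my-backend

# 3. Restart the backend so docker-gen regenerates the nginx config.
docker restart my-backend
```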

@revolunet commented Nov 28, 2017

Same as #479?

It looks like as soon as a container with VIRTUAL_HOST isn't on the same network, it breaks the nginx config and container?

@vladkras commented Nov 28, 2017

@revolunet yes, you are right; in all my later cases the proxy container and the nginx container were in different networks. Adding one to the other
docker network connect container_1_network container_2
with a subsequent restart of both of them helps. But I'm still not sure whether I have to add nginx to the proxy network or vice versa (as the docs suggest). Both solutions work and fail sometimes.

@schmunk42 (Contributor) commented Nov 29, 2017

Both solutions work and fail sometimes.

For those running this in a swarm, make sure to check whether you are still receiving docker events. If you are not seeing any events, nginx will not restart when containers are created or removed.
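One way to check this, as an illustrative command (not from the thread): watch the event stream on the node where docker-gen runs. If nothing appears while containers start and stop, docker-gen is not seeing events either, and the generated config will go stale.

```shell
# Should print a line for each container start/stop on this Docker host.
docker events --filter 'type=container' \
    --filter 'event=start' --filter 'event=die'
```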

Laski added a commit to Laski/nginx-proxy that referenced this issue Dec 18, 2017

@william-oicr commented Dec 21, 2017

I had this error as well. This was my docker-compose.yml:

version: '3'
services:
  proxy:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
     - "80:80"
     - "443:443"
    volumes:
     - /var/run/docker.sock:/tmp/docker.sock
     - ./certs:/etc/nginx/certs:ro
    image: proxy:docker
    container_name: proxy
networks:
  default:
    external:
      name: nginx-proxy

I fixed it by adding - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl to volumes.

@mnabialek commented Dec 29, 2017

I have the same problem on macOS; I've never seen it on Windows 10.

@closedLoop commented Dec 29, 2017

I had the same issue as above. The docker-compose files in https://blog.ssdnodes.com/blog/tutorial-using-docker-and-nginx-to-host-multiple-websites/ fixed it for me; it turned out I had been defining environment variables in the wrong format.

@pbreah commented Jan 5, 2018

Experiencing this same issue on AWS ECS. Has anyone fixed this on ECS?

ecs-cli compose service up - the client that brings up the services doesn't support "networks" and skips them in the docker-compose.yml file. Any solutions for ECS?

By default it uses a bridge network, but it still gets the empty upstream.

@shikasta-net commented Jan 14, 2018

I have been experiencing the "empty upstream" issue for a long time and spent the last week doing some extensive debugging. In my case the entire problem stems from {{ $CurrentContainer := where $ "ID" .Docker.CurrentContainerID | first }} being empty, exactly as was discussed about half way up this issue. What's different is that mine sometimes works.
My containers are all started via systemd-docker. Most of the time at startup, nginx-proxy has no concept of its own container and the upstream blocks are empty. Occasionally the stars align, it starts knowing about its container, and everything works. I thought the issue was a service dependency on docker, the network, or something else that nginx-proxy needed running before the service started, but I have found that if I systemctl stop docker.proxy.service and systemctl start docker.www.service, nginx-proxy has maybe a 20% chance of not knowing about its container. Hopefully someone can direct me to a way to further diagnose what occasionally prevents the container from detecting itself during creation and thereby help fix this ongoing issue.
Below are the relevant systemd.units. I'm running docker 1.13.1, systemd 229, nginx-proxy:latest (b0bb7ac158f6), letsencrypt-nginx-proxy-companion:latest (7d559ca951b3).

docker.proxy.service

[Unit]
Description=Proxying Container
After=zfs.target docker.service network-online.target
Before=docker.letsencrypt.service
Requires=zfs.target docker.service network-online.target docker.letsencrypt.service

[Service]
TimeoutStartSec=0
Restart=always
RestartSec=10s
Type=notify
NotifyAccess=all
ExecStart=/usr/bin/systemd-docker run --rm --name %n \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v /log/proxy:/var/log/nginx \
  -v /proxy/conf/proxy.conf:/etc/nginx/conf.d/proxy.conf:ro \
  -v /proxy/certs:/etc/nginx/certs:ro \
  -v /proxy/vhost.d:/etc/nginx/vhost.d \
  -v /proxy/html:/usr/share/nginx/html \
  -p 443:443 \
  -p 80:80 \
  jwilder/nginx-proxy

[Install]
WantedBy=multi-user.target

docker.letsencrypt.service

[Unit]
Description=Automatic SSL certification Container
After=zfs.target docker.proxy.service
Requires=zfs.target docker.proxy.service

[Service]
TimeoutStartSec=0
Restart=always
RestartSec=10s
Type=notify
NotifyAccess=all
ExecStart=/usr/bin/systemd-docker run --rm --name %n \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --volumes-from docker.proxy.service \
  -v /proxy/certs:/etc/nginx/certs:rw \
  jrcs/letsencrypt-nginx-proxy-companion

[Install]
WantedBy=multi-user.target

docker.www.service

[Unit]
Description=Place holder page Container
After=zfs.target docker.proxy.service docker.letsencrypt.service
Requires=zfs.target docker.proxy.service

[Service]
TimeoutStartSec=0
Restart=always
RestartSec=10s
Type=notify
NotifyAccess=all
ExecStart=/usr/bin/systemd-docker run --rm --name %n \
  -v /log/website:/var/log/nginx \
  -v /website/content:/usr/share/nginx/html:ro \
  -e "VIRTUAL_HOST=www.example.com" \
  -e "LETSENCRYPT_HOST=www.example.com" \
  -e "LETSENCRYPT_EMAIL=me@example.com" \
  nginx

[Install]
WantedBy=multi-user.target

(NB host names, email and paths have been obfuscated in these samples)
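The empty {{ $CurrentContainer }} can be probed from outside the container. A rough diagnostic, using the unit/container name docker.proxy.service from the units above (docker-gen has historically inferred its own container ID from the cgroup file, so this is an assumption about its detection mechanism, and on cgroup v2 hosts the file may not contain the ID at all):

```shell
# The ID docker-gen would infer for itself, read from the cgroup file
# inside the proxy container (64-hex container ID, cgroup v1 layout):
docker exec docker.proxy.service cat /proc/self/cgroup \
  | grep -o -E '[0-9a-f]{64}' | head -n 1

# The real container ID as the Docker daemon reports it:
docker inspect --format '{{ .Id }}' docker.proxy.service

# If the first command prints nothing, or the two IDs differ, docker-gen
# cannot match itself to a container and every upstream it renders is empty.
```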



Calder-Ty Feb 16, 2018

I know this is an old issue, but for me to fix this all I had to do was ensure that my application and Nginx were on the same network. (I use two different compose files, one for Nginx/Docker gen/letsencrypt and one for my web-app). I know that is a fairly dumb thing to forget, but I wanted to put it out there for anyone else who might be reading through this.
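One way to wire up that two-compose-file setup is a shared external network, created once with docker network create proxy-tier. A sketch with illustrative names (proxy-tier, web, www.example.com are placeholders):

```yaml
# docker-compose.proxy.yml (nginx-proxy stack)
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-tier

networks:
  proxy-tier:
    external: true

# docker-compose.app.yml (separate file; joins the same external network)
#
# services:
#   web:
#     image: nginx
#     environment:
#       - VIRTUAL_HOST=www.example.com
#     networks:
#       - proxy-tier
# networks:
#   proxy-tier:
#     external: true
```

With both stacks attached to proxy-tier, nginx-proxy can resolve the app container and the upstream block is no longer empty.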



ryanalexanderson Feb 28, 2018

Also in the realm of silly mistakes: I had boilerplate environment variables, reused in unrelated docker-compose files on the same machine, that unnecessarily defined VIRTUAL_HOST on a different network. Those interfered with the real VIRTUAL_HOST on the correct network/docker-compose. The "docker network inspect $(docker network ls -q)" command tipped me off.
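That check can be extended to list every running container that sets VIRTUAL_HOST, together with the networks it sits on. A rough loop (requires a running Docker daemon; not from the thread, just a convenience):

```shell
# For each running container, print its name and networks, then any
# VIRTUAL_HOST it defines, to spot duplicates on the wrong network.
for c in $(docker ps -q); do
  docker inspect --format \
    '{{ .Name }}: networks={{ range $net, $v := .NetworkSettings.Networks }}{{ $net }} {{ end }}' "$c"
  docker inspect --format \
    '{{ range .Config.Env }}{{ println . }}{{ end }}' "$c" | grep '^VIRTUAL_HOST' || true
done
```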



jrd Jul 8, 2018

Thank you, ryanalexanderson! That was my problem: I forgot to put one container on the right network.

