HostPort seemingly not working #23920

Closed
microadam opened this Issue Apr 6, 2016 · 76 comments

microadam commented Apr 6, 2016

I am not sure whether what I am doing is supposed to work, but I have created the following pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-host
spec:
  containers:
  - image: caseydavenport/nginx
    imagePullPolicy: IfNotPresent
    name: nginx-host
    ports:
    - containerPort: 80
      hostPort: 80
  restartPolicy: Always

This gives me a pod with the following:

        "hostIP": "178.x.x.x",
        "podIP": "10.x.x.x",

From the host (178.x.x.x), running curl http://10.x.x.x gets me the response from nginx that I expect.
From the host (178.x.x.x), running curl http://178.x.x.x gets me "port 80: Connection refused".

Should this work? Or have I missed something?

Versions:

Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.1", GitCommit:"50809107cd47a1f62da362bccefdd9e6f7076145", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.1", GitCommit:"50809107cd47a1f62da362bccefdd9e6f7076145", GitTreeState:"clean"}

Networking: Calico

Host OS: Ubuntu 16.04

Docker: 1.10.3

Thanks a lot

microadam commented Apr 6, 2016

@ZHB I got an email notification of you asking for the output of ip route, but your message seems to have gone now. The output is this:

default via 178.x.x.x dev eth0 onlink
10.233.64.0/26 via 192.168.10.3 dev tunl0  proto bird onlink
blackhole 10.233.64.64/26  proto bird
10.233.64.65 dev calief9a4578fa8  scope link
10.233.64.71 dev cali70f7dfd2fb5  scope link
10.233.64.81 dev cali4660569efc0  scope link
10.233.64.128/26 via 192.168.10.1 dev tunl0  proto bird onlink
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1 linkdown
178.x.x.x/xx dev eth0  proto kernel  scope link  src 178.x.x.x
192.168.10.0/24 dev k8s0  proto kernel  scope link  src 192.168.10.2
192.168.128.0/17 dev eth0  proto kernel  scope link  src 192.168.202.48
microadam commented Apr 6, 2016

Just to say, I can get this to work if I set hostNetwork: true on the pod, but that then exposes all of the ports in the pod to the host network, whereas I only want one specific port (80) to be exposed.
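
For reference, a minimal sketch of that workaround: the same pod spec with hostNetwork enabled. With hostNetwork: true the container binds directly on the host, so hostPort is no longer needed (and, as noted above, every listening port in the pod shows up on the host, which is the drawback):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-host
spec:
  hostNetwork: true
  containers:
  - image: caseydavenport/nginx
    imagePullPolicy: IfNotPresent
    name: nginx-host
    ports:
    - containerPort: 80
  restartPolicy: Always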

microadam commented Apr 14, 2016

Is anyone able to confirm whether I am just doing something wrong here, or whether it should work as I have described?

Thanks

Member

thockin commented Apr 14, 2016

It should work, and it does work on my local cluster.


microadam commented Apr 15, 2016

@thockin Thanks for confirming. Could I ask what setup your test cluster uses, please (networking type, cloud provider, host OS, etc.)?

Member

thockin commented Apr 18, 2016

GCE, underlay networking, Debian

We pass HostPort through to Docker, so maybe start with docker inspect?
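
A quick way to run that check, assuming Docker actually received the mapping (the container ID is a placeholder):

docker inspect --format '{{json .HostConfig.PortBindings}}' <container-id>

For a working hostPort this should print something like {"80/tcp":[{"HostIp":"","HostPort":"80"}]} rather than null.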


microadam commented Apr 19, 2016

@thockin sorry, I am not familiar with underlay networking (or GCE for that matter). Is it a GCE-specific thing? I can't see anything about underlay networking in the k8s documentation. I understand the various overlay networking options (flannel, weave, etc.); I assume this is different from those?

With regard to docker inspect, the output of the networking section is as follows:

"NetworkSettings": {
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": null,
            "SandboxKey": "",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": null
        }

Would I be right in assuming that I should be seeing port 80 in the "Ports" property that is currently null?

Thanks for your assistance!

Member

thockin commented Apr 19, 2016

Sorry, docker would call it "bridge" mode.

You need to look at docker inspect on the "pause" container for your pod.

You can also run netstat -nap and look for your port (on the host).
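
A sketch of those two checks (the names and grep patterns are illustrative):

docker ps --format '{{.ID}} {{.Names}}' | grep -i pause   # find the pod's "pause" (sandbox) container
docker inspect --format '{{json .HostConfig.PortBindings}}' <pause-container-id>
sudo netstat -nap | grep ':80 '                           # anything bound to host port 80?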


microadam commented Apr 20, 2016

@thockin Ah yes, of course; I forgot about the pause container. Looking at this, there are a couple of interesting things; I'm not sure which is most relevant:

"HostConfig": {
  "Binds": null,
  "ContainerIDFile": "",
  "LogConfig": {
      "Type": "json-file",
      "Config": {}
  },
  "NetworkMode": "none",
  "PortBindings": null,

and

"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "995f5b5b80714e4ad98e5db6fb6069980d32c308c72fe875cd5598463e7e0ebd",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {},
    "SandboxKey": "/var/run/docker/netns/995f5b5b8071",
    "SecondaryIPAddresses": null,
    "SecondaryIPv6Addresses": null,
    "EndpointID": "",
    "Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "",
    "IPPrefixLen": 0,
    "IPv6Gateway": "",
    "MacAddress": "",
    "Networks": {
        "none": {
            "IPAMConfig": null,
            "Links": null,
            "Aliases": null,
            "NetworkID": "bcf6c68b0dd8461cc977d6168ad01a98e6499826085d58e3f8a16eb18fef32f6",
            "EndpointID": "792de55003f37bfd94f2dac62d05dff792b10fc5eef0fe771e78c58accf5d2b7",
            "Gateway": "",
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": ""
        }
    }
}

I'm guessing Bridge should not be an empty string?

netstat shows nothing listening on port 80 either.

Member

caseydavenport commented May 5, 2016

@thockin @microadam - Correct me if I'm misunderstanding something, but I think hostPort just isn't going to work with CNI-based integrations at the moment (I see you're using Calico).

hostPort currently relies on Docker to configure the port mapping, but in the CNI case Docker doesn't have the knowledge to do this, since pods are started with --net=none.

I think this could be resolved by sending hostPort the way of NodePort and writing the iptables rules from k8s instead of Docker. I don't think kube-proxy watches pods at the moment, so it's probably not a trivial change.
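
To make that concrete: when Docker owns the mapping, it installs NAT rules roughly like the one below (the pod IP is illustrative, taken from the route output earlier in the thread). With CNI the pod is started with --net=none, nothing writes such a rule, and the host port stays closed:

iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.233.64.65:80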

Member

freehan commented May 5, 2016

Contributor

bboreham commented Aug 2, 2016

Referencing CNI discussion at containernetworking/cni#46

Member

thockin commented Aug 2, 2016

FWIW it is possible to do in many CNI implementations, but I hesitate to promise that it will work for all CNI implementations. The question then is which side of that fence it belongs on.


Contributor

bboreham commented Sep 5, 2016

New discussion about a solution in CNI at #31307

Contributor

danielschonfeld commented Sep 30, 2016

I can confirm this problem, and it renders all the Ingress examples useless: most of those nginx RC and DS examples use hostPort but not hostNetwork. It took me a while to stumble on this issue and figure out how to make it work.

The above is true as of v1.4, on bare metal.

chulkilee commented Sep 30, 2016

I can confirm this problem when setting up a cluster on bare metal using kubeadm. Since I want to run the nginx ingress controller on 80 and 443, but the apiserver already uses 443 in the cluster, hostNetwork is not an option in my case.

spiddy commented Oct 4, 2016

Same issue here; I stumbled on it when creating a cluster using kube-aws with v1.4.0_coreos.1 (AWS environment, without Calico). The problem is that the Ingress controller (the nginx one) cannot be used, since it relies on hostPort as mentioned above.

veryhumble commented Oct 6, 2016

I have the same issue. Running 1.4 on CentOS with Weave. Does anyone have a workaround? Without an ingress controller the cluster is pretty much unusable.

axsuul commented Oct 8, 2016

I also need to rely on hostPort with https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service.

I am using Weave for networking and need to expose a service on port 80 on the host.

howlym commented Oct 13, 2016

I have the same issue with kube-registry-proxy on 1.4 bare metal with kube-weave cni.

Contributor

bacongobbler commented Oct 19, 2016

We've also seen this issue with Deis. We use a hostPort with a container called registry-proxy to bypass the Docker --insecure-registry requirement for internal networks. We've seen this occur only on CoreOS-specific installations such as with kube-aws. Vagrant (Fedora), GKE (Debian), AWS (Ubuntu) and Minikube (custom ISO) all work without issue.

To reproduce:

  • install Workflow v2.7.0 on a kube-aws cluster
  • observe registry-proxy isn't listening on the host's port 5555 with netstat -tan | grep 5555

A bit of history/debugging is available on both deis/registry#64 and deis/workflow#442

carsonoid commented Oct 20, 2016

For those who care, one possible workaround is to use NodePort mappings to make ingress controllers work when using CNI; a sketch follows below. It's not an ideal solution for anyone that uses hostPort a lot, but if you only need it for ingress it can be acceptable. The big downside is that you have to map to ports in the allowed NodePort range, or change the range to allow lower ports.
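
A minimal sketch of that workaround (the names and selector are illustrative; by default nodePort must fall in the 30000-32767 range unless the apiserver's --service-node-port-range flag is changed):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080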

aboyett added a commit to aboyett/deis-charts that referenced this issue Oct 27, 2016

fix(registry): remove the registry proxy to fix CNI in 2.5.0
CNI networking doesn't properly implement HostPort yet[1]. as a result
the deis registry proxy (and other things depending on HostPort) fail to
work properly. this commit is an adaptation of a patch[2] prepared by
bacongobbler to address this issue by patching both the controller and
builder to not use the registry proxy and delete the proxy deployment
entirely

[1] kubernetes/kubernetes#23920
[2] deis/registry#64 (comment)
Member

caseydavenport commented Jul 13, 2017

Closing as fixed in v1.7.0

/close

flah00 added a commit to Adaptly/charts that referenced this issue Jul 17, 2017

[stable/nginx-ingress] Add hostNetwork option (#1250)
* [stable/nginx-ingress] Add hostNetwork option

Required for use with kubeadm / CNI using bare-metal clusters,
until kubernetes/kubernetes#23920 is
fixed

* [stable/nginx-ingress] bump chart version

* [stable/nginx-ingress] Actually use the value of hostNetwork

* bump to 0.6.0

* [stable/nginx-ingress] hostNetwork is part of podspec

yanns pushed a commit to yanns/charts that referenced this issue Jul 28, 2017


flah00 added a commit to Adaptly/charts that referenced this issue Aug 9, 2017

linhdsv commented Aug 11, 2017

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  template:
    metadata:
      labels:
        app: node-exporter
      name: node-exporter
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - image:  quay.io/prometheus/node-exporter:v0.14.0
        args:
        - "-collector.procfs=/host/proc"
        - "-collector.sysfs=/host/sys"
        name: node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
        resources:
          requests:
            memory: 30Mi
            cpu: 100m
          limits:
            memory: 50Mi
            cpu: 200m
        volumeMounts:
        - name: proc
          readOnly:  true
          mountPath: /host/proc
        - name: sys
          readOnly: true
          mountPath: /host/sys
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys

Is this issue resolved yet? I got the same error with my node-exporter service.

seletskiy commented Aug 11, 2017

@linhdsv: I've solved the issue by using a custom Calico CNI configuration that enables portmap: https://github.com/seletskiy/kubernetes-bootstrap
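
For anyone after the general shape, a minimal sketch of a CNI .conflist that chains the portmap plugin after the network plugin (the calico entry is abbreviated here; its real fields vary per install):

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "ipam": { "type": "calico-ipam" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true },
      "snat": true
    }
  ]
}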

hypnoglow commented Aug 15, 2017

A solution for Weave CNI is described here.

flah00 added a commit to Adaptly/charts that referenced this issue Aug 22, 2017


flah00 added a commit to Adaptly/charts that referenced this issue Aug 29, 2017


flah00 added a commit to Adaptly/charts that referenced this issue Sep 3, 2017


dghubble added a commit to poseidon/terraform-render-bootkube that referenced this issue Sep 13, 2017

dghubble added a commit to poseidon/terraform-render-bootkube that referenced this issue Sep 13, 2017
