ingress is not listening on port 80 #4799

Closed
wxq851685279 opened this issue Dec 1, 2019 · 13 comments

Comments

@wxq851685279

wxq851685279 commented Dec 1, 2019

NGINX Ingress controller version: 0.26.1

Kubernetes version (use kubectl version): v1.16.3

Environment:

  • Cloud provider or hardware configuration:

  • OS (e.g. from /etc/os-release): deepin 15.11

  • Kernel (e.g. uname -a): Linux w-pc 5.4.0-xanmod0 #1.191125 SMP PREEMPT Mon Nov 25 16:18:17 -03 2019 x86_64 GNU/Linux

  • Install tools:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
  • Others:

What happened:
localhost:31486 is accessible normally, but port 80 is not accessible. Why?

kubectl get service -n ingress-nginx
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.103.3.137   <pending>     80:31486/TCP,443:31929/TCP   74m

Using netstat shows no process listening on port 80:

netstat  -tunlp | grep 80
tcp        0      0 192.168.2.187:2380      0.0.0.0:*               LISTEN      6931/etcd           

What you expected to happen:
Accessible via localhost port 80

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
This is a single-node k8s test environment built locally.

@aledbf
Member

aledbf commented Dec 1, 2019

Cloud provider or hardware configuration:

If that's empty, I assume you are trying to use the ingress controller on bare metal (or Docker in Docker).
In that case you cannot use a service of type=LoadBalancer. Please check https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
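For bare-metal clusters, the guide linked above exposes the controller through a NodePort Service rather than a LoadBalancer. A minimal sketch of such a Service (the names, namespace, and selector labels are assumed to match the upstream ingress-nginx manifests; verify them against your install):

```yaml
# Sketch: NodePort Service for bare-metal clusters (names/labels assumed
# from the upstream ingress-nginx manifests; adjust to your install).
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080   # reachable on every node at :30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```

With a NodePort Service the controller is reached on a high port of every node, not on port 80; reaching port 80 directly requires an external load balancer, MetalLB, or the hostNetwork approach discussed below in this thread.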

@wxq851685279
Author

Cloud provider or hardware configuration:

If that's empty, I assume you are trying to use the ingress controller on bare metal (or Docker in Docker).
In that case you cannot use a service of type=LoadBalancer. Please check https://kubernetes.github.io/ingress-nginx/deploy/baremetal/

I don't understand what to do to solve this problem.
I tried installing on Ubuntu 18.04 and it worked normally. Could this be related to the operating system?

@aledbf
Member

aledbf commented Dec 2, 2019

I tried installing on Ubuntu 18.04 and it worked normally. Could this be related to the operating system?

If you used Ubuntu and it worked, then the problem is not related to ingress-nginx itself.
Did you check whether the operating system is supported by Kubernetes?

Keep in mind the ingress controller is just another pod and has nothing to do with the Kubernetes networking exposing ports or configuring iptables rules.

@wxq851685279
Author

I tried installing on Ubuntu 18.04 and it worked normally. Could this be related to the operating system?

If you used Ubuntu and it worked, then the problem is not related to ingress-nginx itself.
Did you check whether the operating system is supported by Kubernetes?

Keep in mind the ingress controller is just another pod and has nothing to do with the Kubernetes networking exposing ports or configuring iptables rules.

The problem has been solved; you can add the following to the deployment:

spec: 
  template:
    spec:
      hostNetwork: true
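The same change can also be applied as a strategic-merge patch instead of editing the manifest by hand (the deployment name nginx-ingress-controller is an assumption; check kubectl -n ingress-nginx get deploy). Note that when a pod runs with hostNetwork: true, setting dnsPolicy: ClusterFirstWithHostNet is commonly needed so it can still resolve in-cluster service names:

```yaml
# patch-hostnetwork.yaml -- sketch of a strategic-merge patch; apply with
#   kubectl -n ingress-nginx patch deployment nginx-ingress-controller \
#     --patch "$(cat patch-hostnetwork.yaml)"
# (deployment name assumed from the mandatory.yaml manifest)
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS working under hostNetwork
```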

@aledbf
Member

aledbf commented Dec 2, 2019

The problem has been solved, you can add the following code to the deployment.

What do you mean? In the provided yaml files to install the ingress controller?
If that's the request, no.

What you did (hostNetwork: true) means you can only have one pod of the ingress controller per node. Something not everyone expects.

@wxq851685279
Author

The problem has been solved, you can add the following code to the deployment.

What do you mean? In the provided yaml files to install the ingress controller?
If that's the request, no.

What you did (hostNetwork: true) means you can only have one pod of the ingress controller per node. Something not everyone expects.

I don't use this approach in a production environment, I just use it locally for testing.

@msrinivascharan

msrinivascharan commented Apr 29, 2020

I am working on a POC and added "hostNetwork: true" to the ingress controller deployment manifest. It worked fine for me. Thank you @wxq851685279

@alansoliditydev

Adding "hostNetwork: true" worked for me as well.
My strategy is one nginx-ingress-controller per node, which fits my cases very well.
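The one-controller-per-node pattern pairs naturally with a DaemonSet instead of a Deployment. A sketch under the assumption that the image, labels, and service account name match the upstream mandatory.yaml (verify against your install; the container args are abbreviated and should be copied from the Deployment there):

```yaml
# Sketch: run the controller as a DaemonSet so every node gets exactly one
# pod -- a natural fit with hostNetwork: true. Names assumed from mandatory.yaml.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true                   # bind :80/:443 directly on each node
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: nginx-ingress-serviceaccount  # name assumed from mandatory.yaml
      containers:
        - name: nginx-ingress-controller
          # image/args abbreviated; copy them from the Deployment in mandatory.yaml
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          ports:
            - containerPort: 80
            - containerPort: 443
```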

@majorinche

It works for me too, but I'm not sure why.
We did not set "hostNetwork: true" before; ingress was still OK and ran for several months.

Recently, only after doing something like restarting the Docker service, ingress-nginx stopped working. curl x.x.x.x:80 no longer succeeds. Very strange.

@dprateek1991

I would like to thank @aledbf for the solution here. I had been facing 504 Gateway Timeout and 502 Bad Gateway errors on my services (Apache Spark History Server and another standalone service for Spark). Both services were listening on 0.0.0.0:18080 and 0.0.0.0:8080 respectively within the container/pod, and it took me a week to find this setting.

Adding this setting to the deployment of both services magically fixed the issues:

spec:
  template:
    spec:
      hostNetwork: true

@JamesCHub

JamesCHub commented Nov 5, 2021

Just a bit more context on this - in some tutorials you might be advised to simply do a
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/baremetal/deploy.yaml

If you're working with a baremetal implementation on a private LAN without a load balancer, you'll need to modify this step a bit.

Instead of applying that deploy.yaml directly, do a wget (e.g. wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/baremetal/deploy.yaml), then edit the deploy.yaml: scroll down to the Deployment resource and add the hostNetwork: true key/value pair. For example:

      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      hostNetwork: true
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission

etc.

So what you're doing is modifying the spec of the template for the controller deployment.

Deploy the modified version with something like:

kubectl apply -f deploy.yaml

The Ingress resource that you create to use this deployment/controller should refer to it like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
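For completeness, a fuller sketch of such an Ingress (the backend service name web-api, its port, and the host are placeholders for your own setup):

```yaml
# Sketch: complete Ingress referring to the nginx controller; the service
# name "web-api", its port, and the host are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: web-api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-api
                port:
                  number: 8080
```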

You can then do a describe like this:

kubectl describe service -n ingress-nginx ingress-nginx-controller

And find out what lucky node has been designated as your ingress.

@wxq851685279
Author

wxq851685279 commented Nov 7, 2021

@JamesCHub
METALLB can be installed on bare metal to solve this problem.

https://metallb.universe.tf/
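With MetalLB installed, a layer-2 address pool lets the existing type=LoadBalancer service get a real EXTERNAL-IP instead of staying <pending>. A sketch using the older ConfigMap-style configuration (newer MetalLB releases use IPAddressPool/L2Advertisement CRDs instead; the address range is a placeholder for free IPs on your LAN):

```yaml
# Sketch: MetalLB layer-2 address pool (legacy ConfigMap configuration).
# The address range is a placeholder -- use unassigned IPs on your LAN.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.2.240-192.168.2.250
```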

@cdprete

cdprete commented Jun 10, 2023

I tried installing on Ubuntu 18.04 and it worked normally. Could this be related to the operating system?

If you used Ubuntu and it worked, then the problem is not related to ingress-nginx itself.
Did you check whether the operating system is supported by Kubernetes?
Keep in mind the ingress controller is just another pod and has nothing to do with the Kubernetes networking exposing ports or configuring iptables rules.

The problem has been solved, you can add the following code to the deployment.

spec: 
  template:
    spec:
      hostNetwork: true

Hi @wxq851685279.
When I add this to all my deployments (3 in total), only one of them starts correctly.
The other 2 stay in the "Pending" state with the message 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod...
What should I do then?
I'm actually using k3s with Rancher Desktop as k8s cluster: https://ranchermanager.docs.rancher.com/v2.7/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration
