This page contains a general FAQ for Ingress; there is also a per-backend FAQ in this directory with site-specific information.
Table of Contents
- How is Ingress different from a Service?
- I created an Ingress and nothing happens, what now?
- How do I deploy an Ingress controller?
- Are Ingress controllers namespaced?
- How do I disable an Ingress controller?
- How do I run multiple Ingress controllers in the same cluster?
- How are the Ingress controllers tested?
- An Ingress controller E2E is failing, what should I do?
- Is there a roadmap for Ingress features?
How is Ingress different from a Service?
The Kubernetes Service is an abstraction over endpoints (pod-ip:port pairings). The Ingress is an abstraction over Services. This doesn't mean all Ingress controllers must route through a Service, but rather that routing, security, and auth configuration are represented in the Ingress resource per Service, not per pod. As long as this configuration is respected, a given Ingress controller is free to route to the DNS name of a Service, the VIP, a NodePort, or directly to the Service's endpoints.
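As an illustrative sketch (the Service and app names here are hypothetical), an Ingress references a Service by name, and the Service in turn selects pods:

```yaml
# A Service abstracting over pod endpoints (hypothetical names).
apiVersion: v1
kind: Service
metadata:
  name: echoheaders
spec:
  selector:
    app: echoheaders
  ports:
  - port: 80
---
# An Ingress abstracting over the Service: routing is configured
# per Service, not per pod.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
spec:
  backend:
    serviceName: echoheaders
    servicePort: 80
```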
I created an Ingress and nothing happens, what now?
Run `kubectl describe` on the Ingress. If you see create/add events, you have an Ingress controller running in the cluster; otherwise, you either need to deploy or restart your Ingress controller. If the events associated with an Ingress are insufficient to debug, consult the controller-specific FAQ.
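A minimal debugging session might look like the following (the Ingress name `test` is hypothetical, and where your controller pods live depends on how it was deployed):

```shell
# Describe the Ingress and look for create/add events at the
# bottom of the output (hypothetical Ingress name).
kubectl describe ing test

# No events usually means no controller is running; look for one
# across namespaces (pod names vary by deployment).
kubectl get pods --all-namespaces | grep -i ingress
```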
How do I deploy an Ingress controller?
The following platforms currently deploy an Ingress controller addon: GCE, GKE, minikube. If you're running on any other platform, you can deploy an Ingress controller by following this example.
Are Ingress controllers namespaced?
Ingress is namespaced: two Ingress objects can have the same name in two different namespaces, and each may only point to Services in its own namespace. An admin can deploy an Ingress controller such that it only satisfies Ingresses from a given namespace, but by default, controllers watch the entire Kubernetes cluster for unsatisfied Ingresses.
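For example (namespaces and names here are hypothetical), these two Ingress objects can coexist because they live in different namespaces, and each must reference a Service in its own namespace:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
  namespace: team-a
spec:
  backend:
    serviceName: echoheaders   # must exist in team-a
    servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
  namespace: team-b
spec:
  backend:
    serviceName: echoheaders   # must exist in team-b
    servicePort: 80
```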
How do I disable an Ingress controller?
Either shut down the controller satisfying the Ingress, or use the `kubernetes.io/ingress.class` annotation:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  backend:
    serviceName: echoheaders-https
    servicePort: 80
```
The GCE controller will only act on Ingresses with the annotation value of "gce" or empty string "" (the default value if the annotation is omitted).
The nginx controller will only act on Ingresses with the annotation value of "nginx" or empty string "" (the default value if the annotation is omitted).
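If the Ingress already exists, the class annotation can be set or changed in place; a sketch, assuming an Ingress named `test`:

```shell
# Point the Ingress at the nginx controller (hypothetical Ingress name).
kubectl annotate ing test kubernetes.io/ingress.class=nginx --overwrite

# Verify the annotation took effect.
kubectl get ing test -o yaml | grep ingress.class
```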
To completely stop the Ingress controller on GCE/GKE, please see this FAQ.
How do I run multiple Ingress controllers in the same cluster?
Multiple Ingress controllers can co-exist and key off the `kubernetes.io/ingress.class` annotation.
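A sketch of this pattern (Ingress and Service names are hypothetical): each Ingress selects its controller via the class annotation, so both controllers can run in the same cluster without fighting over the same objects.

```yaml
# Satisfied by the GCE controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  backend:
    serviceName: public-svc
    servicePort: 80
---
# Satisfied by the nginx controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: internal-svc
    servicePort: 80
```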
How are the Ingress controllers tested?
Testing for the Ingress controllers is divided between:
- Ingress repo: unit tests and pre-submit integration tests run via Travis
- Kubernetes repo: pre-submit e2e, post-merge e2e, per release-branch e2e
An Ingress controller E2E is failing, what should I do?
First, identify the reason for failure.
- Look at the build log; if there's nothing obvious, search for quota issues.
- Find events logged by the controller in the build log
- Ctrl+f "quota" in the build log
- If the failure is in the GCE controller:
- Navigate to the test artifacts for that run and look at glbc.log
- Look up the `PROJECT=` line in the build log, and navigate to that project looking for quota issues (`gcloud compute project-info describe project-name`, or navigate to the cloud console > compute > quotas)
- If the failure is for a non-cloud controller (e.g. nginx):
- Make sure the firewall rules required by the controller are open on the right ports (80/443), since the Jenkins builders run outside the Kubernetes cluster.
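The quota search from the steps above can be sketched from the command line; `build.log` here is a stand-in for a real downloaded build log:

```shell
# Create a stand-in build log (real logs come from the Jenkins build).
printf 'I0101 10:00:00 sync succeeded\nE0101 10:00:05 googleapi: Error 403: Quota exceeded for firewalls\n' > build.log

# The "Ctrl+f quota" step, from the command line.
grep -i "quota" build.log
```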
Note that you currently need help from a test-infra maintainer to access the GCE test project. If you think the failures are related to project quota, cleanup leaked resources and bump up quota before debugging the leak.
If the preceding identification process fails, it's likely that the Ingress API is broken upstream. Try to set up a dev environment from HEAD and create an Ingress. You should be deploying the latest release image to the local cluster.
If neither of these two strategies produces anything useful, you can either start reverting images, or dig into the underlying infrastructure the e2es run on for more nefarious issues (like permission and scope changes for some set of nodes on which an Ingress controller is running).
Is there a roadmap for Ingress features?
The community is working on it. There are currently too many efforts in flight to serialize into a flat roadmap; in the meantime, you might be interested in the issues in this repo.