
Feature Request: Update Kubernetes Ingress Status #2173

Closed
micahhausler opened this issue Sep 25, 2017 · 42 comments
@micahhausler

micahhausler commented Sep 25, 2017

Currently (as of v1.4.0-rc3), Traefik does not update the Ingress status.loadBalancer field at all. This makes it difficult to auto-assign DNS with tools such as Kubernetes external-dns. Other Kubernetes ingress controllers update this field (see the update-status flag on the nginx controller).

The Kubernetes LoadBalancerIngress contains either a hostname or an ip field consisting of a single string value. It would be fantastic to have a configuration option to specify the value to update managed Ingress rules with: a user-configurable flag like --kubernetes.ingressEndpoint.ip or --kubernetes.ingressEndpoint.hostname.
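For illustration, a rough sketch of how such flags might be passed in a DaemonSet container spec (these flag names are the proposal, not options that exist as of v1.4.0-rc3):

args:
- --kubernetes
# Hypothetical flag from this proposal: a static hostname to write into
# the Ingress status.loadBalancer.ingress field
- --kubernetes.ingressEndpoint.hostname=my-elb-hostname.us-east-1.elb.amazonaws.com
# or a static IP instead (also hypothetical; value is an example):
# - --kubernetes.ingressEndpoint.ip=203.0.113.10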

Do you want to request a feature or report a bug?

Feature

What did you do?

my-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
spec:
  rules:
  - host: my-service.example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 3000
        path: /
$ kubectl apply -f my-ingress.yaml

What did you expect to see?

$ kubectl get -f my-ingress.yaml -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
spec:
  rules:
  - host: my-service.example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 3000
        path: /
status:
  loadBalancer:
    ingress:
    - hostname: my-elb-hostname.us-east-1.elb.amazonaws.com

What did you see instead?

$ kubectl get -f my-ingress.yaml -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
spec:
  rules:
  - host: my-service.example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 3000
        path: /
status:
  loadBalancer: {}

Output of traefik version: (What version of Traefik are you using?)

$ docker run -it --rm traefik:v1.4.0-rc3 version
Version:      v1.4.0-rc3
Codename:     roquefort
Go version:   go1.9
Built:        2017-09-18_04:38:27PM
OS/Arch:      linux/amd64

What is your environment & configuration (arguments, toml, provider, platform, ...)?

Deployed in Kubernetes via daemonset

# No config file used
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-858bd5f3
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: traefik-ingress-controller
  template:
    metadata:
      labels:
        app: traefik-ingress-controller
    spec:
      containers:
      - args:
        - --web
        - --kubernetes
        image: traefik:latest
        name: traefik-ingress-controller
        ports:
        - containerPort: 80
          protocol: TCP
        - containerPort: 8080
          protocol: TCP
      nodeSelector:
        node-role.kubernetes.io/worker: ""

If applicable, please paste the log output in debug mode (--debug switch)

# N/A
@timoreimann
Contributor

Hi @micahhausler, thank you for your feature request.

Do I understand correctly that the field should be populated based on a static custom configuration option? Or, to put it differently, it should not be computed dynamically somehow?

@micahhausler
Author

The Nginx ingress controller elects a leader and only routes traffic through the leader, so it updates each managed Ingress rule's status.loadBalancer.ingress.ip field to the IP of the leader's node.

With Traefik on K8s you could do that, but my setup runs multiple Traefik pods behind a Kubernetes LoadBalancer Service (AWS ELB). The Traefik pods don't inherently know which Kubernetes Service they are behind, so the proposal above is to simply set a manual configuration of the IP or hostname (AWS ELB uses a hostname).

The best option would probably be to have default behavior like the Nginx ingress controller where the master updates the status fields of all its managed ingress routes to the node's IP, with a configurable override.

The MVP that solves my use case would just be a static configuration option.

@timoreimann
Contributor

@micahhausler appreciate the clarification.

The static (MVP) approach should be fairly easy to implement. @dtomcej @errm any thoughts / objections?

@micahhausler
Author

micahhausler commented Sep 25, 2017

After digging into the nginx ingress controller: it actually has a --publish-service flag that it uses to look up the Service that is fronting the ingress controller, and it applies that Service's status.loadBalancer directly to each Ingress rule. This would also be easy to implement for the MVP and could avoid a static configuration.
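For reference, a minimal sketch of how that flag is wired up on the nginx controller side (the namespace and Service name here are examples):

args:
# Namespace/name of the Service that fronts the ingress controller;
# its status.loadBalancer is copied onto every managed Ingress.
- --publish-service=kube-system/nginx-ingress-lb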

@dtomcej
Contributor

dtomcej commented Oct 13, 2017

I'm not really sure what the purpose of this would be. Is it to update the ingress status to show which ingress controller has satisfied the ingress?

If so, we would run into issues.

In traefik, all nodes satisfy requests, and all nodes forward traffic to backend pods.

This would be constantly overwritten as each ingress controller would feel "responsible".

I suppose we could have the master node (which is responsible for running jobs etc) handle this annotation, but again, it would not be accurate...

@micahhausler
Author

micahhausler commented Oct 13, 2017

@dtomcej The purpose was stated in the original issue: for use with other Kubernetes components (like external-dns) and to conform to the way other ingress controllers behave. I'm not advocating just putting in one node IP (though that's what the nginx ingress controller does); I'm advocating setting it to the Kubernetes Service that fronts Traefik (a load balancer DNS name on AWS, or a VIP on GCP), in which case all nodes would still answer requests. It wouldn't have to be set, but the option would be very nice.

The master node would still be responsible for setting the status, but it would just assign a service address.

Does that make sense?
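To make that concrete, here is a sketch (names are illustrative) of a LoadBalancer Service fronting the Traefik pods; its status is exactly what the controller would copy onto each managed Ingress:

apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    app: traefik-ingress-controller
  ports:
  - port: 80
    targetPort: 80
# Once the cloud provider provisions the ELB, Kubernetes populates:
# status:
#   loadBalancer:
#     ingress:
#     - hostname: my-elb-hostname.us-east-1.elb.amazonaws.com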

@oivindoh

oivindoh commented Dec 5, 2017

In my case, Nginx simply sets the external IP of the load-balanced Service fronting it, which is a fantastic solution that plays well with e.g. external-dns.

@errm
Contributor

errm commented Dec 5, 2017

  • I think we should implement something similar to the --publish-service flag on the nginx controller.
  • It might also be useful to provide a static configuration, e.g. I have worked with a cluster where Traefik was fronted by a NodePort Service with an external load balancer in front of that (see the sketch below)... unless you are on a cloud provider, having the Service status populated correctly is unlikely...
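To illustrate the second bullet, a sketch (names illustrative) of a NodePort Service that an external load balancer targets; Kubernetes never populates its status, so only a static setting could yield a sensible Ingress status here:

apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: NodePort     # no cloud integration: status.loadBalancer stays empty
  selector:
    app: traefik-ingress-controller
  ports:
  - port: 80
    nodePort: 30080  # the external load balancer points at this port on each node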

@timoreimann
Contributor

#2173 (comment) sounds reasonable to me. @dtomcej WDYT?

@dtomcej
Contributor

dtomcej commented Jan 6, 2018

No real objections here.

It does seem very environment-dependent, and automatic detection could be unreliable for new users with different use cases, as @errm mentioned, but they should be able to fall back to static configuration.

I would say give it a go!

@jcardoso-bv

Any updates to this issue? I see that as of v1.5.0 Traefik is still not setting status.loadBalancer on our Kubernetes cluster.

Manually defining external-dns.alpha.kubernetes.io/target in our manifests is starting to become unmanageable as an ongoing workaround.

@timoreimann
Contributor

Nobody is currently working on this, presumably due to bandwidth constraints only. Whoever feels like picking up the task and submitting a PR is absolutely welcome. :-)

@jcardoso-bv

I would if I knew Go well enough :|

@so0k

so0k commented Mar 23, 2018

For people who want to use Traefik with external-dns and are considering the external-dns.alpha.kubernetes.io/target workaround (until this issue is resolved):

Please note that you will have to change your --txt-prefix flag for external-dns, or you will get errors about CNAME and TXT records with overlapping record names (at least on R53).

Changing the --txt-prefix flag will orphan all your existing TXT ownership records, so this is a significant change and may require some migration steps.

see also kubernetes-sigs/external-dns#262

simple prefixer to ease migration - https://sourcegraph.com/github.com/honestbee/devops-tools@2b1006a7d55f1dd0a439f48f4fbc763b15822832/-/blob/r53-txtprefix/main.go#L56
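Putting the workaround together, a hedged sketch (the hostname and prefix values are examples):

# On each Ingress, tell external-dns what to point the record at:
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/target: my-elb-hostname.us-east-1.elb.amazonaws.com

# And on the external-dns container, disambiguate the TXT ownership records:
args:
- --txt-prefix=extdns-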

@dtomcej
Contributor

dtomcej commented May 7, 2018

@micahhausler @so0k @jcardoso-bv Is this still an issue?

Please let us know!

Thanks!

@jcardoso-bv

Very much so. The current workaround of declaring the desired DNS hostname via an external-dns annotation is not sustainable long term.

So much so that we've stopped using Traefik entirely on our clusters and moved to the NGINX and ALB ingress controllers, where the status.loadBalancer field is populated correctly.

@timurkhafizov

Hi @dtomcej.

Yes, it is still an issue, unfortunately. I agree with @jcardoso-bv: manually managing external-dns.alpha.kubernetes.io/target is a big pain.
We do like Traefik, so we populate domains via the Service instead, but that involves manual work too.

@micahhausler
Author

Same story as @jcardoso-bv, I’ve switched to other controllers for the same reason (among other traefik pain points with Kubernetes)

@oivindoh

oivindoh commented May 9, 2018

I've added some tooling around my deployments that picks out the IP associated with Traefik and updates DNS names, but it would be much better to be able to go back to having external-dns (or similar) handle this dynamically by having the data available in the ingress status.

@so0k

so0k commented May 10, 2018

We also abandoned Traefik due to this issue

@chrizmo

chrizmo commented May 11, 2018

We also have this issue.

@ldez ldez self-assigned this May 14, 2018
@dtomcej
Contributor

dtomcej commented May 14, 2018


We are getting close ;) Expect a PR soon!

@jcardoso-bv

Looking good :)

@ldez
Member

ldez commented May 18, 2018

Enjoy! This issue is resolved in the next version (1.7) 🎉

@traefiker traefiker added this to the 1.7 milestone May 18, 2018
@yue9944882
Contributor

Updating the ingress status is more of a cloud-provider concern, but Traefik is generally a cluster-wide reverse proxy, which is a subset of a full-featured ingress controller. So configuring a static hostname/IP and patching that value into ingresses looks like faking it to me, frankly speaking. At least until someday Traefik provides interfaces for cloud providers 😄 AFAICT

@ms-choudhary

ms-choudhary commented Aug 21, 2018

@ldez @dtomcej 1.7 doesn't seem to work for me

I'm using the stable/traefik helm chart for deployment and just updated imageTag to 1.7. However, the status is still not updated on the Ingress:

status:
  loadBalancer: {}

Traefik version I'm using:

docker run -it --rm traefik:1.7 version
Version:      v1.7.0-rc3
Codename:     maroilles
Go version:   go1.10.3
Built:        2018-08-01_01:37:51PM
OS/Arch:      linux/amd64

@timurkhafizov

@ms-choudhary you have to instruct Traefik to update the Ingress status. Please check the docs - https://docs.traefik.io/v1.7/configuration/backends/kubernetes/#ingressendpoint and the last section of https://docs.traefik.io/v1.7/configuration/backends/kubernetes/#configuration
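From those docs, the TOML configuration looks roughly like this (values are examples; use either the static fields or publishedService, not both):

[kubernetes]
  [kubernetes.ingressEndpoint]
    hostname = "my-elb-hostname.us-east-1.elb.amazonaws.com"
    # or: ip = "203.0.113.10"
    # or: publishedService = "kube-system/traefik"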

@amalucelli

Using helm and v1.7, I still needed some additional steps to get this working:

  1. Manually add the following configuration to the configmap:

[kubernetes]
  [kubernetes.ingressEndpoint]
    publishedService = "{{ .Release.Namespace }}/{{ template "traefik.fullname" . }}"

  2. Manually edit the ClusterRole and add this rule:

- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update

  3. Recreate the Traefik pod to get the new configuration applied.

I can also open a PR with those changes to the official chart; I'm not sure if this is already planned.

@alex-s-team

I tried @amalucelli's solution as well as all the Google results I could find, but the status is not populated, and therefore the ingress status stays at "initializing". I think the problem is that the services do not have a public IP (this is a test setup: no cloud provider, no external DNS, all private IPs). The IP address of the node the load balancer is running on is 10.2.2.20, but it is not set anywhere in the ingress object or in Traefik.
Traefik is routing everything correctly; it's just not updating the status. Is there a workaround anyone can think of?

@panho66

panho66 commented Oct 22, 2018

Same problem as @alex-s-team.

@sebastiansirch

We tried @amalucelli's solution as well, but are still facing the same problem:

  status:
    loadBalancer: {}

@dtomcej
Contributor

dtomcej commented Oct 29, 2018

This issue has been closed for almost 6 months. If you are encountering issues, please open a new issue instead of bumping a long-closed feature request.

Thanks!

@amalucelli

Actually I'm using v1.7.2, and it seems to be working as expected; here is my configuration using helm:

image: traefik
imageTag: v1.7.2
rbac:
  enabled: true
metrics:
  prometheus:
    enabled: true
kubernetes:
  ingressEndpoint:
    publishedService: "traefik/traefik"
ssl:
  enabled: true

@phynias

phynias commented Nov 1, 2018

Wondering if anyone has this working without helm. Like the OP, I am using a DaemonSet YAML to get Traefik going, and I'm hoping to find the config to add to get this working.

@rbq
Contributor

rbq commented Dec 7, 2018

Same here: using the DaemonSet YAML example with NET_BIND_SERVICE on a single-node bare-metal installation, and I would really like to see that status populated.

@materemias

The helm chart only generates the proper configuration; see how traefik.toml is generated:
https://github.com/helm/charts/blob/master/stable/traefik/templates/configmap.yaml#L143

@reymonlu

Same issue; has anyone discovered anything new?

@rbq
Contributor

rbq commented Jan 24, 2019

@reymonlu For me it was just the missing RBAC permissions when using the provided DaemonSet.
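For anyone else landing here, the missing permission boils down to the ClusterRole rule @amalucelli listed above; as a sketch:

- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update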

@reymonlu

reymonlu commented Jan 24, 2019

@rbq Thanks for your answer. I updated the RBAC permissions too, but my ingress rules are still on "initializing"...
Can you show me your traefik.toml file? I don't know what to put in the [kubernetes] section.

@rbq
Contributor

rbq commented Jan 24, 2019

@reymonlu Not using a config file, just arguments in the container spec of my DaemonSet:

      - image: traefik
        name: traefik-ingress-lb
        ports: [...]
        securityContext: [...]
        args:
        - --api
        - --kubernetes
        - --kubernetes.ingressendpoint.hostname=www.my-domain.com
        - --logLevel=INFO
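(The docs linked earlier also describe a dynamic variant, --kubernetes.ingressendpoint.publishedservice=<namespace>/<service>, which copies the fronting Service's status onto each Ingress instead of a static hostname.)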

@reymonlu

@rbq Thank you for that. I have some difficulty understanding this [kubernetes.ingressEndpoint] setting.
I have a cluster of 3 nodes: 1 etcd + controlplane and 2 workers. When I used the Nginx ingress controller, the status field of all my ingresses was filled with the public IPs of my 2 worker nodes (the master node is unschedulable). I don't get the purpose of this field or what I should configure in it.
If you have some time to explain it to me, I would be very grateful.

@rbq
Contributor

rbq commented Jan 25, 2019

@reymonlu I'm actually not sure about the exact purpose of this field. Maybe it's a fallback for a default Ingress w/o host configuration?

@traefik traefik locked and limited conversation to collaborators Sep 1, 2019