Health check for services is not working with the k8s CRD provider #6492

Closed
mengqi1984 opened this issue Mar 16, 2020 · 4 comments

mengqi1984 commented Mar 16, 2020

Do you want to request a feature or report a bug?

Bug

What did you do?

I want to use the health check function in Traefik 2.1.6, which is referenced in https://docs.traefik.io/reference/dynamic-configuration/kubernetes-crd/, to detect the heartbeat of my k8s services.
I set up the environment as the documentation describes, but no health check requests are sent to the k8s services at all.

Traefik -> IngressRoute (health check, HTTP, port 80) -> k8s service1 nginx
                                                      -> k8s service2 nginx (dead)

What did you expect to see?

Both of my backend k8s services should receive health check requests from the Traefik IngressRoute periodically (e.g., every 5 seconds).

What did you see instead?

I used tcpdump on the k8s services to monitor port 80; no HTTP health check requests were received at all.

Output of traefik version: (What version of Traefik are you using?)

Version: 2.1.6
Codename: cantal
Go version: go1.13.8
Built: 2020-02-28T17:40:18Z
OS/Arch: linux/amd64


What is your environment & configuration (arguments, toml, provider, platform, ...)?

traefik-deployment.yml

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
  namespace: default
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us
  namespace: default
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
  namespace: default
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
  namespace: default
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: traefikservices.traefik.containo.us
  namespace: default
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TraefikService
    plural: traefikservices
    singular: traefikservice
  scope: Namespaced
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - ingressroutes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - ingressroutetcps
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - tlsoptions
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - traefikservices
    verbs:
      - get
      - list
      - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik
  namespace: default
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:2.1
          args:
            - --api.insecure
            - --accesslog
            - --entrypoints.web.address=:80
            - --entryPoints.traefik.address=:8090
            - --api.dashboard=true
            - --providers.kubernetescrd
            - --log.level=DEBUG
            - --log.format=json
            - --serverstransport.insecureskipverify=true
          ports:
            - name: web
              containerPort: 80
            - name: admin
              containerPort: 8090
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: default
spec:
  selector:
    k8s-app: traefik-ingress-lb
  externalTrafficPolicy: Cluster
  ports:
    - protocol: TCP
      name: web
      port: 80
      targetPort: 80
      nodePort: 30080
    - protocol: TCP
      name: admin
      port: 8090
      targetPort: 8090
      nodePort: 30090
  type: NodePort

lx-ingressroutes.yml

---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nginx-web
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/index`)
      kind: Rule
      services:
        - name: nginx0-svc
          port: 80
          healthCheck:
            path: /ffff
            scheme: http
            intervalSeconds: 5
            timeoutSeconds: 2
        - name: nginx1-svc
          port: 80
          healthCheck:
            path: /dddd
            scheme: http
            intervalSeconds: 5
            timeoutSeconds: 2

nginx0.yml

---
apiVersion: v1
kind: Service
metadata:
  name: nginx0-svc
  namespace: default
  labels:
    app: nginx0-svc
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx0
  namespace: default
  labels:
    app: nginx0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx0
  template:
    metadata:
      labels:
        app: nginx0
    spec:
      containers:
      - name: nginx0
        image: nginx:latest
        ports:
        - containerPort: 80

nginx1.yml

---
apiVersion: v1
kind: Service
metadata:
  name: nginx1-svc
  namespace: default
  labels:
    app: nginx1-svc
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
  namespace: default
  labels:
    app: nginx1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: nginx:latest
        ports:
        - containerPort: 80

If applicable, please paste the log output in DEBUG level (--log.level=DEBUG switch)

(paste your output here)

ldez (Contributor) commented Mar 16, 2020

Hello,

The documentation is wrong; we will remove this section.

You have to use the health check systems from k8s.
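
For reference, the Kubernetes-native approach is a readiness probe on the backend Deployments: once the probe fails, the pod is removed from the Service endpoints and Traefik stops routing to it. A minimal sketch for one of the nginx backends above (the probe path and timings are illustrative, not part of the issue author's configuration):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx0
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx0
  template:
    metadata:
      labels:
        app: nginx0
    spec:
      containers:
      - name: nginx0
        image: nginx:latest
        ports:
        - containerPort: 80
        # When this readiness probe fails, Kubernetes removes the pod from
        # the nginx0-svc endpoints, so Traefik stops sending it traffic.
        readinessProbe:
          httpGet:
            path: /            # illustrative; point this at a real health endpoint
            port: 80
          periodSeconds: 5
          timeoutSeconds: 2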

ldez closed this as completed Mar 16, 2020
v2 automation moved this from issues to Done Mar 16, 2020

xbaun commented Mar 16, 2020

Is there a reason why health checks are not enabled for services in the k8s CRDs? This would be useful when requests are routed to services located outside of the k8s cluster, since k8s itself doesn't support a liveness probe for ExternalName services.
At the moment I have to use a custom Helm chart as a workaround, configuring the health check directly through the file provider combined with a mounted ConfigMap. It would be nice to use CRDs for this instead.
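
For reference, the file-provider workaround described above looks roughly like the following dynamic configuration, mounted into the Traefik pod (for example from a ConfigMap) and loaded with --providers.file.filename. The service name, backend URL and probe path are illustrative:

http:
  services:
    external-backend:                      # illustrative name
      loadBalancer:
        servers:
          - url: "http://192.0.2.10:80"    # illustrative external address
        healthCheck:
          path: /healthz                   # illustrative probe path
          interval: 5s
          timeout: 2s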

mengqi1984 (Author)

> Is there a reason why health checks are not enabled for services in the k8s CRDs? This would be useful when requests are routed to services located outside of the k8s cluster, since k8s itself doesn't support a liveness probe for ExternalName services.
> At the moment I have to use a custom Helm chart as a workaround, configuring the health check directly through the file provider combined with a mounted ConfigMap. It would be nice to use CRDs for this instead.

Yes, this is exactly what I want. I have several external services, but how can I know their health status, and how do I tell Traefik which ones are down?


yousong commented Mar 20, 2020

> Hello,
>
> The documentation is wrong; we will remove this section.
>
> You have to use the health check systems from k8s.

It occurred to me that a k8s Service does not perform active health checks. The pod liveness/readiness checks are done by the kubelet on each node. In the event of a node crash, we are left with the pod eviction strategy of the whole k8s cluster, which can only be configured globally through command-line arguments of kube-controller-manager and of each kubelet process, and which involves a timeout.

If that is the case, then it would be practically useful to have a "traditional" load balancer health check mechanism in effect with Traefik inside k8s.
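
For context, in a kubeadm-style cluster the eviction knobs referred to above are set as command-line flags on kube-controller-manager, typically in its static pod manifest. A rough, illustrative excerpt (flag names as of Kubernetes releases current at the time, values illustrative):

# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-controller-manager
    # how long an unresponsive node is tolerated before being marked NotReady
    - --node-monitor-grace-period=40s
    # how long after that before the node's pods are evicted
    - --pod-eviction-timeout=5m0s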

traefik locked and limited conversation to collaborators Apr 28, 2020