
Improvement: add /liveness endpoint for Kubernetes hosting #1677

Closed

JorritSalverda opened this issue Sep 23, 2016 · 9 comments
Labels
task/feature Requests for new features in Kong

Comments

@JorritSalverda commented Sep 23, 2016

Summary

To run Kong in Kubernetes, a /liveness endpoint would be useful. If /liveness fails to return a 200 OK, the node is killed and replaced, so the endpoint needs to report whether the node is healthy and correctly connected to the cluster. Not to be confused with whether it's ready to receive traffic; for that a separate /readiness endpoint is required - see #1678.
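
For illustration, a minimal sketch of how Kubernetes would consume these endpoints (the /liveness and /readiness paths are what's being proposed here, not endpoints Kong exposes today, and probing the proxy port 8000 is an assumption):

  livenessProbe:
    httpGet:
      path: /liveness   # proposed endpoint, not yet implemented
      port: 8000        # assumption: served on the proxy port, not the admin API
    periodSeconds: 10
    failureThreshold: 3
  readinessProbe:
    httpGet:
      path: /readiness  # proposed endpoint - see #1678
      port: 8000
    periodSeconds: 10
    failureThreshold: 3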

@JorritSalverda (Author)

I currently use /status for this, but that might be too costly for such a frequent check. I also don't think it stops returning 200 OK when the node hasn't joined the cluster correctly.
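
For reference, the probe I use today looks roughly like this (a sketch; /status is served by the admin API on port 8001):

  livenessProbe:
    httpGet:
      path: /status   # existing admin API endpoint
      port: 8001      # admin port, not the proxy port
    periodSeconds: 10
    failureThreshold: 3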

JorritSalverda changed the title from "Add /liveness endpoint for Kubernetes hosting" to "Improvement: add /liveness endpoint for Kubernetes hosting" Sep 23, 2016
@elruwen commented Oct 1, 2016

I use /status, too. The trouble with a health check is always: if you implement it very light, it doesn't check much; if you implement it very heavy, it can hurt performance. Maybe a single health page where a query parameter selects the depth of the check, something like light, normal, or heavy.
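
To sketch that idea with the nginx/Lua machinery Kong already ships (the /health path, the depth parameter, and the placeholder checks are all hypothetical):

  location /health {
    content_by_lua_block {
      -- depth is selected via query parameter, e.g. /health?depth=heavy
      local depth = ngx.var.arg_depth or "light"
      if depth == "heavy" then
        -- placeholder: full datastore/cluster checks would go here
        ngx.say("OK")
      elseif depth == "normal" then
        -- placeholder: a lighter connectivity check would go here
        ngx.say("OK")
      else
        -- light: the worker is up and able to answer
        ngx.say("OK")
      end
    }
  }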

@pamiel (Contributor) commented Nov 18, 2016

+1
This would certainly be very useful for integrating Kong into Kubernetes at a production-grade level.

@countergram

+1 not just for Kubernetes, but for other load-balancing environments as well.

/status is on 8001, not 8000, which is not ideal for some deployments (e.g. when trying to lock down access to the management API -- the public load balancer should not even have access to 8001).

@Tigraine

+1 I ran into huge performance issues using the /status endpoint when running Kong on Cassandra.
I solved this by using an execProbe instead: https://www.tigraine.at/2017/01/19/configuring-kong-health-checks-in-kubernetes

Once a bit more load was applied, the /status endpoint's response time would climb to 2-3 seconds and sometimes higher, leading to the Kong pod being killed in K8s.

But having a cheap HTTP endpoint would still be very beneficial, especially for L7 load balancing.
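
For reference, such an execProbe could look roughly like this (a sketch; the exact command and timings are assumptions rather than the linked post's configuration -- kong health is Kong's own CLI health check):

  livenessProbe:
    exec:
      command: ["kong", "health"]   # assumed check; exits non-zero when unhealthy
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5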

@peter-evans (Contributor)

@Tigraine The execProbe is a good workaround. Thank you!

@podon commented Jun 1, 2017

+1

/status or /health endpoint on HTTP:8000

Kong deleted a comment from hutchic Dec 8, 2017
@eduardobaitello

This would be a very nice feature. Too bad this issue has stalled :(

@neomantra commented Aug 13, 2018

With the advent of Kong 0.14, this can be done DIY using its new "Nginx Directive Injection" feature.

Here's how I did it -- pardon any typos, as I'm converting this from a Terraform-based deployment rather than pure YAML.

To your "kong-proxy" Deployment, add this to the spec's container environment variable list:

  - name: KONG_NGINX_PROXY_INCLUDE
    value: '/usr/local/kong/kube/proxy_health_check.conf'

Also add the relevant probes to the spec (which will automatically get picked up by something like GKE's ingress-gce):

  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /liveness
      port: 8000
      scheme: HTTP
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /readiness
      port: 8000
      scheme: HTTP

Now add that "included" file to the container via a ConfigMap; this is the actual health check implementation:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-proxy
  namespace: kong
data:
  proxy_health_check.conf: |
    location /liveness {
      content_by_lua_block {
        ngx.say("OK")
      }
    }
    location /readiness {
      content_by_lua_block {
        ngx.say("OK")
      }
    }
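
Not shown above: the ConfigMap also has to be mounted into the proxy container at /usr/local/kong/kube so the include path resolves. A sketch, where the volume and container names are assumptions:

  # In the kong-proxy Deployment's pod spec:
  volumes:
    - name: health-check-conf       # assumed volume name
      configMap:
        name: kong-proxy            # the ConfigMap defined above
  containers:
    - name: kong-proxy              # assumed container name
      volumeMounts:
        - name: health-check-conf
          mountPath: /usr/local/kong/kube   # matches KONG_NGINX_PROXY_INCLUDE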

I'm not sure what the best checks are, so I just used something simple. I tried to use the result of kong health but was tripped up by the permissions on /usr/local/kong/.kong_env, which the worker process's nobody user could not access, and I didn't want to loosen those permissions.

Something like this should be worked into Kong's Kubernetes examples, as it is essential for exposing Kong behind other load balancers.

kikito added the task/feature (Requests for new features in Kong) label and removed the proposal label Feb 8, 2021
guanlan closed this as completed May 26, 2021
Kong locked and limited conversation to collaborators May 26, 2021

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
