Support Kubernetes liveness/readiness probes as condition #5
Comments
Hello @r-chris, supporting additional protocols is definitely not out of scope, though the configuration syntax is built around HTTP at the moment. In the future, one of the ideas I had in mind was to add support for mainstream infrastructure components, like databases (e.g. mysql, postgres) and caches (redis, memcached). The configuration would also be very simple, something like:

```yaml
services:
  - type: redis # Is this field really necessary if the protocol is in the uri anyways?
    url: "redis://somehost:6379"
    command: GET some_key
    conditions:
      - (TBD)
  - type: mysql # Is this field really necessary if the protocol is in the uri anyways?
    url: "mysql://somehost:3306/your_db_here"
    command: SELECT id FROM users WHERE name = 'John Doe'
    conditions:
      - (TBD)
```

The pros would mainly revolve around readability and ease of use, which is one of the most important points IMHO. The cons, however, would revolve around the fact that each new technology to be supported would need to be implemented individually. Likewise, this could grow the dependency tree quite a lot in the long run, so perhaps a more generic solution would be preferable, granted this can be done without adding too much complexity. To be frank, I haven't put too much thought into this yet, so if you have any suggestions, they would be greatly appreciated.
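One "more generic solution" in the spirit of the above would be a plain TCP connectivity check instead of per-technology clients. A minimal sketch of that idea (illustrative only, not Gatus code; the function name is made up): a service is considered reachable if a TCP connection succeeds within a timeout.

```python
import socket

def tcp_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Generic reachability check: the service is considered up if a TCP
    connection to host:port is established within the timeout.

    This avoids pulling in per-technology dependencies (redis, mysql, ...)
    at the cost of shallower checks: no command execution, no query results
    to evaluate in conditions."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The trade-off matches the discussion above: a TCP check adds no dependencies, but the only condition it can feed is "reachable or not".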
@TwinProduction Why not leave the dependencies to the container owner? Allow configuration of tools (types, in this case) to point to the individual binary/package to be used to run the health check. That way it would be infinitely extendable without relying on packaging up dependencies.
Sounds good. I was thinking of a way to utilize the status monitoring already built into Kubernetes/Docker, but I suppose those are already exposed through HTTP anyway?
Of course you could add this per container, but then you have to find a way to run that reliably next to your existing processes, which I think tends to be painful.
@r-chris It's certainly possible to do that, but the thing is, Kubernetes' health is determined by the probes, and the probe configuration offers things that aren't really possible to do from an external pod. For instance, in Kubernetes, you can run a command within the container to determine whether the container is healthy or not:

```yaml
livenessProbe:
  exec:
    command:
      - cat
      - /tmp/healthy
```

This isn't something that an external pod can replicate. Once again, though, this would severely limit what the conditions could be for such a service, as the only data you'd get would be whether the container (and, by association, the pod) is healthy or not.
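For reference, the exec-probe contract above is simple: the kubelet runs the command inside the container and treats exit code 0 as healthy. A rough illustration of that contract in plain Python (running the command locally rather than inside a container; the helper name is made up and is not part of Kubernetes or Gatus):

```python
import subprocess

def exec_probe_healthy(command: list[str], timeout: float = 5.0) -> bool:
    """Mimics the exec-probe contract: healthy iff the command exits 0.

    Kubernetes executes the command *inside* the container via the kubelet;
    an external poller has no equivalent mechanism, which is the limitation
    pointed out above."""
    try:
        completed = subprocess.run(
            command,
            timeout=timeout,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return completed.returncode == 0
    except (subprocess.TimeoutExpired, OSError):
        return False
```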
I would love to see DNS-based health checks, something like:
@mapl please create a separate feature request for this. This issue is more targeted at adding support for leveraging Kubernetes' probes to determine whether a service is healthy or not.
Somewhat related: as of v1.2.0, monitoring TCP services is now supported, and Kubernetes auto discovery has been added in v1.4.0. Unfortunately, I haven't tackled leveraging the readiness/liveness probe results to determine whether a service is healthy or not yet, but since the dependencies are here, this has certainly increased the odds of this being implemented in the future.

After reflecting on it a bit more, I've come up with a decent implementation that should be acceptable.

Example without using auto discovery:

```yaml
services:
  - name: twinnation
    url: "https://twinnation.org/health"
    interval: 30s
    kubernetes:
      service-name: "twinnation"
      namespace: "default"
    conditions:
      - "[KUBERNETES_LIVENESS_HEALTHY] == true"
      - "[KUBERNETES_READINESS_HEALTHY] == true"
```

Example using auto discovery:

```yaml
kubernetes:
  auto-discover: true
  cluster-mode: "out"
  service-template:
    interval: 30s
    conditions:
      - "[KUBERNETES_LIVENESS_HEALTHY] == true"
      - "[KUBERNETES_READINESS_HEALTHY] == true"
  namespaces:
    - name: default
      hostname-suffix: ".default.svc.cluster.local" # Might be unnecessary? See note below
      target-path: "/health" # Might be unnecessary? See note below
```

That said, some adjustments might be needed. By default, calls will be made to the required

I do have a new concern though:
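To make the proposed conditions concrete: a placeholder like [KUBERNETES_LIVENESS_HEALTHY] would presumably be resolved from the probe results and then compared against the right-hand side. A simplified sketch of that evaluation (the real Gatus condition engine supports more operators and placeholders; this illustrative function only handles `==` and string comparison):

```python
def evaluate_condition(condition: str, placeholders: dict[str, str]) -> bool:
    """Evaluates a condition of the form "[PLACEHOLDER] == value" by
    substituting resolved placeholder values, then comparing as strings.

    Simplified sketch: only the '==' operator is supported here."""
    left, operator, right = condition.split()
    if operator != "==":
        raise ValueError(f"unsupported operator: {operator}")
    for name, value in placeholders.items():
        left = left.replace(f"[{name}]", value)
    return left == right
```

Under this model, the auto-discovery template's conditions would pass only when both probe placeholders resolve to "true".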
Hi - this looks very promising. What is your view on supporting additional monitoring options besides sending HTTP requests? In particular, it would be great to also be able to monitor the health of backend services with gatus. I read that you are deploying this in Kubernetes, and I would want to do the same. It would be great if we could also probe for the Kubernetes health status of backend services that do not provide HTTP endpooints themselves. Anyway, would this be out of scope for your project here?