Expose http endpoint details #63
Comments
From a manageability standpoint, I'd advise against this. You want your cluster manager to be assigning ports so that it can do health checking, graceful shutdown, and longer-term network QoS based on port numbers.
I'm unsure whether I should support this; you can always implement your own variant of start_http_server. These are meant as starting points for the most common use cases.
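For illustration, a minimal sketch of such a variant, assuming prometheus_client.exposition.MetricsHandler (the request handler start_http_server itself uses); the function name here is made up:

```python
# Minimal sketch of a start_http_server variant that binds to an ephemeral
# port and hands the server back, so the caller can discover the bound port.
# Assumes prometheus_client.exposition.MetricsHandler, the handler that
# start_http_server itself uses; the function name is hypothetical.
import threading
from http.server import HTTPServer

from prometheus_client.exposition import MetricsHandler


def start_http_server_ephemeral(addr=''):
    """Serve metrics on an OS-assigned port and return the HTTPServer."""
    httpd = HTTPServer((addr, 0), MetricsHandler)
    thread = threading.Thread(target=httpd.serve_forever)
    thread.daemon = True
    thread.start()
    return httpd


httpd = start_http_server_ephemeral()
print('metrics exposed on port %d' % httpd.server_address[1])
```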
This isn't completely sufficient to solve this. Consider a gauge whose role is "the last time X happened": when a process dies, that value could go backwards.
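To make the failure mode concrete, a small illustrative example of such a gauge (the metric name is invented for the example):

```python
# Illustrative "last time X happened" gauge. After a worker restarts, its
# copy of this value is gone until the event next fires, so an aggregate
# such as max() across workers can appear to go backwards.
from prometheus_client import Gauge

LAST_RUN = Gauge('my_batch_job_last_run_unixtime',
                 'Last time the batch job ran, in Unix time')


def run_batch_job():
    # ... do the actual work ...
    LAST_RUN.set_to_current_time()
```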
Agreed; I’m gonna claim that most of your users don’t have a cluster manager. :) Also, maybe I misunderstand, but I’m solely talking about the metrics endpoint, not the application service port.
Fair enough. I’m closing this, then.
True, but wouldn’t that be something to solve using `max`?
The problem is that once the process isn't there anymore, Prometheus will stop scraping it, and the value it has will go stale, so the max won't see it.
That depends on the approach, though, right? Because the way I do it is to use Consul and register the workers’ metrics endpoints with per-worker service_ids. Therefore, unless I scale down, there are always metrics with those worker IDs. Am I missing something?
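As a rough sketch of that registration step, using the third-party python-consul package (the service name, tag, and port are assumptions for the example):

```python
# Rough sketch of registering a worker's metrics endpoint in Consul with a
# per-worker service_id, using the third-party python-consul package.
# The service name, tag, and port are assumptions; in practice the port
# would be whatever the metrics endpoint actually bound (e.g. via port 0).
import os

import consul

metrics_port = 9100  # stand-in for the port the metrics endpoint bound

c = consul.Consul()  # talks to the local agent on localhost:8500 by default
c.agent.service.register(
    name='myapp-metrics',
    service_id='myapp-metrics-%d' % os.getpid(),  # unique per worker
    port=metrics_port,
    tags=['prometheus'],
)
```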
The problem is scaling down, or any worker getting restarted.
Hm, how does restarting affect anything, assuming they re-register on start?
They no longer have the latest time that whatever event happened, as that's all stored in memory. The challenge is to make a Unicorn-style app have the same metric semantics as a threaded one.
Ah, I get it now. For them, the last moment it happened is “never” after a restart. Still feels like something that could be figured out within Prometheus (dropping 0 or something?). Anyhow, thanks for your time!
It's possible, but it's heading very much into advanced topics that something simple like this shouldn't require. |
Yeah, totally. I’m just in love with Prometheus’s philosophy of keeping the client side as simple as possible, so I’m exploring my options.
I’m building an infrastructure that uses service discovery to find metrics. Now, I find it very tedious to set ports by hand, so I prefer to listen on port 0 and then use service discovery to tell Prometheus about it.
Currently there’s no way to achieve that using client_python. It would be super helpful if you’d expose the `httpd` from https://github.com/prometheus/client_python/blob/master/prometheus_client/exposition.py#L64.
As a bonus point, this also solves the multiple-processes problem from #30 in a more Prometheus-like way, I find: just expose multiple metrics endpoints and let Prometheus figure it out.
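To make the request concrete, a purely hypothetical usage sketch, assuming start_http_server(0) were changed to return its `httpd` (at the time it returned nothing), with the registration call as a stand-in:

```python
# Hypothetical only: assumes start_http_server were changed to return its
# httpd, which is the change requested in this issue.
from prometheus_client import start_http_server


def register_with_service_discovery(port):
    # Stand-in for a real registration, e.g. against Consul.
    print('registering metrics endpoint on port %d' % port)


httpd = start_http_server(0)  # hypothetical return value
register_with_service_discovery(httpd.server_address[1])
```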