Configure the way Prometheus is exposed #484
To be clear, do you mean that you want the Prometheus pod itself to be exposed on a different port? Or do you mean that you want to change the port used by the containers for exposing the metrics endpoints that Prometheus scrapes?

Changing the port used by the containers for exposing the metrics endpoints that Prometheus scrapes.
OK, so you want to change the port on which the model exposes metrics. In many of our examples (e.g. the model with metrics example), when you deploy a model its REST endpoint is exposed on port 8000.

I'm wondering whether you'd be happy with an approach that changes that 8000 to 9000. That would mean all REST requests to the model go to 9000 instead of 8000, not just Prometheus scrapes. Or do you need to split it, so that most REST requests still go to 8000 and the Prometheus endpoint alone moves to 9000? There is an ENGINE_SERVER_PORT environment variable injected into that pod which controls which port the engine serves on. I'm in the process of looking into the options here. For now I'm just trying to determine whether you mind which port is used for REST requests to the model, or whether your only port constraint is that Prometheus data has to be available on 9000.
Actually I want the metrics to be exposed on port 14040, and the path they are served from should also be configurable to my needs. I'm asking because this is how my company currently works.
The engine and model/predictor pods are created by the cluster-manager when you deploy a model. The Seldon cluster-manager watches for SeldonDeployments and creates pods to implement the requested graph, so it's the cluster-manager that decides which ports will be used. I've started a branch to make these configurable. All of the metrics come from the engine container, so the idea is to enable you to set the engine REST port to 14040. Next I'll look into making the metrics path configurable as well.
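As an illustration of the variable mentioned above (a sketch, not actual cluster-manager output; the container name and port value here are assumptions), the injected setting would look something like this inside the engine container's pod spec:

```yaml
# Hypothetical pod-spec fragment. The cluster-manager injects an
# environment variable like this into the engine container; making
# its value configurable would let the engine's REST (and metrics)
# port be 14040 instead of the default 8000.
containers:
  - name: engine            # assumed container name
    env:
      - name: ENGINE_SERVER_PORT
        value: "14040"
    ports:
      - containerPort: 14040
```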
First of all, thanks.
The metrics fall under the Spring Boot actuator, so I think we could add an option to move the actuator to a different port.
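For reference, Spring Boot itself supports moving the actuator (and hence its metrics endpoints) to a dedicated port via a standard property; which form applies depends on the Spring Boot version the engine uses, which the thread doesn't state, so both are shown here:

```yaml
# application.yml fragment, Spring Boot 1.x:
# serve all actuator endpoints on a separate port
management:
  port: 14040

# Spring Boot 2.x would instead use:
# management:
#   server:
#     port: 14040
```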
@OrBarda I've now submitted a pull request, but it assumes that the metrics will be on the same port as the REST port. I'm considering whether we should review that first and try to get it merged so that you have something you can use. If we do that, then we could next look separately at moving the metrics to a different port under a new issue. Alternatively, we can try to incorporate the change to split the metrics port from the REST port - that just might mean it takes a bit longer to get something merged. Let us know if you have a preference.
Raised #487 for having Prometheus metrics on a separate, dedicated port; that change will be approached separately. So the current issue is now just for making the REST port and Prometheus path configurable.
My company has a convention for how we collect metrics, which means I need to change both the port the metrics are exposed on and the path they are served from. Currently they are exposed via the pod on 8000/prometheus, and I would like both to be configurable.
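If the port and path did become configurable as requested, one common way to point Prometheus at the new location is the annotation convention many Kubernetes scrape configs honour. The values below are hypothetical, matching the 14040 convention described in this thread, and only take effect if the Prometheus scrape config actually reads the prometheus.io/* annotations:

```yaml
# Hypothetical pod annotations for an annotation-driven scrape config.
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "14040"    # instead of the default 8000
    prometheus.io/path: "/metrics" # instead of /prometheus
```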