Configure the way Prometheus exposed #484

Closed
OrBarda opened this issue Apr 2, 2019 · 9 comments · Fixed by #485

Comments

@OrBarda

OrBarda commented Apr 2, 2019

My company has a convention for how we collect metrics, which means I need to change both the port the metrics are exposed on and the path.
Right now they are exposed via the pod on 8000/prometheus, and I would like this to be configurable.

@ryandawsonuk
Contributor

To be clear, do you mean that you want the Prometheus pod itself to be exposed on a different port? Or do you mean that you want to change the port used by the containers for exposing the metrics endpoints that Prometheus scrapes?

@OrBarda
Author

OrBarda commented Apr 2, 2019

Changing the port used by the containers for exposing the metrics endpoints that Prometheus scrapes.
Also, I forgot to mention that I am running a Spring Boot app.

@ryandawsonuk
Contributor

OK, so you want to change the port on which the model exposes metrics. In many of our examples (e.g. the model with metrics example), when you deploy a model and run kubectl get service you see a service like:

NAME                                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
mymodel-mymodel                                             ClusterIP   10.106.142.242   <none>        8000/TCP,5001/TCP               106s

I'm wondering whether you'd be happy with an approach that changes the 8000 here to 9000? That would mean all REST requests to the model go to 9000 instead of 8000, not just the Prometheus scrapes. Or do you need to split it so that most REST requests still go to 8000 and only the Prometheus endpoint moves to 9000?

There is an ENGINE_SERVER_PORT environment variable injected into that pod which controls which port the seldon-container-engine sidecar container runs on. You can see this with kubectl describe pod mymodel-mymodel-b79af31-69ff7f777-8c7bt. I'm noting this in part because there's also an env var, PREDICTIVE_UNIT_SERVICE_PORT, which sets the other container to use 9000. These two containers have to use different ports since they share a port space. I think there are also other components which look for the seldon-container-engine on that port.
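
For illustration, the relevant bits of the pod spec look roughly like this (container names and values are placeholders based on the defaults above, not exact kubectl describe output):

containers:
- name: seldon-container-engine        # engine sidecar
  env:
  - name: ENGINE_SERVER_PORT           # port the engine listens on (8000 by default)
    value: "8000"
- name: mymodel                        # model container (name is a placeholder)
  env:
  - name: PREDICTIVE_UNIT_SERVICE_PORT # port the model container listens on
    value: "9000"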

I'm in the process of looking into the options here. For now I'm just trying to determine whether you mind which port is used for REST requests to the model, or whether your only port constraint is that there has to be Prometheus data available on 9000.

@OrBarda
Author

OrBarda commented Apr 2, 2019

So actually I want the metrics to be exposed on port 14040, and the path I get them from should also be configurable to my needs. I am asking because this is the way my company currently works.

@ryandawsonuk
Contributor

The engine and model/predictor pods are created by the cluster-manager when you deploy a model. The seldon cluster-manager watches for SeldonDeployments and creates pods to implement the requested graph. So it's the cluster-manager that decides which ports will be used. I've started a branch to make these configurable. All of the metrics come from the engine container so the idea is to enable you to set the engine REST port to 14040.
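
To make the goal concrete, here is a minimal SeldonDeployment sketch. The seldon.io/engine-rest-port annotation is purely hypothetical, just to show where a configurable engine port could be declared; the actual mechanism will depend on how the branch turns out:

apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: mymodel
spec:
  annotations:
    seldon.io/engine-rest-port: "14040"   # hypothetical annotation, for illustration only
  predictors:
  - name: mymodel
    graph:
      name: classifier
      type: MODEL
      endpoint:
        type: REST
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: mycompany/mymodel:0.1     # placeholder image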

Next I'll look into making the /prometheus path configurable and ensuring that Prometheus can still pull that data when the path is changed.
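
If the path does move off /prometheus, and assuming your Prometheus uses the common annotation-based kubernetes_sd relabelling (as in the upstream example configs), the pods would need annotations along these lines (the path and port values here are just examples):

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/custom-metrics"   # example custom path replacing /prometheus
    prometheus.io/port: "14040"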

@OrBarda
Author

OrBarda commented Apr 3, 2019

First of all, thanks.
Second, is it possible to query the metrics on a different port than the REST one?

@ryandawsonuk
Contributor

The metrics fall under the Spring Boot Actuator, so I think we could add an option to move the actuator to a different port with management.port in the properties file.
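
As a rough sketch of that option, in an application.yml it would be something like the following (management.port is the Spring Boot 1.x property; in Boot 2.x the equivalent is management.server.port):

management:
  port: 14040   # move the actuator (and its metrics endpoints) to a separate port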

@ryandawsonuk
Contributor

@OrBarda I've now submitted a pull request, but it assumes that the metrics will be on the same port as the REST port. I'm considering whether we should review that first and try to get it merged so that you have something you can use; if we do, we could then look separately at moving the metrics to a different port under a new issue. Alternatively, we can try to incorporate the change to split the metrics port from the REST port, which might just mean it takes a bit longer to get something merged. Let us know if you have a preference.

@ryandawsonuk
Contributor

ryandawsonuk commented Apr 4, 2019

Raised #487 for having Prometheus metrics on a separate, dedicated port. That change will be approached separately, so the current issue is now just for making the REST port and Prometheus path configurable.

agrski pushed a commit that referenced this issue Dec 2, 2022
* Handle errors from scheduler load and unload pipeline

* Handle scheduler errors and decide if retryable

* stability fixes for when scheduler restarted

* review fixes

* decrease non-retryable errors for controller grpc calls to scheduler