Metrics Endpoint does not work with Istio Sidecar #2720
Comments
So the problem is that we're requiring the scrape path to be `/prometheus`. We normally suggest disabling Istio sidecars by using the `sidecar.istio.io/inject: "false"` annotation.
Or do you want to be able to use the sidecar? Is it only the sidecar that fails to scrape? Does the scraping work for the main model?
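For reference, a minimal sketch of the injection-disabling suggestion above. The deployment name, image, and layout are illustrative only; the annotation itself is the standard Istio one, set on the pod template (where exactly it goes for a SeldonDeployment depends on how your pods are templated):

```yaml
# Illustrative only: skip Istio sidecar injection for these pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-model            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-model
  template:
    metadata:
      labels:
        app: example-model
      annotations:
        sidecar.istio.io/inject: "false"   # tells Istio not to inject the sidecar
    spec:
      containers:
        - name: model
          image: seldonio/mock_classifier:1.5.0   # hypothetical image/tag
```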
Yes, I hope to use the sidecar for traffic management and other tasks.
I can't guarantee it, but when I ran the example provided by Seldon, I found no other problems.
So we need to update the sidecar to allow the correct Prometheus scrape path? Is it an http/https issue?
I think it is an issue of the missing `prometheus.io/port` annotation. Maybe we just need to set that annotation.
@domgoer and @cliveseldon - I can confirm that @domgoer's suggestion is correct. Previously, we had the following error:

2020-12-07T17:28:21.670730Z error failed scraping application metrics: error scraping http://localhost:80/prometheus: Get "http://localhost:80/prometheus": dial tcp 127.0.0.1:80: connect: connection refused

Now we no longer see this error at all. Istio automatically merges all metrics on its own, so this resolves the issue; i.e., you only need to uncomment the "prometheus.io/port" line, and it's solved.
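For reference, the Prometheus scrape annotations being discussed usually take this shape on the pod. This is a sketch only: the path matches the `/prometheus` scrape path from this issue, while the port value is an assumption you should replace with the actual metrics port of your executor:

```yaml
# Illustrative pod-metadata fragment with the standard Prometheus scrape annotations.
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/prometheus"   # Seldon's scrape path discussed in this issue
    prometheus.io/port: "8000"          # assumption - use your executor's metrics port
```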
Closing. Please reopen if still an issue.
It looks like this issue is not solved yet. For anyone else struggling with it, setting the two following envs on the operator's pod helps:
The first causes the executor to listen on the port the sidecar expects. This doesn't solve the issue completely - as the annotation would - but at least it allows the metrics to be scraped without recompilation. If you're using the Helm chart, these are the overrides you need to set:
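The exact env names and Helm overrides from this comment were not preserved in this copy of the thread. Purely as a hypothetical sketch of the mechanism (the variable and value names below are placeholders, not real Seldon settings), setting env vars on the operator and overriding chart values generally looks like:

```bash
# Placeholders only - substitute the real executor-related env names and values.
kubectl set env -n seldon-system deployment/seldon-controller-manager \
  SOME_EXECUTOR_PORT_ENV=8000

# Or via Helm (value name is again a placeholder):
helm upgrade seldon-core seldon-core-operator \
  --namespace seldon-system \
  --reuse-values \
  --set someExecutorPortValue=8000
```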
Thanks mate.
Didn't work.
@Shahard2 I don't know for sure, but I think
The issue looks like it is not yet solved. @Shahard2's solution worked, although it implies a few issues with Istio's overall health, in my case at least. What it really solved was adding the annotation to my SeldonDeployment.
This doesn't affect the overall operator cooperation with Istio. I work with seldon-core:1.13.1, so I believe this solution would be the recommended one. @cliveseldon, might re-opening the issue be beneficial in your opinion?
Another alternative is to not use metrics merging - this allows the use of Istio sidecars, but also allows all metrics to be exposed as if the sidecar weren't there:
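The specific configuration from this comment was not preserved in this copy of the thread. As a sketch, Istio's per-pod annotation for turning metrics merging off looks like the following (it can also be disabled mesh-wide; the assumption is an Istio version that supports this annotation):

```yaml
# Sketch: disable Istio's Prometheus metrics merging for this pod, so each
# container's metrics endpoint can be scraped directly instead of via the sidecar.
metadata:
  annotations:
    prometheus.istio.io/merge-metrics: "false"
```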
The main issue is that Istio's metrics merging assumes a single application endpoint is exposed for metrics, because that's what the Prometheus annotations approach expects. For pods with multiple containers producing metrics (e.g. a Seldon executor and a Seldon microservice or MLServer), this assumption breaks.
Describe the bug
As described in the doc, Seldon adds Prometheus-related annotations to the pod. But the Istio sidecar will overwrite the annotations related to Prometheus, and the metrics for the business service will be collected internally by the sidecar. Because of the lack of `prometheus.io/port`, the sidecar defaults to `http://127.0.0.1:80/prometheus` to collect metrics.
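To illustrate the overwrite (the values reflect Istio's documented metrics-merging behaviour, but exact ports can vary by Istio version, so treat this as a sketch):

```yaml
# Before injection (simplified), Seldon's annotations point at the application:
#   prometheus.io/scrape: "true"
#   prometheus.io/path:   "/prometheus"
#   prometheus.io/port:   "<executor metrics port>"
#
# After injection with metrics merging, Istio rewrites them to its agent (sketch):
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/stats/prometheus"   # istio-agent's merged metrics endpoint
    prometheus.io/port: "15020"               # istio-agent's merge port
```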
Environment
kubectl version
kubectl get --namespace seldon-system deploy seldon-controller-manager -o yaml | grep seldonio
Alternatively run:
echo "#### Kubernetes version:\n $(kubectl version) \n\n#### Seldon Images:\n$(kubectl get --namespace seldon-system deploy seldon-controller-manager -o yaml | grep seldonio)"
Sidecar Log