podMonitorSelector alone does not work #3164
Comments
This is unfortunately due to some historic legacy. I personally think we should actually remove this, since we have the `additionalScrapeConfigs` feature. (This used to be intentional, to allow the user to manage the Prometheus config themselves if service monitor selectors were unspecified; the `additionalScrapeConfigs` feature turned out to be the better choice, though.) If you want to do a PR to fix this, I'd be happy to move forward with it.
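For context, `additionalScrapeConfigs` points the operator at a key in a Secret that holds raw Prometheus scrape configuration. A minimal sketch of how it is wired up (the Secret name and key below are illustrative, not from this thread):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  # References a key in a Secret containing raw Prometheus scrape_configs.
  # The Secret name and key are hypothetical examples.
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml
```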
Hmm, that sounds like it would be a breaking change for someone. I think I can see how to extend the existing logic to be "if service and pod monitors were unspecified". What do you think about doing that first, and then tackling the legacy cleanup separately?
That sounds good to me! Maybe we can specifically call out in the field descriptions (and changelog) that this behavior will be dropped in the future, and then actually do so in a couple of releases?
In an environment that only uses pod monitors, it is natural to only specify a matcher for pod monitors. Support this by treating them just like service monitors are treated. This preserves the special rule that "no matchers" means "do not manage the configuration at all" – this can be removed at a later stage. Fixes prometheus-operator#3164. Signed-off-by: Matthias Rampke <mr@soundcloud.com>
In an environment that only uses pod monitors, it is natural to only specify a matcher for pod monitors. Support this by treating them just like service monitors are treated. The condition in `sync` was getting hairy. Move it down into `createOrUpdateConfigurationSecret`, so that we can keep the happy-path left-aligned overall. This preserves the special rule that "no matchers" means "do not manage the configuration at all" – this can be removed at a later stage. Fixes prometheus-operator#3164. Signed-off-by: Matthias Rampke <mr@soundcloud.com>
In an environment that only uses pod monitors, it is natural to only specify a matcher for pod monitors. Support this by treating them just like service monitors are treated. The condition in `sync` was getting hairy. Move it down into `createOrUpdateConfigurationSecret`, so that we can keep the happy-path left-aligned overall. This preserves the special rule that "no matchers" means "do not manage the configuration at all" – this can be removed at a later stage. In the meantime, improve the messaging around this in logs and documentation. Fixes prometheus-operator#3164. Signed-off-by: Matthias Rampke <mr@soundcloud.com>
In an environment that only uses pod monitors, it is natural to only specify a matcher for pod monitors. Support this by treating them just like service monitors are treated. The condition in `sync` was getting hairy. Move it down into `createOrUpdateConfigurationSecret`, so that we can keep the happy-path left-aligned overall. This preserves the special rule that "no matchers" means "do not manage the configuration at all" – this can be removed at a later stage. In the meantime, improve the messaging around this in logs and documentation, including the deprecation check. Fixes prometheus-operator#3164. Signed-off-by: Matthias Rampke <mr@soundcloud.com>
What happened?
When using a `Prometheus` that only has a `podMonitorSelector: {}` but not a `serviceMonitorSelector: {}`, the pod monitor is never synced.

When starting this way, no targets appear in Prometheus. Since these are the only ones in the namespace, the `secret/prometheus-prometheus` entry `prometheus.yaml.gz` remains an empty string.

Adding a dummy `serviceMonitorSelector: {}` fixes this, even though there are no service monitors anywhere in the cluster.

Conversely, when updating the `Prometheus` to remove the `serviceMonitorSelector`, updates to the `PodMonitor` are no longer reflected in Prometheus.
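A minimal sketch of the two `Prometheus` spec variants being compared here (name and namespace are taken from the reproduction steps below; everything else is illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: rocket
spec:
  # With only this selector set, no configuration is generated at all.
  podMonitorSelector: {}
  # Workaround: uncommenting this empty selector makes the operator
  # generate configuration, even with no ServiceMonitor in the cluster.
  # serviceMonitorSelector: {}
```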
Did you expect to see something different?

It should be possible to use only `PodMonitor`.

How to reproduce it (as minimally and precisely as possible):
Apply the following manifests (or a variation on them) plus a deployment/pods with matching labels.
rocket-podmonitor.yaml.gz
rocket-prometheus.yaml.gz
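The attached manifests are not inlined above; a hypothetical reconstruction of the `PodMonitor` side, assuming pods labelled `app: rocket` that expose a named `metrics` port:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: rocket
  namespace: rocket
spec:
  selector:
    matchLabels:
      app: rocket        # assumed pod label; must match the deployment's pods
  podMetricsEndpoints:
  - port: metrics        # assumed container port name on those pods
```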
Observe that the output of `kubectl get -n rocket secret/prometheus-prometheus -o yaml` shows an empty configuration entry and a timestamp from before submitting the `PodMonitor`:

secret.yaml
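In this state the generated secret looks roughly like this (a sketch consistent with the description above, trimmed to the relevant fields):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: prometheus-prometheus
  namespace: rocket
data:
  # The configuration entry stays an empty string, so Prometheus
  # receives no scrape configuration and shows no targets.
  prometheus.yaml.gz: ""
```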
Delete the `PodMonitor`. Update the `Prometheus` object to add `serviceMonitorSelector: {}`. Add the `PodMonitor` again. Observe that the secret now contains information, and the debug log output is much more comprehensive. Now the secret changes and targets appear.
To test the other way around, modify the `Prometheus` to remove the `serviceMonitorSelector`. Delete the `PodMonitor`. Observe that the secret does not change, and the targets are still being targeted.

Environment
Prometheus Operator version:
v0.38.1
Kubernetes version information:
podmonitor sync log without servicemonitorselector
podmonitor sync log with servicemonitorselector
Anything else we need to know?:
Adding a service monitor selector is a viable workaround for me.