
creating service monitor can only scrape service in monitoring namespace? #2557

Closed
sloppycoder opened this issue Apr 13, 2019 · 25 comments

@sloppycoder

I created a test project on GitHub here.

I created the deployment and service in the monitoring namespace, then created the ServiceMonitor as follows:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: profile-svc
  labels:
    app: profile-svc
    k8s-app: profile-svc
spec:
  selector:
    matchLabels:
      app: profile-svc
  endpoints:
    - port: web
      scheme: http
      path: '/actuator/prometheus'
      interval: 15s
      honorLabels: true

The Operator picked up the ServiceMonitor, and I saw the following configuration in the Prometheus dashboard:


- job_name: monitoring/profile-svc/0
  honor_labels: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /actuator/prometheus
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: endpoints
    namespaces:
      names:
      - monitoring
  relabel_configs:
    - blah
    - blah

Everything seems fine: Prometheus scrapes my metrics endpoints, and I can see the data in the dashboard. So it appears that my deployment and service configuration are wired up correctly.

However, if I create the deployment and service in my own project namespace and then create the same ServiceMonitor, my pod does not receive any scraping requests. I checked the configuration; it's the same as before:


- job_name: monitoring/profile-svc/0
  honor_labels: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /actuator/prometheus
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: endpoints
    namespaces:
      names:
      - monitoring  <- is this correct? shouldn't this be my project namespace?
  relabel_configs:
    - blah
    - blah
    

Is the above configuration correct? Is this a bug in the Operator?

Also, I found that if I create the service monitor in my own project namespace, Operator does not detect it and no changes are made to Prometheus configuration. Is this the intended behavior?

@sloppycoder
Author

I found some references to this in #1921, but I'm still not sure what I should do...

@brancz
Contributor

brancz commented Apr 15, 2019

The ServiceMonitor object should be created in the same namespace as the application lives in. You need to make sure that the serviceMonitorNamespaceSelector selects that namespace and that the Prometheus server has the appropriate permissions to access Service/Endpoints/Pod objects in that namespace.

I saw you used the kube-prometheus stack, which we moved to its own repository just a few minutes ago: coreos/kube-prometheus. It also has instructions on how to use jsonnet to add extra namespaces to watch; the RBAC permissions then get generated automatically.
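
For reference, a minimal sketch of the Prometheus object fields mentioned above (field names are from the monitoring.coreos.com/v1 API; resource names and labels are illustrative, not taken from this thread):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  serviceAccountName: prometheus-k8s
  # Select ServiceMonitors across namespaces; an empty
  # selector ({}) matches every namespace.
  serviceMonitorNamespaceSelector: {}
  # Which ServiceMonitor objects to pick up in those namespaces.
  serviceMonitorSelector:
    matchLabels:
      app: profile-svc
```

Note that both selectors must match, and the Prometheus ServiceAccount still needs RBAC permissions in each selected namespace.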

@aslimacc

@brancz Thank you for the response.

For me, Prometheus cannot detect ServiceMonitors in other namespaces when the ServiceMonitor object is created in the same namespace the application lives in.

When I add the same ServiceMonitor object under additionalServiceMonitors, it works.

What is wrong?

@brancz
Contributor

brancz commented Apr 23, 2019

@aslimacc sorry, we don't maintain or use the Helm charts, so we can't help you with that.

@rafaeltuelho

@brancz, I'm also trying to make Prometheus scrape additional namespaces. Following the README instructions, I added additional namespaces to my jsonnet config, like this:

local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
  _config+:: {
    namespace: 'monitoring',

    prometheus+:: {
      namespaces+: ['my-namespace', 'my-second-namespace'],
    },
  },
};

But the generated manifests only create additional Roles and RoleBindings. Should it generate ServiceMonitor objects for each additional namespace? If not, should I explicitly configure my jsonnet to generate a ServiceMonitor? How can I do this? I could not find any example...

@brancz
Contributor

brancz commented Jun 5, 2019

Typically we let the owners of the namespace decide how they provision the ServiceMonitor within that namespace. If you want to, though, you can add another one, for example with:

... + {
  prometheus+:: {
    serviceMonitorKubeScheduler:
      {
        apiVersion: 'monitoring.coreos.com/v1',
        kind: 'ServiceMonitor',
        metadata: {
          name: 'my-servicemonitor',
          namespace: 'my-namespace',
        },
        spec: {
          jobLabel: 'app',
          endpoints: [
            {
              port: 'http-metrics',
            },
          ],
          selector: {
            matchLabels: {
              'app': 'myapp',
            },
          },
        },
      },
  },
};

Would you like to add a PR to add this to the docs of kube-prometheus? 🙂

@rafaeltuelho

Awesome!
I tested here and it worked like a charm!

Sure, I'll do a PR for the README.

rafaeltuelho added a commit to rafaeltuelho/kube-prometheus that referenced this issue Jun 5, 2019
In the **Adding additional namespaces to monitor** section I appended a note showing the need for ServiceMonitor when adding additional namespaces... 

see: prometheus-operator/prometheus-operator#2557 (comment)
rafaeltuelho added a commit to rafaeltuelho/kube-prometheus that referenced this issue Jul 31, 2019
In the **Adding additional namespaces to monitor** section I appended a note showing the need for ServiceMonitor when adding additional namespaces... 

see: prometheus-operator/prometheus-operator#2557 (comment)
@stale

stale bot commented Aug 14, 2019

This issue has been automatically marked as stale because it has not had any activity in last 60d. Thank you for your contributions.

@stale stale bot added the stale label Aug 14, 2019
lilic pushed a commit to lilic/kube-prometheus that referenced this issue Aug 16, 2019
In the **Adding additional namespaces to monitor** section I appended a note showing the need for ServiceMonitor when adding additional namespaces... 

see: prometheus-operator/prometheus-operator#2557 (comment)
lilic pushed a commit to lilic/kube-prometheus that referenced this issue Aug 16, 2019
In the **Adding additional namespaces to monitor** section I appended a note showing the need for ServiceMonitor when adding additional namespaces... 

see: prometheus-operator/prometheus-operator#2557 (comment)
@stale stale bot closed this as completed Aug 21, 2019
@paulfantom paulfantom reopened this Aug 21, 2019
@stale stale bot removed the stale label Aug 21, 2019
@vishalsparmar

vishalsparmar commented Sep 10, 2019

> Awesome!
> I tested here and it worked like a charm!
>
> Sure, I'll do a PR for the README.

Can you confirm where you make the change to add a new namespace in the ServiceMonitor YAML? I am trying to get Prometheus in the default namespace to find service pods in another namespace. I can see my ServiceMonitor in Prometheus, but it still cannot discover the service under that namespace: dev-hih-01/pol-hih-dev-bal-service/0 (0/0 up)

@vishalsparmar

> The ServiceMonitor object should be created in the same namespace as the application lives in. You need to make sure that the serviceMonitorNamespaceSelector selects that namespace and that the Prometheus server has the appropriate permissions to access Service/Endpoints/Pod objects in that namespace.
>
> I saw you used the kube-prometheus stack, which we just a few minutes ago moved to its own repository: coreos/kube-prometheus. It also has instructions on how to use jsonnet to add extra namespaces to watch, then the RBAC permissions get generated automatically.

Hi @brancz, can you please guide me on what is needed for Prometheus in the default namespace to discover a pod's service in another namespace? I can see my ServiceMonitor with the namespace, but it still cannot find any service pods; it only discovers labels in the other namespace.

@brancz
Contributor

brancz commented Sep 10, 2019

You should be able to use the serviceMonitorNamespaceSelector for that. The empty selector ({}) selects every namespace by default. This is what kube-prometheus does. What you need to do additionally is set up the RBAC roles so Prometheus has permissions to discover targets in that namespace.
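
A minimal sketch of such a Role in the target namespace (the namespace and resource names here are illustrative; kube-prometheus generates equivalent Roles automatically):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s
  namespace: my-namespace   # the namespace to be scraped (illustrative)
rules:
  # Target discovery needs read access to these objects.
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
```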

@vishalsparmar

> You should be able to use the serviceMonitorNamespaceSelector for that. The empty selector ({}) selects every namespace by default. This is what kube-prometheus does. What you need to do additionally is set up the RBAC roles so Prometheus has permissions to discover targets in that namespace.

Thanks, I have serviceMonitorNamespaceSelector: {} in my Prometheus object. On the RBAC roles, do I need to define a ServiceAccount, ClusterRole, and ClusterRoleBinding? And how do I tell Prometheus to use this new role for the ServiceMonitor?

@brancz
Contributor

brancz commented Sep 10, 2019

You need to bind it against the ServiceAccount used by the Prometheus server.
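
Concretely, a sketch of such a binding, assuming the Prometheus server runs as the prometheus-k8s ServiceAccount in the monitoring namespace and a matching Role exists in the target namespace (all names illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: my-namespace   # the namespace to be scraped (illustrative)
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s      # the Role granting get/list/watch on services, endpoints, pods
subjects:
  # Bind to the ServiceAccount the Prometheus server actually runs as.
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: monitoring
```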

@vishalsparmar

vishalsparmar commented Sep 10, 2019 via email

Thanks, this now works.

@brancz, I am trying to monitor our external sites with the blackbox exporter. I have added a ServiceMonitor for it, but Prometheus is scraping the blackbox exporter service pods only, and not the real targets. Am I missing anything? Prometheus shows blackbox as UP, though.

@brancz
Contributor

brancz commented Sep 19, 2019

Blackbox showing up is correct: the blackbox exporter is up and reporting blackbox metrics about what you're probing.
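
For the probed targets themselves to appear, the ServiceMonitor endpoint typically passes the target to the exporter's /probe path. A sketch of the endpoints section (port name, module, and target URL are illustrative; params and relabelings are fields of the monitoring.coreos.com/v1 Endpoint spec):

```yaml
endpoints:
  - port: http
    path: /probe
    interval: 30s
    params:
      module: [http_2xx]
      target: ['https://example.com']
    relabelings:
      # Surface the probed URL as the instance label instead of
      # the exporter pod's address.
      - sourceLabels: [__param_target]
        targetLabel: instance
```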

@stale

stale bot commented Nov 18, 2019

This issue has been automatically marked as stale because it has not had any activity in last 60d. Thank you for your contributions.

@stale stale bot added the stale label Nov 18, 2019
@aceeric

aceeric commented Dec 4, 2019

Regarding this guidance:

> The ServiceMonitor object should be created in the same namespace as the application lives in

I'm confused by something: If the ServiceMonitor CR has to be created in the namespace of the service that is being monitored, why does the CR have a namespaceSelector? If I am understanding this correctly, if I have Strimzi - for example - running in three QA namespaces and I want them all monitored, I would create a ServiceMonitor in all three namespaces and designate each namespaceSelector to match the namespace. In light of this, how does the namespaceSelector support cross-namespace monitoring?

@stale stale bot removed the stale label Dec 4, 2019
@brancz
Contributor

brancz commented Dec 10, 2019

The namespaceSelector in the ServiceMonitor is an artifact of when ServiceMonitors had to be created in the same namespace as the Prometheus object. I recommend not using it, as in newer versions of the API we would probably not include it anymore.

@afirth

afirth commented Jan 2, 2020

> The namespaceSelector in the ServiceMonitor is an artifact of when ServiceMonitors had to be created in the same namespace as the Prometheus object. I recommend not using it, as in newer versions of the API we would probably not include it anymore.

Firstly, thanks for all the hard work putting this together, @brancz. Just wanted to say that the namespaceSelector is very useful in cases where we create many similar services across multiple namespaces (e.g. each customer gets its own namespace). This lets us deploy just one ServiceMonitor instead of hundreds, unless I'm misunderstanding here.

@metost

metost commented Feb 29, 2020

> The namespaceSelector in the ServiceMonitor is an artifact of when ServiceMonitors had to be created in the same namespace as the Prometheus object. I recommend not using it, as in newer versions of the API we would probably not include it anymore.
>
> Firstly, thanks for all the hard work putting this together, @brancz. Just wanted to say that the namespaceSelector is very useful in cases where we create many similar services across multiple namespaces (e.g. each customer gets its own namespace). This lets us deploy just one ServiceMonitor instead of hundreds, unless I'm misunderstanding here.

Yes. If you define your ServiceMonitor with a namespaceSelector like this, it will look in all namespaces:

  namespaceSelector:
    any: true

Or you can define a list like this:

  namespaceSelector:
    matchNames:
      - ns-name-1
      - ns-name-2
      - ns-name-3
      - ns-name-4

@stale

stale bot commented Apr 29, 2020

This issue has been automatically marked as stale because it has not had any activity in last 60d. Thank you for your contributions.

@stale stale bot added the stale label Apr 29, 2020
@tirelibirefe

tirelibirefe commented Jun 26, 2020

@afirth @aceeric
Please look at the problem here:
#3297

I have the same/similar problem.

I have Prometheus and the prometheus-operator in namespace=monitoring and another Prometheus instance in namespace=kafka. I used the manifests provided by Strimzi. My Kafka is installed by Strimzi, and I would like to monitor strimzi-kafka resources in namespace=kafka from namespace=monitoring.

...but Prometheus in namespace=monitoring cannot scrape the ServiceMonitor in namespace=kafka.

I deleted the ServiceMonitor in namespace=kafka and installed it in namespace=monitoring; nothing changed.

I deleted the namespaceSelector; nothing changed.

I set the namespaceSelector as follows...

  namespaceSelector:
    matchNames:
      - *

...nothing changed.

The prometheus-operator logs don't say anything.

All I see are the same screens (screenshots omitted).

Were you able to monitor Strimzi Kafka resources from outside of namespace=kafka? Could you please advise how I can accomplish this task?

Thanks & Regards

@stale stale bot removed the stale label Jun 26, 2020
@afirth

afirth commented Jun 26, 2020

The original question was answered by @brancz some time ago (along with lots of other related ones, thanks!), and this could probably be closed.

@brancz brancz closed this as completed Jun 26, 2020

10 participants