
prometheus configuration for kubernetes from example folder doesn't work with 1.3 #2147

Closed
vorozhko opened this Issue Nov 2, 2016 · 18 comments

vorozhko commented Nov 2, 2016

What did you do?
Upgraded prometheus to 1.3 in our GKE cluster.

What did you expect to see?
Working prometheus.

What did you see instead? Under which circumstances?
First I have seen:
time="2016-11-02T13:55:23Z" level=error msg="Error loading config: couldn't load configuration (-config.file=/etc/prometheus/prometheus.yml): Unknown Kubernetes SD role "apiserver"" source="main.go:149"

After I deleted the whole job-name with apiserver role I got following:
time="2016-11-02T14:24:20Z" level=error msg="Error loading config: couldn't load configuration (-config.file=/etc/prometheus/prometheus.yml): unknown fields in kubernetes_sd_config: in_cluster, api_servers" source="main.go:149"
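For context, Prometheus 1.3 reworked the Kubernetes service discovery configuration, which is what both errors above point to. A minimal before/after sketch of the `kubernetes_sd_configs` block, based on this thread; see the official example file for the full config:

```yaml
# Pre-1.3 syntax, rejected by 1.3 with "unknown fields in
# kubernetes_sd_config: in_cluster, api_servers":
#
# kubernetes_sd_configs:
# - api_servers:
#   - 'https://kubernetes.default.svc'
#   in_cluster: true
#   role: node

# 1.3+ syntax: when running inside the cluster, the API server
# address and service-account credentials are auto-detected, so
# only the role needs to be specified:
kubernetes_sd_configs:
- role: node
```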

Environment
Prometheus container v1.3.0 in GKE cluster.

  • System information:

  • Prometheus version:

    1.3

  • Prometheus configuration file:

scrape_configs:
    - job_name: 'kubernetes-nodes'

      # Default to scraping over https. If required, just disable this or change to
      # `http`.
      scheme: https

      # This TLS & bearer token file config is used to connect to the actual scrape
      # endpoints for cluster components. This is separate to discovery auth
      # configuration (`in_cluster` below) because discovery & scraping are two
      # separate concerns in Prometheus.
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        # If your node certificates are self-signed or use a different CA to the
        # master CA, then disable certificate verification below. Note that
        # certificate verification is an integral part of a secure infrastructure
        # so this should only be disabled in a controlled environment. You can
        # disable certificate verification by uncommenting the line below.
        #
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

      kubernetes_sd_configs:
      - api_servers:
        - 'https://kubernetes.default.svc'
        in_cluster: true
        role: node

      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)

    # Scrape config for service endpoints.
    #
    # The relabeling allows the actual service scrape endpoint to be configured
    # via the following annotations:
    #
    # * `prometheus.io/scrape`: Only scrape services that have a value of `true`
    # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
    # to set this to `https` & most likely set the `tls_config` of the scrape config.
    # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
    # * `prometheus.io/port`: If the metrics are exposed on a different port to the
    # service then set this appropriately.
    - job_name: 'kubernetes-service-endpoints'

      kubernetes_sd_configs:
      - api_servers:
        - 'https://kubernetes.default.svc'
        in_cluster: true
        role: endpoint

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: (.+)(?::\d+);(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_service_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    # Example scrape config for probing services via the Blackbox Exporter.
    #
    # The relabeling allows the actual service scrape endpoint to be configured
    # via the following annotations:
    #
    # * `prometheus.io/probe`: Only probe services that have a value of `true`
    - job_name: 'kubernetes-services'

      metrics_path: /probe
      params:
        module: [http_2xx]

      kubernetes_sd_configs:
      - api_servers:
        - 'https://kubernetes.default.svc'
        in_cluster: true
        role: service

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_service_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name

    # Example scrape config for pods
    #
    # The relabeling allows the actual pod scrape endpoint to be configured via the
    # following annotations:
    #
    # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
    # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
    # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`.
    - job_name: 'kubernetes-pods'

      kubernetes_sd_configs:
      - api_servers:
        - 'https://kubernetes.default.svc'
        in_cluster: true
        role: pod

      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: (.+):(?:\d+);(\d+)
        replacement: ${1}:${2}
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_pod_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
  • Logs:
time="2016-11-02T14:24:20Z" level=info msg="Starting prometheus (version=1.3.0, branch=master, revision=18254a172b1e981ed593442b2259bd63617d6aca)" source="main.go:75"
time="2016-11-02T14:24:20Z" level=info msg="Build context (go=go1.7.3, user=root@d363f050a0e0, date=20161101-17:06:27)" source="main.go:76"
time="2016-11-02T14:24:20Z" level=info msg="Loading configuration file /etc/prometheus/prometheus.yml" source="main.go:247"
time="2016-11-02T14:24:20Z" level=error msg="Error loading config: couldn't load configuration (-config.file=/etc/prometheus/prometheus.yml): unknown fields in kubernetes_sd_config: in_cluster, api_servers" source="main.go:149"
brancz commented Nov 2, 2016

@vorozhko thanks for reporting, we have already noticed this and will shortly adapt the config. Sorry for the inconvenience.

jimmidyson commented Nov 2, 2016

@brancz I've got a config ready to submit unless you're doing it?

brancz commented Nov 2, 2016

@jimmidyson go for it, I might have some discussion points, but I'm not done.

widgetpl commented Nov 3, 2016

The config from the PR doesn't work for me:

time="2016-11-03T09:35:22Z" level=error msg="Error loading config: couldn't load configuration (-config.file=/etc/prometheus/prometheus.yml): yaml: unmarshal errors:\n line 21: cannot unmarshal !!map into []*config.KubernetesSDConfig\n line 29: cannot unmarshal !!map into []*config.KubernetesSDConfig\n line 68: cannot unmarshal !!map into []*config.KubernetesSDConfig\n line 90: cannot unmarshal !!map into []*config.KubernetesSDConfig" source="main.go:149"

jimmidyson commented Nov 3, 2016

Hmm, those lines don't seem to relate to anything changed in the PR's updated config (https://github.com/jimmidyson/prometheus/blob/da23543f29e51701b4e8ec0ffc1912e3a530c5d1/documentation/examples/prometheus-kubernetes.yml). I wonder whether the line numbers in the parser output are accurate?

widgetpl commented Nov 3, 2016

I have removed the comments from the config, so it looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  namespace: infra
  labels:
    app: prometheus
    component: core
data:
  prometheus.yml: |
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https

    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
        role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)


    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
        role: endpoint

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: (.+)(?::\d+);(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_service_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    # Example scrape config for probing services via the Blackbox Exporter.
    #
    # The relabeling allows the actual service scrape endpoint to be configured
    # via the following annotations:
    #
    # * `prometheus.io/probe`: Only probe services that have a value of `true`
    - job_name: 'kubernetes-services'
      metrics_path: /probe
      params:
        module: [http_2xx]
      kubernetes_sd_configs:
        role: service

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_service_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name


    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
        role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: (.+):(?:\d+);(\d+)
        replacement: ${1}:${2}
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_pod_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

Sorry for that. So the offending lines are:

role: node
role: endpoint
role: service
role: pod

jimmidyson commented Nov 3, 2016

No worries! Can you paste it into a gist to more easily view line numbers?

jimmidyson commented Nov 3, 2016

Looks like you've dropped the important `-` before `role` on each line the parser bailed on. I've checked the file in the PR and it is correct, so this must have happened while you were removing the comment lines.
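In YAML terms, `kubernetes_sd_configs` expects a list, and the leading `-` is what makes each entry a list item; without it the value parses as a map, which matches the `cannot unmarshal !!map into []*config.KubernetesSDConfig` errors above. A sketch of the difference:

```yaml
# Broken: without the dash, the value of kubernetes_sd_configs
# is a map, which the parser cannot unmarshal into a list:
kubernetes_sd_configs:
  role: node

# Correct: the dash makes each config a list entry:
kubernetes_sd_configs:
- role: node
```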

widgetpl commented Nov 3, 2016

Yes, you are right; it probably happened while I was removing the preceding apiservers configuration.
Right now I get:

time="2016-11-03T10:08:43Z" level=error msg="Error loading config: couldn't load configuration (-config.file=/etc/prometheus/prometheus.yml): Unknown Kubernetes SD role \"endpoint\"" source="main.go:149"

I have updated the Gist.

OK, I see now that it should be `- role: endpoints`.

brancz commented Nov 3, 2016

To follow the naming of the Kubernetes API objects, the role is now called `endpoints` rather than `endpoint`. :)
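For reference, a sketch of the role values accepted in 1.3, which mirror the Kubernetes API object names (as I read the 1.3 docs; check the configuration documentation for the authoritative list):

```yaml
kubernetes_sd_configs:
- role: endpoints   # plural, matching the Endpoints API object
# Other roles available in 1.3: node, service, pod
```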

jimmidyson commented Nov 3, 2016

Yeah, `endpoints` is correct in the PR, by the way.

widgetpl commented Nov 3, 2016

Looks like it is working now. Thanks for the help, guys.

fabxc closed this in #2148 Nov 3, 2016

eformat added a commit to eformat/prometheus-ose that referenced this issue Nov 10, 2016

noelo added a commit to noelo/prometheus-ose that referenced this issue Nov 11, 2016

loguido commented Nov 17, 2016

My Prometheus server is outside the Kubernetes cluster. With version 1.3 I can no longer use the `api_servers` option, and I don't understand how to connect to the API server now.

jimmidyson commented Nov 17, 2016

@loguido See the example config at https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L8. Unfortunately the example config wasn't updated when the SD options changed, but the config in master works fine with 1.3+.

loguido commented Nov 17, 2016

Thank you @jimmidyson, but as far as I can see, with this config I cannot contact any Kubernetes API server because my Prometheus is not inside the cluster.

fabxc commented Nov 18, 2016

Please check the configuration documentation:
https://prometheus.io/docs/operating/configuration/#<kubernetes_sd_config>
If the `api_server` field is left empty, we assume Prometheus is running inside the cluster and auto-detect the API server. If Prometheus is running outside of it, you can simply provide the address.
It should be the same configuration as before, except that the field is no longer plural and only accepts a single API server rather than a list.
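A minimal sketch of an out-of-cluster scrape config under these rules; the API server URL and credential paths below are placeholders, not values from this thread:

```yaml
scrape_configs:
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
  - role: node
    # Set explicitly because Prometheus runs outside the cluster;
    # leave empty for in-cluster auto-detection.
    api_server: 'https://my-apiserver.example.com:6443'
    tls_config:
      ca_file: /path/to/ca.crt
    bearer_token_file: /path/to/token
```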


lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 24, 2019
