(Another) panic: runtime error: invalid memory address or nil pointer dereference #2339

Closed
ichekrygin opened this Issue Jan 12, 2017 · 7 comments

ichekrygin commented Jan 12, 2017

What did you do? - Run Prometheus

What did you expect to see? - Prometheus to keep running

What did you see instead? Under which circumstances? - Prometheus crashes with:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x563266]

Environment
Prometheus running inside a Kubernetes cluster hosted on AWS

  • System information:
/prometheus # uname -srm
Linux 4.7.3-coreos-r2 x86_64
  • Prometheus version:
/prometheus # prometheus -version
prometheus, version 1.4.1 (branch: master, revision: 2a89e8733f240d3cd57a6520b52c36ac4744ce12)
  build user:       root@e685d23d8809
  build date:       20161128-09:59:22
  go version:       go1.7.3
  • Alertmanager version: N/A

  • Prometheus configuration file:

global:
  scrape_interval: 30s
  scrape_timeout: 30s
rule_files:
- /etc/prometheus/recording.rules
scrape_configs:
- job_name: etcd
  static_configs:
    - targets:
      - 10.72.132.6:2379
      - 10.72.134.15:2379
      - 10.72.146.27:2379
      - 10.72.145.36:2379
      - 10.72.144.241:2379
- job_name: 'prometheus'
  static_configs:
    - targets: ['localhost:9090']
- job_name: 'cloudwatch'
  static_configs:
    - targets: ['prometheus-cloudwatch-exporter:80']
- job_name: 'kube-state-metrics'
  static_configs:
    - targets: ['kube-state-metrics:8080']

- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https

- job_name: 'kubernetes-nodes'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)

- job_name: 'kubernetes-service-endpoints'
  scheme: https
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: (.+)(?::\d+);(\d+)
    replacement: $1:$2
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_service_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name

- job_name: 'kubernetes-services'
  scheme: https
  metrics_path: /probe
  params:
    module: [http_2xx]
  kubernetes_sd_configs:
  - role: service
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
    action: keep
    regex: true
  - source_labels: [__address__]
    target_label: __param_target
  - target_label: __address__
    replacement: blackbox
  - source_labels: [__param_target]
    target_label: instance
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_service_namespace]
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    target_label: kubernetes_name

- job_name: 'kubernetes-pods'
  scheme: https
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: (.+):(?:\d+);(\d+)
    replacement: ${1}:${2}
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_pod_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
  • Alertmanager configuration file: N/A

  • Logs:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x563266]

goroutine 497 [running]:
panic(0x18939a0, 0xc420018030)
        /usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/prometheus/prometheus/storage/local/chunk.(*Desc).Add(0xc428100980, 0x15993cb7499, 0x3ff0000000000000, 0x0, 0x0, 0x0, 0x8000101, 0x0)
        /go/src/github.com/prometheus/prometheus/storage/local/chunk/chunk.go:136 +0x26
github.com/prometheus/prometheus/storage/local.(*memorySeries).add(0xc446717110, 0x15993cb7499, 0x3ff0000000000000, 0xc446717110, 0x0, 0x0)
        /go/src/github.com/prometheus/prometheus/storage/local/series.go:245 +0x115
github.com/prometheus/prometheus/storage/local.(*MemorySeriesStorage).Append(0xc42015e8c0, 0xc45a8c10a0, 0x0, 0x0)
        /go/src/github.com/prometheus/prometheus/storage/local/storage.go:858 +0x398
github.com/prometheus/prometheus/storage.Fanout.Append(0xc4200689e0, 0x2, 0x2, 0xc45a8c10a0, 0xc45d46f440, 0x26f5670)
        /go/src/github.com/prometheus/prometheus/storage/storage.go:60 +0x66
github.com/prometheus/prometheus/storage.(*Fanout).Append(0xc420068c40, 0xc45a8c10a0, 0xc485f3db50, 0xc485f3db40)
        <autogenerated>:3 +0x6e
github.com/prometheus/prometheus/retrieval.ruleLabelsAppender.Append(0x265af40, 0xc420068c40, 0xc455c97e90, 0xc45a8c10a0, 0x0, 0x0)
        /go/src/github.com/prometheus/prometheus/retrieval/target.go:241 +0x1b3
github.com/prometheus/prometheus/retrieval.(*ruleLabelsAppender).Append(0xc45d45eaa0, 0xc45a8c10a0, 0x0, 0x0)
        <autogenerated>:33 +0x6e
github.com/prometheus/prometheus/retrieval.(*scrapeLoop).append(0xc45db75310, 0xc46031f000, 0x3de, 0x600)
        /go/src/github.com/prometheus/prometheus/retrieval/scrape.go:456 +0x92
github.com/prometheus/prometheus/retrieval.(*scrapeLoop).run(0xc45db75310, 0x6fc23ac00, 0x6fc23ac00, 0x0)
        /go/src/github.com/prometheus/prometheus/retrieval/scrape.go:425 +0x602
created by github.com/prometheus/prometheus/retrieval.(*scrapePool).sync
        /go/src/github.com/prometheus/prometheus/retrieval/scrape.go:240 +0x3e5
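
For context, this is the panic Go produces when a method is called through a nil interface value. Below is a minimal, hypothetical Go sketch (the type and field names are illustrative stand-ins, not the actual Prometheus storage/local/chunk code) that reproduces the same "invalid memory address or nil pointer dereference" signature as the trace above:

package main

// chunk is an illustrative stand-in for an interface-typed field such as the
// chunk held by a chunk descriptor.
type chunk interface {
	Add(t int64, v float64) ([]chunk, error)
}

// desc is an illustrative wrapper whose embedded chunk may be nil,
// e.g. after eviction.
type desc struct {
	c chunk // nil means there is no chunk to append to
}

func (d *desc) Add(t int64, v float64) ([]chunk, error) {
	// When d.c is nil, this call dereferences a nil interface and panics with
	// "runtime error: invalid memory address or nil pointer dereference".
	return d.c.Add(t, v)
}

func main() {
	d := &desc{} // c left nil on purpose
	d.Add(1484200000000, 1.0)
}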

beorn7 commented Jan 13, 2017

Trying to append to a chunk desc which has no chunk.

This might be fixed by #2277, which will be released with the upcoming 1.5.0.

I'll keep an eye on it.
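
For illustration, continuing the hypothetical sketch from the issue body above, the kind of guard that avoids this class of panic looks roughly like the following (a sketch of the general shape only; it is not claimed to be the actual change in #2277):

import "errors"

func (d *desc) Add(t int64, v float64) ([]chunk, error) {
	// Guard against a descriptor whose chunk is missing (e.g. evicted)
	// instead of dereferencing a nil interface and crashing the process.
	if d.c == nil {
		return nil, errors.New("chunk desc has no chunk to append to")
	}
	return d.c.Add(t, v)
}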

beorn7 self-assigned this Jan 13, 2017

fabxc commented Jan 17, 2017

Hm, will it or will it not?
With the qualifier "might", this issue will stick around forever until it is closed for being too old (as I suppose that's a very infrequent crash).

beorn7 commented Jan 17, 2017

The theory that #2277 fixes this is easy to falsify if the crash happens again with the fix in. As is the nature of such a theory, it is difficult to verify conclusively.

So yes, if we don't see the same backtrace with 1.5 for a while, we will close this issue.

ichekrygin commented Jan 17, 2017

@beorn7 any word on the ETA for 1.5.0?

beorn7 commented Jan 17, 2017

We should go for it soon. I have a few things up my sleeve that would be nice to get in, but they are not blockers.

beorn7 commented Feb 1, 2017

1.5 is out. Please re-open if the same thing happens again with 1.5.

lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 24, 2019
