Panic after upgrade from 2.4.3 to 2.7.1 #5202

Closed
wleese opened this Issue Feb 11, 2019 · 2 comments

wleese commented Feb 11, 2019

When upgrading Prometheus from 2.4.3 to 2.7.1, the following output appears in the logs and Prometheus panics:

 level=info ts=2019-02-11T12:26:44.709257742Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
 level=info ts=2019-02-11T12:26:44.710505333Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
 level=info ts=2019-02-11T12:26:44.711558252Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
 level=info ts=2019-02-11T12:26:44.712770757Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
 level=info ts=2019-02-11T12:26:44.714364213Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
 level=info ts=2019-02-11T12:26:44.715412567Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
 level=info ts=2019-02-11T12:26:44.716649605Z caller=kubernetes.go:201 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
 level=info ts=2019-02-11T12:26:44.727791547Z caller=main.go:722 msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
-config-reloader level=info ts=2019-02-11T12:26:44.72825157Z caller=reloader.go:208 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=
 level=error ts=2019-02-11T12:26:49.646633379Z caller=scrape.go:147 component="scrape manager" scrape_pool=istio-demo/istio-demo-istio-demo-servicemonitor/0 msg="Error creating HTTP client" err="unable to use specified CA cert /etc/prometheus/secrets/istio-prometheus-certs/root-cert.pem: open /etc/prometheus/secrets/istio-prometheus-certs/root-cert.pem: no such file or directory"
 level=error ts=2019-02-11T12:26:49.648118428Z caller=scrape.go:147 component="scrape manager" scrape_pool=pieabo/pieabo-pieabo-servicemonitor/0 msg="Error creating HTTP client" err="unable to use specified CA cert /etc/prometheus/secrets/istio-prometheus-certs/root-cert.pem: open /etc/prometheus/secrets/istio-prometheus-certs/root-cert.pem: no such file or directory"
 level=warn ts=2019-02-11T12:26:50.757977344Z caller=scrape.go:1091 component="scrape manager" scrape_pool=monitoringctl/monitoringctl-alertmanager-bolcom-dev-d0t-platform/0 target=http://10.22.7.27:9093/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=3
 level=warn ts=2019-02-11T12:26:52.300550842Z caller=scrape.go:1091 component="scrape manager" scrape_pool=monitoring/alertmanager-bolcom-dev-d0t-platform/0 target=http://10.22.11.11:9093/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=14
 level=warn ts=2019-02-11T12:27:00.961245949Z caller=scrape.go:1091 component="scrape manager" scrape_pool=monitoring/bolcom-dev-d0t-platform-pushgateway/0 target=http://10.22.11.9:9091/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=11

 panic: runtime error: invalid memory address or nil pointer dereference
 [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x669c12]
 
 goroutine 2928 [running]:
 net/http.(*Client).deadline(0x0, 0xc0246422f8, 0x40bb8f, 0xc033d3f440)
 	/usr/local/go/src/net/http/client.go:187 +0x22
 net/http.(*Client).do(0x0, 0xc01b4ff000, 0x0, 0x0, 0x0)
 	/usr/local/go/src/net/http/client.go:527 +0xab
 net/http.(*Client).Do(0x0, 0xc01b4ff000, 0x23, 0xc00d75b350, 0x9)
 	/usr/local/go/src/net/http/client.go:509 +0x35
 github.com/prometheus/prometheus/scrape.(*targetScraper).scrape(0xc03ae12c00, 0x1fd4a60, 0xc00f876de0, 0x1fb2760, 0xc0352e6af0, 0x0, 0x0, 0x0, 0x0)
 	/app/scrape/scrape.go:471 +0x111
 github.com/prometheus/prometheus/scrape.(*scrapeLoop).run(0xc03a95f480, 0x6fc23ac00, 0x2540be400, 0x0)
 	/app/scrape/scrape.go:813 +0x487
 created by github.com/prometheus/prometheus/scrape.(*scrapePool).sync
 	/app/scrape/scrape.go:336 +0x45d
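
The nil pointer dereference is consistent with the scrape loop using an HTTP client that was never created: the "Error creating HTTP client" lines above show client construction failing because the CA file is missing, and the trace shows net/http.(*Client).Do being called on a nil receiver. A minimal sketch of that failure mode (the newClient helper below is a hypothetical stand-in, not the actual Prometheus code path):

 package main

 import (
 	"fmt"
 	"net/http"
 )

 // newClient is a hypothetical stand-in for the client construction that fails
 // in the logs above when the referenced CA file does not exist.
 func newClient(caFile string) (*http.Client, error) {
 	return nil, fmt.Errorf("unable to use specified CA cert %s: no such file or directory", caFile)
 }

 func main() {
 	client, err := newClient("/etc/prometheus/secrets/istio-prometheus-certs/root-cert.pem")
 	if err != nil {
 		fmt.Println("Error creating HTTP client:", err) // error is logged, but client stays nil
 	}

 	req, _ := http.NewRequest("GET", "http://10.22.7.27:9093/metrics", nil)

 	// client is nil here; (*http.Client).Do dereferences it immediately, so this
 	// panics with "invalid memory address or nil pointer dereference", matching
 	// the goroutine trace above.
 	resp, err := client.Do(req)
 	_ = resp
 	_ = err
 }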

Related to #5172?

simonpasquier commented Feb 11, 2019

Yes, it looks like the same problem as #5172... Would you mind closing this one?

simonpasquier commented Feb 18, 2019

Closing, as it is the same stack trace and is already covered by #5172. Thanks for the report anyway!
