
nil pointer dereference in web stack #4577

Closed
gouthamve opened this Issue Sep 5, 2018 · 6 comments


gouthamve commented Sep 5, 2018

Launched the current master and saw these:

level=error ts=2018-09-05T17:26:41.234957855Z caller=stdlib.go:89 component=web caller="http: panic serving 10.56.0.181:49400" msg="runtime error: invalid memory address or nil pointer dereference"
level=error ts=2018-09-05T17:26:41.52166956Z caller=stdlib.go:89 component=web caller="http: panic serving 10.56.0.181:49396" msg="runtime error: invalid memory address or nil pointer dereference"

I cannot reproduce these anymore now that startup has finished, though.

Full logs:

level=info ts=2018-09-05T17:20:41.337776999Z caller=main.go:238 msg="Starting Prometheus" version="(version=2.4.0-rc0, branch=2.4-rc0, revision=867b7e6515343314ee0d140372b3b55f48179798)"
level=info ts=2018-09-05T17:20:41.337869614Z caller=main.go:239 build_context="(go=go1.10.3, user=root@b5e6a8343129, date=20180905-17:02:16)"
level=info ts=2018-09-05T17:20:41.33790013Z caller=main.go:240 host_details="(Linux 4.4.111+ #1 SMP Thu Apr 19 11:45:40 PDT 2018 x86_64 prometheus-0 (none))"
level=info ts=2018-09-05T17:20:41.337936835Z caller=main.go:241 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2018-09-05T17:20:41.3379604Z caller=main.go:242 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2018-09-05T17:20:41.3434182Z caller=web.go:397 component=web msg="Start listening for connections" address=:80
level=info ts=2018-09-05T17:20:41.343410053Z caller=main.go:554 msg="Starting TSDB ..."
level=info ts=2018-09-05T17:20:41.350650391Z caller=web.go:440 component=web msg="router prefix" prefix=/prometheus
level=info ts=2018-09-05T17:20:41.618695814Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1534852800000 maxt=1534917600000 ulid=01CNGDSZWBP8BJ3T9518HSXC8E
level=info ts=2018-09-05T17:20:41.633832082Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1534917600000 maxt=1534982400000 ulid=01CNJBKGYC1NATN97SZG9XCMNH
level=info ts=2018-09-05T17:20:41.635191604Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1534982400000 maxt=1535047200000 ulid=01CNM9D1CPCTZYX06J0PZRBZYK
level=info ts=2018-09-05T17:20:41.649227599Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535047200000 maxt=1535112000000 ulid=01CNP76R7E28WP45W2FN7Z99HN
level=info ts=2018-09-05T17:20:41.678222688Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535112000000 maxt=1535176800000 ulid=01CNR5067AYCR8CJ2D868M4CHB
level=info ts=2018-09-05T17:20:41.694101114Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535176800000 maxt=1535241600000 ulid=01CNT2SQ7N1KX7QZE1PN1PENTR
level=info ts=2018-09-05T17:20:41.708235819Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535241600000 maxt=1535306400000 ulid=01CNW0K6A3RNX3F0SQRPD6WVAD
level=info ts=2018-09-05T17:20:41.726382044Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535306400000 maxt=1535371200000 ulid=01CNXYCRXEQBHB6B1GKG4KZECT
level=info ts=2018-09-05T17:20:41.741375379Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535371200000 maxt=1535436000000 ulid=01CNZW6D9CM0HT5E8RVK4N8GGZ
level=info ts=2018-09-05T17:20:41.751870977Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535436000000 maxt=1535500800000 ulid=01CP1SZV7EX3PGJS0646H52SP3
level=info ts=2018-09-05T17:20:41.791135672Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535500800000 maxt=1535565600000 ulid=01CP3QSKA8V2PDQGHMQF4XEV3W
level=info ts=2018-09-05T17:20:41.820881482Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535565600000 maxt=1535630400000 ulid=01CP5NJWEY0Z53D4A68PGEH2YD
level=info ts=2018-09-05T17:20:41.844892828Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535630400000 maxt=1535695200000 ulid=01CP7KCEZRN0K6PMAGK0SFKFMS
level=info ts=2018-09-05T17:20:41.858710639Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535695200000 maxt=1535760000000 ulid=01CP9H5YN1RFW1TTM6HXK1CRJW
level=info ts=2018-09-05T17:20:41.864876636Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535760000000 maxt=1535824800000 ulid=01CPBEZHW5YXP87PERHWPYCHXH
level=info ts=2018-09-05T17:20:41.88363159Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535824800000 maxt=1535889600000 ulid=01CPDCS05QNPP4ZZ8NWCTKAXNP
level=info ts=2018-09-05T17:20:41.909966764Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535889600000 maxt=1535954400000 ulid=01CPFAJJ6WQYHRRK0THB49GF39
level=info ts=2018-09-05T17:20:41.920952606Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1535954400000 maxt=1536019200000 ulid=01CPH8C350BK45312QDNC6S40V
level=info ts=2018-09-05T17:20:41.94538984Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1536019200000 maxt=1536084000000 ulid=01CPK65P2VQ4WBSKYZTZJYM91V
level=info ts=2018-09-05T17:20:41.961863298Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1536148800000 maxt=1536156000000 ulid=01CPN3YKF0FB8YAZHNC61NDDJD
level=info ts=2018-09-05T17:20:41.969880269Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1536084000000 maxt=1536148800000 ulid=01CPN3Z8X56XMB9RTPNFK56MY7
level=info ts=2018-09-05T17:20:41.983965018Z caller=repair.go:39 component=tsdb msg="found healthy block" mint=1536156000000 maxt=1536163200000 ulid=01CPNATAP77FA63RJD1YJ4FFEC
level=info ts=2018-09-05T17:20:41.996100572Z caller=wal.go:1247 component=tsdb msg="migrating WAL format"
level=info ts=2018-09-05T17:22:06.545782461Z caller=main.go:564 msg="TSDB started"
level=info ts=2018-09-05T17:22:06.545891278Z caller=main.go:624 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2018-09-05T17:22:06.550473618Z caller=kubernetes.go:187 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-09-05T17:22:06.551957324Z caller=kubernetes.go:187 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-09-05T17:22:06.552911463Z caller=kubernetes.go:187 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-09-05T17:22:06.553882586Z caller=kubernetes.go:187 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-09-05T17:22:06.554950383Z caller=kubernetes.go:187 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-09-05T17:22:06.631197965Z caller=main.go:650 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2018-09-05T17:22:06.631277232Z caller=main.go:523 msg="Server is ready to receive web requests."
level=info ts=2018-09-05T17:22:06.631382952Z caller=main.go:624 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2018-09-05T17:22:06.633898439Z caller=queue_manager.go:258 component=remote queue=0:URL1 msg="Stopping remote storage..."
level=info ts=2018-09-05T17:22:06.633977466Z caller=queue_manager.go:266 component=remote queue=0:URL1 msg="Remote storage stopped."
level=info ts=2018-09-05T17:22:06.633990809Z caller=queue_manager.go:258 component=remote queue=1:URL2 msg="Stopping remote storage..."
level=info ts=2018-09-05T17:22:06.634087858Z caller=queue_manager.go:266 component=remote queue=1:URL2 msg="Remote storage stopped."
level=info ts=2018-09-05T17:22:06.634102421Z caller=queue_manager.go:258 component=remote queue=2:URL3 msg="Stopping remote storage..."
level=info ts=2018-09-05T17:22:06.634126763Z caller=queue_manager.go:266 component=remote queue=2:URL3 msg="Remote storage stopped."
level=error ts=2018-09-05T17:22:06.634479038Z caller=node.go:81 component="discovery manager scrape" discovery=k8s role=node msg="node informer unable to sync cache"
level=error ts=2018-09-05T17:22:06.63455484Z caller=endpoints.go:130 component="discovery manager scrape" discovery=k8s role=endpoint msg="endpoints informer unable to sync cache"
level=info ts=2018-09-05T17:22:06.634568558Z caller=kubernetes.go:187 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=error ts=2018-09-05T17:22:06.635190409Z caller=pod.go:85 component="discovery manager scrape" discovery=k8s role=pod msg="pod informer unable to sync cache"
level=info ts=2018-09-05T17:22:06.635443985Z caller=kubernetes.go:187 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-09-05T17:22:06.636281083Z caller=kubernetes.go:187 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-09-05T17:22:06.637293185Z caller=kubernetes.go:187 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-09-05T17:22:06.638441445Z caller=kubernetes.go:187 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
level=error ts=2018-09-05T17:22:06.641171811Z caller=pod.go:85 component="discovery manager notify" discovery=k8s role=pod msg="pod informer unable to sync cache"
level=info ts=2018-09-05T17:22:06.823312319Z caller=main.go:650 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml
level=warn ts=2018-09-05T17:23:15.769592287Z caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_cpu_usage_seconds_total:sum_rate\nexpr: sum by(namespace, label_name) (sum by(namespace, pod_name) (rate(container_cpu_usage_seconds_total{image!=\"\",job=\"kube-system/cadvisor\"}[5m]))\n  * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"default/kube-state-metrics\"},\n  \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2018-09-05T17:23:15.781660641Z caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_memory_usage_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(pod_name, namespace) (container_memory_usage_bytes{image!=\"\",job=\"kube-system/cadvisor\"})\n  * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"default/kube-state-metrics\"},\n  \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2018-09-05T17:23:15.820598187Z caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_memory_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_memory_bytes{job=\"default/kube-state-metrics\"})\n  * on(namespace, pod) group_left(label_name) label_replace(kube_pod_labels{job=\"default/kube-state-metrics\"},\n  \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2018-09-05T17:23:15.830019674Z caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_cpu_cores:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_cpu_cores{job=\"default/kube-state-metrics\"}\n  and on(pod) kube_pod_status_scheduled{condition=\"true\"}) * on(namespace, pod) group_left(label_name)\n  label_replace(kube_pod_labels{job=\"default/kube-state-metrics\"}, \"pod_name\", \"$1\",\n  \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2018-09-05T17:24:15.906747134Z caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_cpu_usage_seconds_total:sum_rate\nexpr: sum by(namespace, label_name) (sum by(namespace, pod_name) (rate(container_cpu_usage_seconds_total{image!=\"\",job=\"kube-system/cadvisor\"}[5m]))\n  * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"default/kube-state-metrics\"},\n  \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2018-09-05T17:24:15.951130399Z caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_memory_usage_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(pod_name, namespace) (container_memory_usage_bytes{image!=\"\",job=\"kube-system/cadvisor\"})\n  * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"default/kube-state-metrics\"},\n  \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2018-09-05T17:24:15.963463278Z caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_memory_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_memory_bytes{job=\"default/kube-state-metrics\"})\n  * on(namespace, pod) group_left(label_name) label_replace(kube_pod_labels{job=\"default/kube-state-metrics\"},\n  \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2018-09-05T17:24:15.984304734Z caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_cpu_cores:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_cpu_cores{job=\"default/kube-state-metrics\"}\n  and on(pod) kube_pod_status_scheduled{condition=\"true\"}) * on(namespace, pod) group_left(label_name)\n  label_replace(kube_pod_labels{job=\"default/kube-state-metrics\"}, \"pod_name\", \"$1\",\n  \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=info ts=2018-09-05T17:25:46.634318813Z caller=queue_manager.go:340 component=remote queue=0:URL0 msg="Remote storage resharding" from=1 to=2
level=error ts=2018-09-05T17:26:41.234957855Z caller=stdlib.go:89 component=web caller="http: panic serving 10.56.0.181:49400" msg="runtime error: invalid memory address or nil pointer dereference"
level=error ts=2018-09-05T17:26:41.52166956Z caller=stdlib.go:89 component=web caller="http: panic serving 10.56.0.181:49396" msg="runtime error: invalid memory address or nil pointer dereference"
level=info ts=2018-09-05T17:27:46.634422061Z caller=queue_manager.go:340 component=remote queue=0:URL0 msg="Remote storage resharding" from=2 to=1

gouthamve commented Sep 5, 2018

One more popped up:

level=error ts=2018-09-05T17:35:46.758809736Z caller=stdlib.go:89 component=web caller="http: panic serving 10.56.0.181:53572" msg="runtime error: invalid memory address or nil pointer dereference"

So this seems to be fairly common; I will investigate tomorrow.


gouthamve commented Sep 5, 2018

Triggered by: curl 'http://localhost:9090/api/v1/series?match\[\]=\{le!=%22%22\}'


simonpasquier commented Sep 6, 2018

Want to try #4221? I tried to reproduce on my machine but it works for me ™️.


gouthamve commented Sep 6, 2018

On it.


gouthamve commented Sep 6, 2018

Hmm, thanks for that PR! Super weird error:

goroutine 2065 [running]:
github.com/prometheus/prometheus/web.withStackTracer.func1.1(0x1c8fd80, 0xc42066fb60, 0xc432cf8700)
    /go/src/github.com/prometheus/prometheus/web/web.go:82 +0xbc
panic(0x1800e60, 0x286fc20)
    /usr/local/go/src/runtime/panic.go:502 +0x229
github.com/prometheus/prometheus/vendor/github.com/go-kit/kit/log.(*context).Log(0xc45bbe9620, 0xc451e9bce0, 0x6, 0x6, 0x0, 0x1c91740)
    /go/src/github.com/prometheus/prometheus/vendor/github.com/go-kit/kit/log/log.go:124 +0x1a0
github.com/prometheus/prometheus/web/api/v1.(*API).respond(0xc4202ee2d0, 0x1ca1400, 0xc448cc8600, 0x16c9060, 0xc450033160)
    /go/src/github.com/prometheus/prometheus/web/api/v1/api.go:1006 +0x589
github.com/prometheus/prometheus/web/api/v1.(*API).Register.func1.1(0x1ca1400, 0xc448cc8600, 0xc432cf8b00)
    /go/src/github.com/prometheus/prometheus/web/api/v1/api.go:181 +0xad
net/http.HandlerFunc.ServeHTTP(0xc420138f20, 0x1ca1400, 0xc448cc8600, 0xc432cf8b00)
    /usr/local/go/src/net/http/server.go:1947 +0x44
github.com/prometheus/prometheus/util/httputil.CompressionHandler.ServeHTTP(0x1c93420, 0xc420138f20, 0x7fd304b60948, 0xc45a4ab950, 0xc432cf8b00)
    /go/src/github.com/prometheus/prometheus/util/httputil/compression.go:90 +0x7c
github.com/prometheus/prometheus/util/httputil.(CompressionHandler).ServeHTTP-fm(0x7fd304b60948, 0xc45a4ab950, 0xc432cf8b00)
    /go/src/github.com/prometheus/prometheus/web/web.go:281 +0x57
github.com/prometheus/prometheus/web.(*Handler).testReady.func1(0x7fd304b60948, 0xc45a4ab950, 0xc432cf8b00)
    /go/src/github.com/prometheus/prometheus/web/web.go:391 +0x55
net/http.HandlerFunc.ServeHTTP(0xc420138f60, 0x7fd304b60948, 0xc45a4ab950, 0xc432cf8b00)
    /usr/local/go/src/net/http/server.go:1947 +0x44
github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerResponseSize.func1(0x1ca3640, 0xc448cc85a0, 0xc432cf8b00)
    /go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go:196 +0xed
net/http.HandlerFunc.ServeHTTP(0xc42043d440, 0x1ca3640, 0xc448cc85a0, 0xc432cf8b00)
    /usr/local/go/src/net/http/server.go:1947 +0x44
github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerDuration.func2(0x1ca3640, 0xc448cc85a0, 0xc432cf8b00)
    /go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go:76 +0xb5
github.com/prometheus/prometheus/vendor/github.com/prometheus/common/route.(*Router).handle.func1(0x1ca3640, 0xc448cc85a0, 0xc432cf8a00, 0x0, 0x0, 0x0)
    /go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/route/route.go:60 +0x222
github.com/prometheus/prometheus/vendor/github.com/julienschmidt/httprouter.(*Router).ServeHTTP(0xc4200477c0, 0x1ca3640, 0xc448cc85a0, 0xc432cf8a00)
    /go/src/github.com/prometheus/prometheus/vendor/github.com/julienschmidt/httprouter/router.go:299 +0x6d1
github.com/prometheus/prometheus/vendor/github.com/prometheus/common/route.(*Router).ServeHTTP(0xc4201386c0, 0x1ca3640, 0xc448cc85a0, 0xc432cf8a00)
    /go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/route/route.go:98 +0x4c
net/http.StripPrefix.func1(0x1ca3640, 0xc448cc85a0, 0xc432cf8900)
    /usr/local/go/src/net/http/server.go:1986 +0x19a
net/http.HandlerFunc.ServeHTTP(0xc4206d4c00, 0x1ca3640, 0xc448cc85a0, 0xc432cf8900)
    /usr/local/go/src/net/http/server.go:1947 +0x44
net/http.(*ServeMux).ServeHTTP(0xc42043c780, 0x1ca3640, 0xc448cc85a0, 0xc432cf8900)
    /usr/local/go/src/net/http/server.go:2337 +0x130
github.com/prometheus/prometheus/vendor/github.com/opentracing-contrib/go-stdlib/nethttp.Middleware.func2(0x1cada00, 0xc43a1ca000, 0xc432cf8700)
    /go/src/github.com/prometheus/prometheus/vendor/github.com/opentracing-contrib/go-stdlib/nethttp/server.go:74 +0x3ab
net/http.HandlerFunc.ServeHTTP(0xc4206d4c90, 0x1cada00, 0xc43a1ca000, 0xc432cf8700)
    /usr/local/go/src/net/http/server.go:1947 +0x44
github.com/prometheus/prometheus/web.withStackTracer.func1(0x1cada00, 0xc43a1ca000, 0xc432cf8700)
    /go/src/github.com/prometheus/prometheus/web/web.go:87 +0x9d
net/http.HandlerFunc.ServeHTTP(0xc4206d4cc0, 0x1cada00, 0xc43a1ca000, 0xc432cf8700)
    /usr/local/go/src/net/http/server.go:1947 +0x44
net/http.serverHandler.ServeHTTP(0xc42021c750, 0x1cada00, 0xc43a1ca000, 0xc432cf8700)
    /usr/local/go/src/net/http/server.go:2694 +0xbc
net/http.(*conn).serve(0xc44b1dc460, 0x1caf1c0, 0xc438148700)
    /usr/local/go/src/net/http/server.go:1830 +0x651
created by net/http.(*Server).Serve
    /usr/local/go/src/net/http/server.go:2795 +0x27b

gouthamve added a commit to gouthamve/prometheus that referenced this issue Sep 6, 2018

Logger is nil for API. Fixes prometheus#4577
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
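The referenced commit wires a logger into the API. A common defensive variant of the same idea (a sketch with hypothetical names, not the actual patch) is to fall back to a no-op logger so a forgotten dependency degrades silently instead of panicking:

```go
package main

import "fmt"

// Logger is a simplified stand-in for go-kit's log.Logger interface.
type Logger interface {
	Log(keyvals ...interface{}) error
}

// nopLogger discards everything, in the spirit of go-kit's log.NewNopLogger().
type nopLogger struct{}

func (nopLogger) Log(keyvals ...interface{}) error { return nil }

type api struct{ logger Logger }

// newAPI is a hypothetical constructor showing the guard: never store nil.
func newAPI(l Logger) *api {
	if l == nil {
		l = nopLogger{}
	}
	return &api{logger: l}
}

func main() {
	a := newAPI(nil)
	// Safe even though no real logger was provided.
	fmt.Println("log err:", a.logger.Log("msg", "ok"))
	// → log err: <nil>
}
```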

@gouthamve gouthamve closed this in 3e87c04 Sep 6, 2018


lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Mar 22, 2019
