
Prometheus/TSDB doesn't unmap opened chunks/index??? #5426

Open

pborzenkov opened this Issue Apr 2, 2019 · 1 comment

pborzenkov commented Apr 2, 2019

Bug Report

I've seen Prometheus crash with fatal error: runtime: cannot allocate memory in the Go runtime:

Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: fatal error: runtime: cannot allocate memory
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime stack:
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.throw(0x1d02d0a, 0x1f)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/panic.go:616 +0x81
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.persistentalloc1(0x120, 0x0, 0x2c5fdb0, 0xe)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/malloc.go:997 +0x27f
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.persistentalloc.func1()
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/malloc.go:950 +0x45
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.systemstack(0x0)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/asm_amd64.s:409 +0x79
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.mstart()
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/proc.go:1175
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: goroutine 2050 [running]:
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.systemstack_switch()
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/asm_amd64.s:363 fp=0xc442cc42b8 sp=0xc442cc42b0 pc=0x457400
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.persistentalloc(0x120, 0x0, 0x2c5fdb0, 0x7f6e10d77e88)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/malloc.go:949 +0x82 fp=0xc442cc4300 sp=0xc442cc42b8 pc=0x410d52
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.newBucket(0x1, 0xe, 0x4248c6)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/mprof.go:173 +0x5e fp=0xc442cc4338 sp=0xc442cc4300 pc=0x42404e
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.stkbucket(0x1, 0xf6000, 0xc442cc43e0, 0xe, 0x20, 0x1, 0x7f6e10d77e88)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/mprof.go:240 +0x1b0 fp=0xc442cc4398 sp=0xc442cc4338 pc=0x424350
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.mProf_Malloc(0xc48e504000, 0xf6000)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/mprof.go:344 +0xd6 fp=0xc442cc4510 sp=0xc442cc4398 pc=0x424926
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.profilealloc(0xc4204e8400, 0xc48e504000, 0xf6000)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/malloc.go:865 +0x4b fp=0xc442cc4530 sp=0xc442cc4510 pc=0x410aeb
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.mallocgc(0xf6000, 0x1ae7340, 0x1, 0x2c5e130)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/malloc.go:790 +0x479 fp=0xc442cc45d0 sp=0xc442cc4530 pc=0x410219
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.makeslice(0x1ae7340, 0x0, 0x61c5, 0x2c5e130, 0x0, 0x0)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/slice.go:61 +0x77 fp=0xc442cc4600 sp=0xc442cc45d0 pc=0x441657
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/promql.(*evaluator).rangeEval(0xc44b99e960, 0xc424c7c040, 0xc442cc5200, 0x2, 0x2, 0x0, 0xc442cc4d18, 0x4113cc)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/promql/engine.go:787 +0x461 fp=0xc442cc4cb0 sp=0xc442cc4600 pc=0x158c431
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/promql.(*evaluator).eval(0xc44b99e960, 0x1ef7600, 0xc42c303ae0, 0xc442cc52f0, 0xc44b99e960)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/promql/engine.go:910 +0x30bf fp=0xc442cc5290 sp=0xc442cc4cb0 pc=0x1590b1f
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/promql.(*evaluator).Eval(0xc44b99e960, 0x1ef7600, 0xc42c303ae0, 0x0, 0x0, 0x0, 0x0)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/promql/engine.go:699 +0x80 fp=0xc442cc52c8 sp=0xc442cc5290 pc=0x158bc30
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/promql.(*Engine).execEvalStmt(0xc420720360, 0x1f07680, 0xc47607e240, 0xc425307f20, 0xc42c303b30, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/promql/engine.go:436 +0x487 fp=0xc442cc55f0 sp=0xc442cc52c8 pc=0x1589a87
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/promql.(*Engine).exec(0xc420720360, 0x1f07680, 0xc47607e240, 0xc425307f20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/promql/engine.go:390 +0x5a7 fp=0xc442cc56b0 sp=0xc442cc55f0 pc=0x1589387
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/promql.(*query).Exec(0xc425307f20, 0x1f07680, 0xc47607e0f0, 0xc4581622d0)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/promql/engine.go:178 +0x94 fp=0xc442cc5778 sp=0xc442cc56b0 pc=0x15877b4
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/rules.EngineQueryFunc.func1(0x1f07680, 0xc47607e0f0, 0xc4581622d0, 0x4f, 0x270c1f82, 0xed4341f50, 0x2c3f700, 0x1522916, 0x1f07600, 0xc42003c028, ...)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/rules/manager.go:170 +0xc5 fp=0xc442cc5800 sp=0xc442cc5778 pc=0x15cb225
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/rules.(*RecordingRule).Eval(0xc4557c6980, 0x1f07680, 0xc47607e0f0, 0x270c1f82, 0xed4341f50, 0x2c3f700, 0xc420583a80, 0xc4205b8280, 0x1d82f2e85c6, 0x2c3f700, ...)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/rules/recording.go:76 +0xcf fp=0xc442cc5940 sp=0xc442cc5800 pc=0x15c95df
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/rules.(*Group).Eval.func1(0x1f07600, 0xc42003c028, 0xc44ea323c0, 0x270c1f82, 0xed4341f50, 0x2c3f700, 0x2e, 0x1f22a20, 0xc4557c6980)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/rules/manager.go:468 +0x285 fp=0xc442cc5c70 sp=0xc442cc5940 pc=0x15cb9f5
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/rules.(*Group).Eval(0xc44ea323c0, 0x1f07600, 0xc42003c028, 0x270c1f82, 0xed4341f50, 0x2c3f700)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/rules/manager.go:536 +0xad fp=0xc442cc5cf0 sp=0xc442cc5c70 pc=0x15c6cbd
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/rules.(*Group).run.func1()
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/rules/manager.go:292 +0xc1 fp=0xc442cc5d70 sp=0xc442cc5cf0 pc=0x15cb531
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/rules.(*Group).run(0xc44ea323c0, 0x1f07600, 0xc42003c028)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/rules/manager.go:344 +0x303 fp=0xc442cc5fa8 sp=0xc442cc5d70 pc=0x15c58a3
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: github.com/prometheus/prometheus/rules.(*Manager).Update.func1.1(0xc42018f4a0, 0xc44ea323c0)
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/rules/manager.go:781 +0x60 fp=0xc442cc5fd0 sp=0xc442cc5fa8 pc=0x15ce5c0
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: runtime.goexit()
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /usr/lib/golang/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc442cc5fd8 sp=0xc442cc5fd0 pc=0x459e11
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: created by github.com/prometheus/prometheus/rules.(*Manager).Update.func1
Apr 01 18:03:45 c01-inf2.vstoragedomain prometheus[10700]: /builddir/build/BUILD/prometheus-2.7.1/src/github.com/prometheus/prometheus/rules/manager.go:776 +0x56

I started investigating after a Prometheus restart; the crash is most likely due to vm_area (VMA) exhaustion, as the process had hit the vm.max_map_count limit:

[root@c01-inf2 ~]# sysctl vm.max_map_count
vm.max_map_count = 65530
[root@c01-inf2 ~]# cat /proc/23684/maps | wc -l
65532
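
The per-process mapping count can also be checked programmatically. Below is a minimal Go sketch (countMappings is a hypothetical helper, not Prometheus code) that counts VMAs the same way the shell pipeline above does, by counting lines in /proc/<pid>/maps:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// countMappings returns the number of VMAs listed in /proc/<pid>/maps.
// Every mmap call that cannot be merged with a neighbouring region adds
// a line here; the kernel starts refusing new mappings with ENOMEM once
// the count reaches vm.max_map_count.
func countMappings(pid string) (int, error) {
	f, err := os.Open("/proc/" + pid + "/maps")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	n := 0
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		n++
	}
	return n, sc.Err()
}

func main() {
	n, err := countMappings("self")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("mappings:", n)
}
```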

It looks like Prometheus doesn't call munmap on the mmapped chunks/index files at all (I straced the process for ~5 minutes while running various queries):

[root@c01-inf2 ~]# cat /tmp/mmap.str |grep mmap | tail -10
23745 mmap(NULL, 18143315, PROT_READ, MAP_SHARED, 274, 0) = -1 ENOMEM (Cannot allocate memory)
23745 mmap(NULL, 18143315, PROT_READ, MAP_SHARED, 275, 0) = -1 ENOMEM (Cannot allocate memory)
23745 mmap(NULL, 18143315, PROT_READ, MAP_SHARED, 276, 0) = -1 ENOMEM (Cannot allocate memory)
23745 mmap(NULL, 18143315, PROT_READ, MAP_SHARED, 277, 0) = -1 ENOMEM (Cannot allocate memory)
23745 mmap(NULL, 18143315, PROT_READ, MAP_SHARED, 278, 0) = -1 ENOMEM (Cannot allocate memory)
23745 mmap(NULL, 18143315, PROT_READ, MAP_SHARED, 279, 0) = -1 ENOMEM (Cannot allocate memory)
23745 mmap(NULL, 18143315, PROT_READ, MAP_SHARED, 280, 0) = -1 ENOMEM (Cannot allocate memory)
23745 mmap(NULL, 18143315, PROT_READ, MAP_SHARED, 281, 0) = -1 ENOMEM (Cannot allocate memory)
23745 mmap(NULL, 18143315, PROT_READ, MAP_SHARED, 282, 0) = -1 ENOMEM (Cannot allocate memory)
23745 mmap(NULL, 18143315, PROT_READ, MAP_SHARED, 283, 0) = -1 ENOMEM (Cannot allocate memory)

[root@c01-inf2 ~]# cat /tmp/mmap.str |grep munmap | tail -10

And the same blocks are mmapped multiple times, for example:

[root@c01-inf2 ~]# cat /proc/23684/maps | grep 01D73NZTWYTXQAKGDENJDEC5QE
7e5bf06e7000-7e5bf1734000 r--s 00000000 08:03 3807907                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/index
7e5bf1734000-7e5bf2882000 r--s 00000000 08:03 3807908                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/chunks/000001
7e63ec76c000-7e63ed7b9000 r--s 00000000 08:03 3807907                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/index
7e63ed7b9000-7e63ee907000 r--s 00000000 08:03 3807908                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/chunks/000001
7e6be67c5000-7e6be7812000 r--s 00000000 08:03 3807907                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/index
7e6be7812000-7e6be8960000 r--s 00000000 08:03 3807908                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/chunks/000001
7e73de832000-7e73df87f000 r--s 00000000 08:03 3807907                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/index
7e73df87f000-7e73e09cd000 r--s 00000000 08:03 3807908                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/chunks/000001
7e7bd4833000-7e7bd5880000 r--s 00000000 08:03 3807907                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/index
7e7bd5880000-7e7bd69ce000 r--s 00000000 08:03 3807908                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/chunks/000001
7e83c8808000-7e83c9855000 r--s 00000000 08:03 3807907                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/index
7e83c9855000-7e83ca9a3000 r--s 00000000 08:03 3807908                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/chunks/000001
7e8bba7d1000-7e8bbb81e000 r--s 00000000 08:03 3807907                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/index
7e8bbb81e000-7e8bbc96c000 r--s 00000000 08:03 3807908                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/chunks/000001
7e93aa74e000-7e93ab79b000 r--s 00000000 08:03 3807907                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/index
7e93ab79b000-7e93ac8e9000 r--s 00000000 08:03 3807908                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/chunks/000001
7e9b9869f000-7e9b996ec000 r--s 00000000 08:03 3807907                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/index
7e9b996ec000-7e9b9a83a000 r--s 00000000 08:03 3807908                    /var/lib/prometheus/data/01D73NZTWYTXQAKGDENJDEC5QE/chunks/000001
<skipped>

While TSDB itself handles ENOMEM from its own mmap calls gracefully, the Go runtime will crash outright if it can't mmap an additional memory region because the vm_area limit has been reached.

Environment

  • System information:
[root@c01-inf2 ~]# uname -srm
Linux 3.10.0-862.20.2.vz7.73.25 x86_64
  • Prometheus version:
[root@c01-inf2 ~]# prometheus --version
prometheus, version 2.7.1 (branch: non-git, revision: non-git)
  build user:       mockbuild@builder11.eng.sw.ru
  build date:       20190401-09:08:31
  go version:       go1.10.4
  • Logs:
    No errors or indication of abnormal behaviour in the logs.
pborzenkov commented Apr 2, 2019

Actually, it looks like there was abnormal behaviour in the logs after all:

Apr 02 11:21:20 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:21:20.370466107Z caller=main.go:589 msg="Server is ready to receive web requests."
Apr 02 11:21:23 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:21:23.648843745Z caller=compact.go:443 component=tsdb msg="write block" mint=1553787208745 maxt=1553788800000 ulid=01D7EJ6VEMHX75QDP257Y0MSAJ
Apr 02 11:22:14 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:22:14.393952689Z caller=compact.go:443 component=tsdb msg="write block" mint=1553787208745 maxt=1553788800000 ulid=01D7EJ8D4NA7D9CB2Q8Q9P5ZM3
Apr 02 11:23:03 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:23:03.68076303Z caller=compact.go:443 component=tsdb msg="write block" mint=1553787208745 maxt=1553788800000 ulid=01D7EJ9X46G1JNBHVW0YNNSDJF
Apr 02 11:23:55 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:23:55.748961413Z caller=compact.go:443 component=tsdb msg="write block" mint=1553787208745 maxt=1553788800000 ulid=01D7EJBFS53K6A8DRV4JB5WMCK
Apr 02 11:24:43 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:24:43.853416019Z caller=compact.go:443 component=tsdb msg="write block" mint=1553787208745 maxt=1553788800000 ulid=01D7EJCZ038XB09SMFP0CEF76D
Apr 02 11:25:35 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:25:35.720641508Z caller=compact.go:443 component=tsdb msg="write block" mint=1553787208745 maxt=1553788800000 ulid=01D7EJEH3VPBHG10GBNCAPGJ7G
Apr 02 11:26:26 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:26:26.228097225Z caller=compact.go:443 component=tsdb msg="write block" mint=1553787208745 maxt=1553788800000 ulid=01D7EJG2WZJHDSXV4PC53073TW
Apr 02 11:27:14 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:27:14.002674599Z caller=compact.go:443 component=tsdb msg="write block" mint=1553787208745 maxt=1553788800000 ulid=01D7EJHHN0TQA8CRCSEMPX4NT5
Apr 02 11:28:03 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:28:03.141370401Z caller=compact.go:443 component=tsdb msg="write block" mint=1553787208745 maxt=1553788800000 ulid=01D7EJK1AHZM6T48SB4V5JZBDP
Apr 02 11:28:51 c01-inf2.vstoragedomain prometheus[23684]: level=info ts=2019-04-02T08:28:51.942796773Z caller=compact.go:443 component=tsdb msg="write block" mint=1553787208745 maxt=1553788800000 ulid=01D7EJMH4B80F4MM5G8YRMF5GF

Prometheus tried to compact the same time range again and again, writing a new block each time, until the disk filled up and it crashed like this:

Apr 02 13:37:44 c01-inf2.vstoragedomain prometheus[23684]: level=error ts=2019-04-02T10:37:44.824757358Z caller=db.go:341 component=tsdb msg="compaction failed" err="persist head block: write compaction: write chunks: no space left on device"
Apr 02 13:37:44 c01-inf2.vstoragedomain prometheus[23684]: level=warn ts=2019-04-02T10:37:44.969474243Z caller=scrape.go:835 component="scrape manager" scrape_pool=consulsd target=http://s02-inf2.vstoragedomain:38142/metrics msg="append failed" err="write to WAL: log samples: write /var/lib/prometheus/data/wal/00004326: no space left on device"
Apr 02 13:37:44 c01-inf2.vstoragedomain prometheus[23684]: level=warn ts=2019-04-02T10:37:44.969660773Z caller=scrape.go:850 component="scrape manager" scrape_pool=consulsd target=http://s02-inf2.vstoragedomain:38142/metrics msg="appending scrape report failed" err="write to WAL: log samples: write /var/lib/prometheus/data/wal/00004326: no space left on device"
Apr 02 13:37:45 c01-inf2.vstoragedomain prometheus[23684]: level=warn ts=2019-04-02T10:37:45.056883446Z caller=scrape.go:835 component="scrape manager" scrape_pool=consulsd target=http://c07-inf2.vstoragedomain:3903/metrics msg="append failed" err="write to WAL: log samples: write /var/lib/prometheus/data/wal/00004326: no space left on device"
Apr 02 13:37:45 c01-inf2.vstoragedomain prometheus[23684]: level=warn ts=2019-04-02T10:37:45.057001413Z caller=scrape.go:850 component="scrape manager" scrape_pool=consulsd target=http://c07-inf2.vstoragedomain:3903/metrics msg="appending scrape report failed" err="write to WAL: log samples: write /var/lib/prometheus/data/wal/00004326: no space left on device"
Apr 02 13:37:45 c01-inf2.vstoragedomain prometheus[23684]: level=warn ts=2019-04-02T10:37:45.205543404Z caller=scrape.go:835 component="scrape manager" scrape_pool=consulsd target=http://s01-inf2.vstoragedomain:44042/metrics msg="append failed" err="write to WAL: log samples: write /var/lib/prometheus/data/wal/00004326: no space left on device"
Apr 02 13:37:45 c01-inf2.vstoragedomain prometheus[23684]: panic: runtime error: slice bounds out of range