Invalid iterator crash bug in newSeriesFrontier() #114

Closed
juliusv opened this issue Mar 31, 2013 · 4 comments

juliusv commented Mar 31, 2013

With my expressions benchmark (living in branch "julius-metrics-persistence-benchmarks"), I managed to provoke the following crash in newSeriesFrontier():

$ go run -a expressions_benchmark.go --leveldbFlushOnMutate=false -numTimeseries=10 -populateStorage=true -deleteStorage=true -evalIntervalSeconds=3600 > /tmp/foo.txt
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x18 pc=0x7f9e61c48254]

goroutine 1 [select]:
github.com/prometheus/prometheus/storage/metric.(*tieredStorage).MakeView(0xf840000e00, 0xf8445f8040, 0xf8420e0140, 0xdf8475800, 0x0, ...)
    /home/julius/gosrc/src/github.com/prometheus/prometheus/storage/metric/tiered.go:135 +0x34b
github.com/prometheus/prometheus/rules/ast.viewAdapterForRangeQuery(0xf8400bee00, 0xf8420e0f80, 0x0, 0x0, 0x0, ...)
    /home/julius/gosrc/src/github.com/prometheus/prometheus/rules/ast/query_analyzer.go:138 +0x480
github.com/prometheus/prometheus/rules/ast.EvalVectorRange(0xf844652ac0, 0xf8420e0f80, 0x0, 0x0, 0x0, ...)
    /home/julius/gosrc/src/github.com/prometheus/prometheus/rules/ast/ast.go:274 +0xff
main.doBenchmark(0x6b5ce4, 0xf800000008)
    /home/julius/gosrc/src/github.com/prometheus/prometheus/expressions_benchmark.go:113 +0x45c
main.main()
    /home/julius/gosrc/src/github.com/prometheus/prometheus/expressions_benchmark.go:153 +0x37a

goroutine 2 [syscall]:
created by runtime.main
    /home/julius/go/src/pkg/runtime/proc.c:221

goroutine 9 [syscall]:
github.com/jmhodges/levigo._Cfunc_leveldb_iter_key(0x7f9e480017e0, 0xf843f274e0)
    github.com/jmhodges/levigo/_obj/_cgo_defun.c:178 +0x2f
github.com/jmhodges/levigo.(*Iterator).Key(0xf843f274a0, 0x746920410000000f, 0x7f9e620038a0, 0x100000001)
    github.com/jmhodges/levigo/_obj/batch.cgo1.go:519 +0x44
github.com/prometheus/prometheus/storage/raw/leveldb.levigoIterator.Key(0xf843f274a0, 0xf843f27498, 0xf843f27490, 0xf84009e820, 0x0, ...)
    /home/julius/gosrc/src/github.com/prometheus/prometheus/storage/raw/leveldb/leveldb.go:121 +0xe8
github.com/prometheus/prometheus/storage/raw/leveldb.(*levigoIterator).Key(0xf84195e060, 0x0, 0x0, 0x0)
    /home/julius/gosrc/src/github.com/prometheus/prometheus/storage/raw/leveldb/batch.go:0 +0x8c
github.com/prometheus/prometheus/storage/metric.extractSampleKey(0xf844bbeb40, 0xf84195e060, 0xf844652480, 0x0, 0x0, ...)
    /home/julius/gosrc/src/github.com/prometheus/prometheus/storage/metric/leveldb.go:692 +0xa4
github.com/prometheus/prometheus/storage/metric.newSeriesFrontier(0xf8400d6000, 0xf8400d9b40, 0xf8400d6000, 0xf84195e000, 0x0, ...)
    /home/julius/gosrc/src/github.com/prometheus/prometheus/storage/metric/frontier.go:147 +0x7ee
github.com/prometheus/prometheus/storage/metric.(*tieredStorage).renderView(0xf840000e00, 0xf8445f8040, 0xf8420e0140, 0xf840f596c0)
    /home/julius/gosrc/src/github.com/prometheus/prometheus/storage/metric/tiered.go:384 +0x444
github.com/prometheus/prometheus/storage/metric.(*tieredStorage).Serve(0xf840000e00, 0x0)
    /home/julius/gosrc/src/github.com/prometheus/prometheus/storage/metric/tiered.go:181 +0x143
created by main.main
    /home/julius/gosrc/src/github.com/prometheus/prometheus/expressions_benchmark.go:139 +0x292

goroutine 10 [syscall]:
created by addtimer
    /home/julius/go/src/pkg/runtime/ztime_amd64.c:72
exit status 2

The culprit is the rewind of the iterator in newSeriesFrontier(): we rewind it even though it is possible that it is already pointing at the first element on disk, which leaves the iterator in an invalid state.

Please add logic to prevent this, as well as a regression test.
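For illustration, here is a minimal Go sketch of the failure pattern and one possible guard. The package, interface, and function names below are assumptions made for this example; the interface only mirrors the handful of iterator methods the sketch needs, and none of this is the actual frontier.go or levigo code.

```go
// Package frontiersketch illustrates the iterator-rewind bug described above.
// All names here are hypothetical; this is not the actual frontier.go code.
package frontiersketch

import (
	"bytes"
	"fmt"
)

// rawIterator mirrors the small subset of a LevelDB-style iterator API used
// below (Seek, Prev, Valid, Key). It is an assumption for illustration only.
type rawIterator interface {
	Seek(key []byte)
	Prev()
	Valid() bool
	Key() []byte
}

// seekToLastKeyAtOrBefore tries to position the iterator at the last key that
// is <= target (for simplicity it does not handle a seek past the end of the
// keyspace). The buggy variant rewinds unconditionally after the Seek: if the
// iterator already sits on the first key on disk, Prev() invalidates it, and a
// later Key() call dereferences an invalid position, producing a nil-pointer
// panic like the one reported above.
func seekToLastKeyAtOrBefore(it rawIterator, target []byte) ([]byte, error) {
	it.Seek(target) // lands on the first key >= target, or becomes invalid.

	// Only rewind when the seek actually overshot the target.
	if it.Valid() && !bytes.Equal(it.Key(), target) {
		it.Prev()
	}

	// After a rewind the iterator may have fallen off the front of the
	// keyspace, so check Valid() before touching Key().
	if !it.Valid() {
		return nil, fmt.Errorf("no key at or before %q", target)
	}
	return it.Key(), nil
}
```

The essential point is that any backward step can leave the iterator invalid, so Valid() has to be checked (or the rewind skipped when the iterator already sits at the first key) before Key() or Value() is read; the real fix in the tiered storage may of course look different.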

ghost assigned matttproud Mar 31, 2013
matttproud commented

@juliusv, I discovered several bugs in both the view materialization and in the benchmark itself. I'll file pull requests for the respective pieces shortly.

matttproud commented

First, we will need to re-run the benchmarks of the old and new system with this commit cherry-picked: 6e0c65c.

matttproud commented

#115 should address the second part after the tests under ./rules/... are fixed.

juliusv closed this as completed Apr 15, 2013
simonpasquier pushed a commit to simonpasquier/prometheus that referenced this issue Oct 12, 2017