Promtool: Add support for compaction analysis #8940
Conversation
cmd/promtool/main.go (Outdated)

@@ -142,6 +142,10 @@ func main() {
 	dumpMinTime := tsdbDumpCmd.Flag("min-time", "Minimum timestamp to dump.").Default(strconv.FormatInt(math.MinInt64, 10)).Int64()
 	dumpMaxTime := tsdbDumpCmd.Flag("max-time", "Maximum timestamp to dump.").Default(strconv.FormatInt(math.MaxInt64, 10)).Int64()
+
+	tsdbAnalyzeCompaction := tsdbCmd.Command("analyze-compaction", "Plot a distribution of how full TSDB chunks are, relative to the maximum capacity of 120 samples.")
+	analyzeCompactionDB := tsdbAnalyzeCompaction.Arg("db path", "Database path (default is "+defaultDBPath+").").Default(defaultDBPath).String()
+	analyzeCompactionBlockID := tsdbAnalyzeCompaction.Arg("block ID", "The TSDB block to analyze").Default("").String()
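The idea behind the new command, bucketing chunks by how full they are relative to the 120-sample capacity and printing the distribution, can be sketched roughly as follows. This is an illustrative sketch, not the actual promtool implementation; the function and bucket layout are assumptions.

```go
package main

import "fmt"

const maxSamplesPerChunk = 120.0

// fullnessHistogram buckets chunk sample counts into ten 10%-wide
// fullness buckets. A sketch of the analyze-compaction idea only.
func fullnessHistogram(sampleCounts []int) [10]int {
	var hist [10]int
	for _, n := range sampleCounts {
		// Cap at the maximum chunk capacity of 120 samples.
		size := float64(n)
		if size > maxSamplesPerChunk {
			size = maxSamplesPerChunk
		}
		// Scale fullness into bucket indices 0..9; a completely
		// full chunk lands in the last bucket.
		hist[int(size/maxSamplesPerChunk*9)]++
	}
	return hist
}

func main() {
	hist := fullnessHistogram([]int{12, 60, 119, 120, 120})
	total := 0
	for _, c := range hist {
		total += c
	}
	// Normalize absolute counts to percentages and print them out.
	for i, c := range hist {
		fmt.Printf("%3d%%-%3d%% full: %.0f%% of chunks\n",
			i*10, (i+1)*10, float64(c)/float64(total)*100)
	}
}
```

A block dominated by the low buckets suggests compaction is producing many under-filled chunks, which inflates disk usage.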
We could also consider adding this analysis to the existing tsdb analyze command.
IMHO it would fit in the scope of tsdb analyze, but let's see what others think :)
@dgl do you have any thoughts on this one?
Yeah, makes sense to include it to me.
+1 can we merge to tsdb analyze then? 🤗
Force-pushed from ecbac81 to ee54a9c
Nice, LGTM when added to tsdb analyze (:
Force-pushed from 0d5fe71 to 00bcde1
Force-pushed from 00bcde1 to 3a7828d
Nice! LGTM, just a few tiny nits. Plus: can we mention the new functionality in the tsdb analyze command help?
cmd/promtool/tsdb.go (Outdated)

	fmt.Printf("\nCompaction analysis:\n")
	fmt.Println("Fullness: Amount of chunks")
	// Normalize absolute counts to percentages and print them out
Suggested change:
-	// Normalize absolute counts to percentages and print them out
+	// Normalize absolute counts to percentages and print them out.
cmd/promtool/tsdb.go (Outdated)

		return err
	}
	chunkSize := math.Min(float64(chk.NumSamples()), maxSamplesPerChunk)
	// Calculate the bucket for the chunk and increment it in the histogram
Suggested change:
-	// Calculate the bucket for the chunk and increment it in the histogram
+	// Calculate the bucket for the chunk and increment it in the histogram.
cmd/promtool/tsdb.go (Outdated)

	}

	for _, chk := range chks {
		// Load the actual data of the chunk
Suggested change:
-		// Load the actual data of the chunk
+		// Load the actual data of the chunk.
cmd/promtool/tsdb.go (Outdated)

	}

	func analyzeCompaction(block tsdb.BlockReader, indexr tsdb.IndexReader) (err error) {
		postingsr, err := indexr.Postings("", "")
Suggested change:
-	postingsr, err := indexr.Postings("", "")
+	postingsr, err := indexr.Postings(index.AllPostingsKey())
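The suggestion works because `index.AllPostingsKey()` returns the special empty label name/value pair that selects every series, and Go allows a multi-value return to feed a call's parameter list directly. A minimal self-contained sketch (the helper functions below are stand-ins, not the real tsdb API):

```go
package main

import "fmt"

// allPostingsKey mimics index.AllPostingsKey for illustration: it returns
// the special empty label name/value pair that matches all series.
func allPostingsKey() (name, value string) {
	return "", ""
}

// postings is a stand-in for IndexReader.Postings, illustration only.
func postings(name, value string) string {
	if name == "" && value == "" {
		return "all series"
	}
	return fmt.Sprintf("series with %s=%q", name, value)
}

func main() {
	// A two-value return can be passed straight into a two-parameter
	// call, which is why Postings(index.AllPostingsKey()) compiles.
	fmt.Println(postings(allPostingsKey()))
}
```

Using the named helper instead of the literal `("", "")` makes the intent (iterate all postings) explicit at the call site.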
This commit extends the promtool tsdb analyze command to help troubleshoot high Prometheus disk usage. The command now plots a distribution of how full chunks are relative to the maximum capacity of 120 samples per chunk. Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
Force-pushed from 3a7828d to 0f47b5f
One more thing ;p
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
Thanks!
This pull request seems like a massive regression in the time spent by tsdb analyze, as it now reads the chunks. I think we should put this option behind a flag.
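The gating the reviewer suggests could look roughly like the sketch below, using the stdlib flag package rather than promtool's actual kingpin setup; the flag name --extended and the helper are hypothetical.

```go
package main

import (
	"flag"
	"fmt"
)

// plannedAnalyses returns which analysis steps to run; the expensive
// chunk-reading step only runs when explicitly requested.
func plannedAnalyses(extended bool) []string {
	steps := []string{"index analysis"}
	if extended {
		steps = append(steps, "compaction analysis (reads all chunks, slow)")
	}
	return steps
}

func main() {
	// Hypothetical flag; promtool itself uses kingpin, not stdlib flag.
	extended := flag.Bool("extended", false, "Also run the slow chunk-reading analysis.")
	flag.Parse()
	for _, s := range plannedAnalyses(*extended) {
		fmt.Println("running:", s)
	}
}
```

This keeps the default tsdb analyze run as fast as before the change, while still offering the deeper analysis on demand.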
Hi @roidelapluie, thanks for the review. What do you think about an
This commit extends the promtool tsdb analyze command to help troubleshoot high Prometheus disk usage. The command will now plot a distribution of how full chunks are relative to the maximum capacity of 120 samples per chunk.
Here is a sample output: