
Improve checkpoint series iterator. #3193

Merged · 6 commits · Jan 21, 2021

Conversation

cyriltovena (Contributor)

This is a big refactoring that allows the series iterator to use less memory when iterating over series for checkpointing purposes.

❯ benchcmp before.txt after.txt
benchmark                       old ns/op     new ns/op     delta
Benchmark_SeriesIterator-16     13062511      7110916       -45.56%

benchmark                       old allocs     new allocs     delta
Benchmark_SeriesIterator-16     28517          240            -99.16%

Improves memory usage of the checkpointer's series iterator.
benchmark                       old bytes     new bytes     delta
Benchmark_SeriesIterator-16     33072200      145433        -99.56%

Signed-off-by: Cyril Tovena <cyril.tovena@gmail.com>
@codecov-io

Codecov Report

Merging #3193 (38170c4) into master (40a637e) will decrease coverage by 0.09%.
The diff coverage is 70.14%.


@@            Coverage Diff             @@
##           master    #3193      +/-   ##
==========================================
- Coverage   63.33%   63.23%   -0.10%     
==========================================
  Files         191      191              
  Lines       16455    16514      +59     
==========================================
+ Hits        10421    10442      +21     
- Misses       5089     5128      +39     
+ Partials      945      944       -1     
Impacted Files Coverage Δ
pkg/chunkenc/memchunk.go 68.10% <41.81%> (-3.51%) ⬇️
pkg/ingester/checkpoint.go 71.42% <88.57%> (+3.14%) ⬆️
pkg/chunkenc/encoding_helpers.go 55.73% <100.00%> (-0.72%) ⬇️
pkg/chunkenc/pool.go 83.21% <100.00%> (+0.86%) ⬆️
pkg/promtail/positions/positions.go 46.80% <0.00%> (-11.71%) ⬇️
pkg/querier/queryrange/downstreamer.go 95.29% <0.00%> (-2.36%) ⬇️
pkg/logql/evaluator.go 89.87% <0.00%> (-0.36%) ⬇️
pkg/querier/queryrange/limits.go 95.83% <0.00%> (+4.16%) ⬆️

@owen-d (Member) left a comment

Nit, then LGTM.
Great work

// it's possible the stream has been flushed to storage
// in between starting the checkpointing process and
// checkpointing this stream.
return s.Next()
@owen-d (Member)

I can't think of a deadlock scenario while holding the chunk mutex here and calling s.Next(), but I am worried about it.

Comment on lines +241 to +248
defer stream.chunkMtx.RUnlock()

if len(stream.chunks) < 1 {
// it's possible the stream has been flushed to storage
// in between starting the checkpointing process and
// checkpointing this stream.
return s.Next()
}
@owen-d (Member)

Suggested change
- defer stream.chunkMtx.RUnlock()
-
- if len(stream.chunks) < 1 {
- 	// it's possible the stream has been flushed to storage
- 	// in between starting the checkpointing process and
- 	// checkpointing this stream.
- 	return s.Next()
- }
+ if len(stream.chunks) < 1 {
+ 	// it's possible the stream has been flushed to storage
+ 	// in between starting the checkpointing process and
+ 	// checkpointing this stream.
+ 	stream.chunkMtx.RUnlock()
+ 	return s.Next()
+ }
+ defer stream.chunkMtx.RUnlock()

@cyriltovena cyriltovena merged commit 07ece2b into grafana:master Jan 21, 2021

3 participants