Seal data blocks and close encoders when they are sufficiently old #19
Conversation
    // If the context is nil (e.g., when it's just obtained from the pool),
    // we return immediately.
    if b.ctx == nil {
        return
    }
    if encoder := b.encoder; encoder != nil {
        b.ctx.RegisterCloser(encoder.Close)
        if b.writable {
    if encoder := b.encoder; encoder != nil && b.writable {
        b.ctx.RegisterCloser(encoder.Close)
    } else if stream := b.stream; stream != nil && !b.writable {
        // ...
    }
Perhaps?
Hm, I feel conditioning on b.writable is easier to comprehend than mixing it up with nil checks for encoders and streams.
Fair enough. You could do something that I've done in other places successfully to avoid this type of big branching:
    cleanup := func() {
        b.ctx.Close()
        b.ctx = nil
        b.encoder = nil
        b.stream = nil
    }
    if b.writable {
        // If the block is not sealed, we need to close the encoder.
        if encoder := b.encoder; encoder != nil {
            b.ctx.RegisterCloser(encoder.Close)
        }
        cleanup()
        return
    }
    if stream := b.stream; stream != nil {
        b.ctx.RegisterCloser(func() {
            if bytesPool := b.opts.GetBytesPool(); bytesPool != nil {
                segment := stream.Segment()
                stream.Reset(m3db.Segment{})
                bytesPool.Put(segment.Head)
                bytesPool.Put(segment.Tail)
            }
            stream.Close()
        })
    }
    cleanup()
sure thing, updated.
LGTM
cc @robskillington
When the blocks are old enough, there should be no writes going into them, which means we can pull the raw data out of the encoders, store it in the blocks, and close the encoders, saving us the memory overhead of the encoders themselves.