
Seal data blocks and close encoders when they are sufficiently old #19

Merged: xichen2020 merged 7 commits into master from xichen-seal-blocks on Jul 14, 2016

Conversation

xichen2020

cc @robskillington

When the blocks are old enough, there should be no writes going into them, which means we can pull the raw data out of the encoders, store it in the blocks, and close the encoders, which saves us the memory overhead of the encoders themselves.
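
A minimal sketch of the idea, using simplified stand-in types (Segment, Encoder, dbBlock, and the seal method here are illustrative assumptions, not the actual m3db API):

// Sketch only: these are assumed, simplified stand-ins for the
// real m3db types.
type Segment struct {
    Head, Tail []byte
}

type Encoder interface {
    // Segment returns the raw encoded data accumulated so far.
    Segment() Segment
    Close()
}

type dbBlock struct {
    writable bool
    encoder  Encoder
    segment  Segment
}

// seal is called once the block is old enough that no further
// writes can arrive: keep the raw bytes, release the encoder.
func (b *dbBlock) seal() {
    if !b.writable || b.encoder == nil {
        return
    }
    b.segment = b.encoder.Segment()
    b.encoder.Close()
    b.encoder = nil
    b.writable = false
}

Once sealed, reads are served straight from the retained segment, and the encoder's buffers and state no longer need to be kept alive.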

@coveralls commented Jul 13, 2016

Coverage increased (+0.04%) to 76.488% when pulling f1efa19 on xichen-seal-blocks into ce94736 on master.

// If the context is nil (e.g., when it's just obtained from the pool),
// we return immediately.
if b.ctx == nil {
return
}
if encoder := b.encoder; encoder != nil {
b.ctx.RegisterCloser(encoder.Close)
if b.writable {
@robskillington (Collaborator) commented Jul 13, 2016

if encoder := b.encoder; encoder != nil && b.writable { 
  b.ctx.RegisterCloser(encoder.Close)
} else if stream := b.stream; stream != nil && !b.writable {
  //...
}

Perhaps?

@xichen2020 (Author)

Hm, I feel conditioning on b.writable is easier to comprehend than mixing it up with nil checks for encoders and streams.

@robskillington (Collaborator)

Fair enough. You could do something that I've done in other places successfully to avoid this type of big branching:

cleanup := func() {
    b.ctx.Close()
    b.ctx = nil
    b.encoder = nil
    b.stream = nil
}
if b.writable {
    // If the block is not sealed, we need to close the encoder.
    if encoder := b.encoder; encoder != nil {
        b.ctx.RegisterCloser(encoder.Close)
    }
    cleanup()
    return
}
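// The block is sealed: before closing the stream, return its
// buffers to the bytes pool (when one is configured) so the
// segment memory can be reused.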
if stream := b.stream; stream != nil {
    b.ctx.RegisterCloser(func() {
        if bytesPool := b.opts.GetBytesPool(); bytesPool != nil {
            segment := stream.Segment()
            stream.Reset(m3db.Segment{})
            bytesPool.Put(segment.Head)
            bytesPool.Put(segment.Tail)
        }
        stream.Close()
    })
}
cleanup()

@xichen2020 (Author)

Sure thing, updated.

robskillington and others added 6 commits July 13, 2016 19:59
* Add new iterators and complete implementation of new raw batch protocol
* Add client Fetch and FetchAll methods
* Add parallel package testing
@robskillington (Collaborator)

LGTM :shipit:

@coveralls commented Jul 14, 2016

Coverage increased (+1.1%) to 77.578% when pulling 443c904 on xichen-seal-blocks into ce94736 on master.

@xichen2020 xichen2020 merged commit c240683 into master Jul 14, 2016
@xichen2020 xichen2020 deleted the xichen-seal-blocks branch July 14, 2016 02:47
prateek pushed a commit that referenced this pull request May 11, 2018
Adding grpc server and client to m3coordinator. Currently uses a static resolver.
robskillington pushed a commit that referenced this pull request Oct 7, 2018
Set an explicit writer subscope when constructing a scope so it is easier to distinguish writer metrics.
prateek pushed a commit that referenced this pull request Oct 9, 2018
* Reset stream on timer creation, and replace math.Pow with plain multiplication

* Record remote address when decode error happens

* Preallocate enough buffer to hold aggregated metrics

* Optionally disable sample pooling

* Add options to periodically flush streams at write time to reduce processing time during global flushing

* Add configuration to specify flushEvery

* Fix pathological case triggering quadratic runtime for merging sorted lists

* Record number of timers and timer batches separately

* Address feedback
cw9 pushed a commit that referenced this pull request Oct 14, 2018
* Adding Subscribe to the KV Store interface

* Switch to using Watch from m3x

* Add error case testing
jnyi added a commit to jnyi/m3 that referenced this pull request Oct 21, 2022
[PLAT-65312] Measure E2E latency for per aggregation