
tighten lock around appending new chunks of read data in stream #28

Merged
merged 6 commits on May 2, 2020

Conversation

willscott (Contributor) commented:

We can likely save some allocations by keeping reads segmented rather than forcing them through a single contiguous slice of bytes.

There's an interleaving that becomes possible with this change: reads want to know the capacity as though writes haven't "landed", while writes want to know the remaining "un-reserved" space. I'm still figuring out how to represent that sanely.
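For illustration, a minimal sketch of what a segmented read buffer can look like — chunkBuffer and its methods below are placeholders, not the actual yamux implementation:

import "sync"

// chunkBuffer (illustrative only): keep each incoming chunk as its own
// segment instead of copying everything into one contiguous slice.
type chunkBuffer struct {
	mu       sync.Mutex
	segments [][]byte // each appended chunk is retained as-is
	length   int      // total buffered bytes across all segments
}

// Append stores a new chunk without copying it into a larger backing array.
func (b *chunkBuffer) Append(chunk []byte) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.segments = append(b.segments, chunk)
	b.length += len(chunk)
}

// Read drains buffered bytes into p, consuming segments front to back.
func (b *chunkBuffer) Read(p []byte) (int, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	n := 0
	for len(p) > 0 && len(b.segments) > 0 {
		c := copy(p, b.segments[0])
		p = p[c:]
		n += c
		b.length -= c
		if c == len(b.segments[0]) {
			b.segments = b.segments[1:]
		} else {
			b.segments[0] = b.segments[0][c:]
		}
	}
	return n, nil
}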

@willscott willscott requested a review from Stebalien May 1, 2020 00:57
@Stebalien Stebalien requested a review from dirkmc May 1, 2020 02:39
willscott (Contributor, Author) commented:

ref: #26

@willscott willscott marked this pull request as ready for review May 1, 2020 17:10
Stebalien (Member) left a comment:

Nice!

util.go Outdated
}

func (s *segmentedBuffer) Len() int {
	return int(atomic.LoadUint64(&s.len))
Stebalien (Member):

What was the motivation for the atomics? We end up doing a lot of atomic operations just to allow checking the length without taking a lock.

(I haven't profiled it so it may be fine, I'm just wondering).

willscott (Contributor, Author):

Reads will regularly check whether the length has grown; my intuition was that this would be faster than locking, reading, and unlocking.
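Sketched for illustration (the lenTracked type and its fields are hypothetical, not the actual yamux fields):

import (
	"sync"
	"sync/atomic"
)

// lenTracked (illustrative only): the total length lives in an atomic so the
// read path can poll it without taking the buffer mutex.
type lenTracked struct {
	mu  sync.Mutex
	len uint64 // written only while mu is held, read atomically anywhere
}

// Len can be called from a hot read loop without acquiring mu.
func (t *lenTracked) Len() int {
	return int(atomic.LoadUint64(&t.len))
}

// grow runs under the lock; the store is still atomic so concurrent Len
// calls never race with it.
func (t *lenTracked) grow(n uint64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	atomic.AddUint64(&t.len, n)
}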

Stebalien (Member):

Yeah, my real concern is whether that outweighs the cost of multiple atomic operations (and potential memory barriers) while we're holding a lock anyway. But it works, so I have no real objections.

willscott (Contributor, Author):

I suspect a lot of that has to do with the list of buffers rather than the length atomics 🤷‍♂️

I'm also going to revert to uint32s throughout, as they were before. There was an underflow I was confirming, which is why I bumped them up to 64 bits.
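For context on the underflow: subtracting past zero on an unsigned integer wraps around rather than going negative, e.g.:

package main

import "fmt"

func main() {
	var remaining uint32 = 10
	remaining -= 15        // an unsigned subtraction below zero wraps around
	fmt.Println(remaining) // prints 4294967291, not -5
}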

Stebalien (Member):

SGTM.

util.go Outdated (resolved)
util.go Outdated
s.bm.Lock()
defer s.bm.Unlock()

currentWindow := atomic.LoadUint64(&s.len) + atomic.LoadUint64(&s.cap) + s.pending
Stebalien (Member):

nit: I think we can drop the atomic loads here given that these values are only ever modified under the lock. But I'm really not sure about that.

willscott (Contributor, Author):

I think I agree, but it seemed like a sane pattern to keep all accesses atomic when atomicity is needed at least some of the time, to prevent unexpected glitches.
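Concretely (hypothetical names, not the actual yamux fields): because every write happens under the lock, a plain read taken while holding the same lock cannot race; the atomic load is only strictly required on lock-free paths.

import (
	"sync"
	"sync/atomic"
)

type window struct {
	mu  sync.Mutex
	cap uint64 // only ever written while mu is held
}

func (w *window) grow(n uint64) {
	w.mu.Lock()
	defer w.mu.Unlock()
	atomic.AddUint64(&w.cap, n) // atomic because lock-free readers exist
}

// currentLocked is called with mu already held: a plain read of w.cap would
// also be safe here, since all writes happen under mu; the atomic load is
// kept so every access site looks the same.
func (w *window) currentLocked() uint64 {
	return atomic.LoadUint64(&w.cap)
}

// snapshot is the lock-free path; this is why the writes above must stay atomic.
func (w *window) snapshot() uint64 {
	return atomic.LoadUint64(&w.cap)
}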

Stebalien (Member):

Fair enough.

util.go (resolved)
util.go Outdated

currentWindow := atomic.LoadUint64(&s.len) + atomic.LoadUint64(&s.cap) + s.pending
if currentWindow > max {
	// Not an error, but rather an in-flight reservation hasn't hit.
Stebalien (Member):

Could you expand on this?

Stebalien (Member):

The lock contention appears to be completely gone.

Stebalien (Member) left a comment:

OK, one little nit now that we're not using io.Copy. Then LGTM.

stream.go Outdated (resolved)
util.go Outdated
	if n == length {
		break LOOP
	}
case io.EOF:
Stebalien (Member):

I actually wonder if we should ignore the error any time we read a full message. But that's probably an edge-case that really doesn't matter.
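A hypothetical sketch of that edge case (not the actual yamux loop): a single Read may return both data and io.EOF, so the byte count is worth checking before treating the EOF as a failure.

import "io"

// readFrame is a hypothetical helper illustrating the shape of such a loop.
func readFrame(conn io.Reader, buf []byte, length int) error {
	n := 0
	for n < length {
		m, err := conn.Read(buf[n:length])
		n += m
		switch err {
		case nil:
			// keep looping until the full frame has arrived
		case io.EOF:
			if n == length {
				// the full frame arrived together with EOF; arguably this
				// need not be surfaced as an error
				return nil
			}
			return io.ErrUnexpectedEOF
		default:
			return err
		}
	}
	return nil
}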

util.go Outdated (resolved)
Co-authored-by: Steven Allen <steven@stebalien.com>
@willscott willscott merged commit 7bf6d4a into master May 2, 2020
@Stebalien Stebalien mentioned this pull request May 26, 2020